[sheepdog-users] Sheepdog disk usage
Teun Kloosterman
teunkloosterman at gmail.com
Tue May 9 15:09:54 CEST 2017
Hi all,
I am still experiencing this problem, which is blocking me from using
sheepdog in a testing environment, let alone production. Right now, for me,
the 'dog vdi rollback' command does not delete any changes made after the
snapshot; it just makes them inaccessible.
Is there anyone here using the rollback feature who has had any success in
freeing up the disk space?
Or is the problem my workflow and my understanding of the snapshot/rollback
features? Should I use clone/delete instead?
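For reference, the clone/delete cycle I have in mind would look roughly like
this (a sketch only; the "base" tag and the "debian-work" name are
illustrative):

dog vdi snapshot -s base debian           # one base snapshot of the template
dog vdi clone -s base debian debian-work  # disposable working copy
# ... run tests against debian-work ...
dog vdi delete debian-work                # drop all changes with the clone
dog vdi clone -s base debian debian-work  # fresh copy for the next run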
I have enabled the discard option on the host and slave machines. I haven't
seen any difference between the virtualized controllers I have tried: VirtIO,
SCSI, and SATA.
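For completeness, this is roughly how the guests are wired up (illustrative;
the sheepdog URI and the virtio-scsi layout shown here are just one way to
get discard requests through to the store):

qemu-system-x86_64 \
  -drive file=sheepdog:localhost:7000:debian,if=none,id=hd0,cache=none,discard=unmap \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,drive=hd0,bus=scsi0.0
# inside the guest: mount with -o discard, or trim periodically with
fstrim -av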
Regards,
Teun
PS. Some command output:
root@host:~# dog vdi rollback -s 1 debian
This operation dicards any changes made since the previous
snapshot was taken. Continue? [yes/no]: yes
root@host:~# dog vdi rollback -s 2 centos
This operation dicards any changes made since the previous
snapshot was taken. Continue? [yes/no]: yes
root@host:~# dog node info
Id Size Used Avail Use%
0 84 GB 25 GB 59 GB 30%
1 84 GB 28 GB 56 GB 33%
2 84 GB 25 GB 60 GB 29%
Total 253 GB 79 GB 175 GB 31%
Total virtual image size 40 GB
--- Ansible Run
root@host:~# dog node info
Id Size Used Avail Use%
0 84 GB 27 GB 57 GB 32%
1 84 GB 31 GB 54 GB 36%
2 84 GB 27 GB 58 GB 31%
Total 253 GB 85 GB 168 GB 33%
Total virtual image size 40 GB
root@host:~# dog vdi rollback -s 2 centos
This operation dicards any changes made since the previous
snapshot was taken. Continue? [yes/no]: yes
root@host:~# dog vdi rollback -s 1 debian
This operation dicards any changes made since the previous
snapshot was taken. Continue? [yes/no]: yes
root@host:~# dog node info
Id Size Used Avail Use%
0 84 GB 27 GB 57 GB 32%
1 84 GB 31 GB 54 GB 36%
2 84 GB 27 GB 58 GB 31%
Total 253 GB 85 GB 168 GB 33%
Total virtual image size 40 GB
--- Ansible Run
root@host:~# dog node info
Id Size Used Avail Use%
0 84 GB 33 GB 52 GB 38%
1 84 GB 37 GB 48 GB 43%
2 84 GB 32 GB 52 GB 37%
Total 253 GB 102 GB 152 GB 40%
Total virtual image size 40 GB
root@host:~# dog vdi list
  Name     Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
s centos    2   20 GB  204 MB  1.4 GB 2017-04-05 17:57   8d7d4a       2
  centos    0   20 GB  1.9 GB  700 MB 2017-04-18 11:02   8d7d57       2
s debian    1   20 GB  2.4 GB  0.0 MB 2017-04-05 17:12   b1f0b0       2
  debian    0   20 GB  2.4 GB  1.3 GB 2017-04-18 11:02   b1f0bd       2
root@host:~# dog vdi rollback -s 2 centos
This operation dicards any changes made since the previous
snapshot was taken. Continue? [yes/no]: yes
root@host:~# dog vdi rollback -s 1 debian
This operation dicards any changes made since the previous
snapshot was taken. Continue? [yes/no]: yes
root@host:~# dog node info
Id Size Used Avail Use%
0 84 GB 33 GB 52 GB 38%
1 84 GB 37 GB 48 GB 43%
2 84 GB 32 GB 52 GB 37%
Total 253 GB 102 GB 152 GB 40%
Total virtual image size 40 GB
--- Ansible Run
root@host:~# dog vdi rollback -s 2 centos
This operation dicards any changes made since the previous
snapshot was taken. Continue? [yes/no]: yes
root@host:~# dog vdi rollback -s 1 debian
This operation dicards any changes made since the previous
snapshot was taken. Continue? [yes/no]: yes
root@host:~# dog node info
Id Size Used Avail Use%
0 84 GB 36 GB 48 GB 42%
1 84 GB 40 GB 44 GB 47%
2 84 GB 35 GB 49 GB 41%
Total 253 GB 112 GB 142 GB 44%
Total virtual image size 40 GB
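In case it helps to see what the extra data is: as far as I understand the
on-disk layout (it may differ between versions), each 4 MB data object is a
file named by a 16-hex-digit object ID whose upper half is the VDI id, so
the space held by one image can be summed like this (assuming that naming):

ls /mnt/sheep/0 | head                      # sample a few object files
du -ch /mnt/sheep/0/00b1f0bd* | tail -n 1   # total for the debian VDI (id b1f0bd)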
On 22 March 2017 at 11:29, Teun Kloosterman <teunkloosterman at gmail.com>
wrote:
> Hi,
>
> I'm afraid the discard option did not help with my issue.
>
> Is there any problem with the rollback command?
> I use it extensively to reset these test machines to a vanilla state.
>
> Regards,
> Teun
>
> ---
> root@host:~# dog node info
> Id Size Used Avail Use%
> 0 94 GB 88 GB 6.3 GB 93%
> 1 99 GB 99 GB 0.0 MB 100%
> 2 94 GB 86 GB 7.6 GB 91%
> Total 286 GB 272 GB 14 GB 95%
>
> Total virtual image size 40 GB
>
> root@host:~# dog vdi list
>   Name     Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
> s centos    1   20 GB  2.2 GB  0.0 MB 2017-02-02 11:41   8d7d49       2
> s pxe       1   20 GB  3.5 GB  0.0 MB 2017-02-02 11:41   917450       2
> s debian    2   20 GB  920 MB  1.4 GB 2017-03-15 11:23   b1f0b6       2
>   debian    0   20 GB  0.0 MB  2.2 GB 2017-03-22 10:42   b1f0ca       2
>   arch      0   20 GB  1.9 GB  0.0 MB 2017-02-08 10:40   b23369       2
>
> root@host:~# dog vdi rollback -s 2 debian
> This operation dicards any changes made since the previous
> snapshot was taken. Continue? [yes/no]: yes
> Failed to create VDI debian: Failed to write to requested VDI
>
> On 31 January 2017 at 17:18, Vasiliy Tolstov <v.tolstov at selfip.ru> wrote:
>
>> You must enable discard inside the VM, enable discard in QEMU, and not use
>> fully preallocated images.
>>
>> On 31 Jan 2017 at 18:41, "Teun Kloosterman" <teunkloosterman at gmail.com>
>> wrote:
>>
>> Hi all,
>>
>> I've installed a sheepdog cluster for testing purposes on some desktop
>> PCs. These have 120GB SSDs, which is not big, I know, but should suffice.
>> They all run stock Debian Jessie with stable sheepdog 0.8.3-2 installed.
>>
>> Now I'm running into the issue that sheepdog consumes all the disk space on
>> these machines, all the way down to zero, and I cannot prevent it. The
>> machines use less than 5 GB for themselves and more than 270 GB for sheepdog
>> data. The images should consume (1.5 + 2.3) * 2 = 7.6 GB, or at most
>> 20 * 3 * 2 = 120 GB (three 20 GB images, two copies each). All data is
>> located in the /mnt/sheep/0 data folder and the .stale folder is empty.
>>
>> Can anyone explain to me what this data is and how I can manage it?
>>
>> root@host03:/# dog node info
>> Id Size Used Avail Use%
>> 0 89 GB 89 GB 0.0 MB 100%
>> 1 97 GB 91 GB 5.5 GB 94%
>> 2 97 GB 92 GB 4.6 GB 95%
>> Total 282 GB 272 GB 10 GB 96%
>>
>> Total virtual image size 20 GB
>>
>> root@host03:/# dog vdi list
>>   Name     Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
>> s centos    1   20 GB  1.5 GB  0.0 MB 2016-09-14 13:12   8d7d6a       2
>>   pxe       0   20 GB  0.0 MB  0.0 MB 2016-10-24 17:18   917450       2
>> s debian    1   20 GB  2.3 GB  0.0 MB 2016-09-14 13:12   b1f0d2       2
>>
>> Kind regards,
>> Teun Kloosterman
>>
>
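Regarding the quoted advice about preallocated images: as far as I can tell,
preallocation with the dog CLI is opt-in at create time, so images are sparse
unless created with -P (a sketch; the flag is as I understand it):

dog vdi create debian 20G      # sparse (default); space can be reclaimed via discard
dog vdi create -P debian 20G   # fully preallocated; avoid when relying on discard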