[sheepdog-users] Questions About Sheepdog Block Device (SBD)

Liu Yuan namei.unix at gmail.com
Tue May 27 04:43:20 CEST 2014


On Mon, May 26, 2014 at 10:33:38PM +0200, Valerio Pachera wrote:
> I'm thinking about iSCSI and qemu-nbd compared to this solution.
> 
> Correct me if I'm wrong, because I haven't used iSCSI yet:
> with iSCSI I can "expose" a service on a node and connect to it from
> another server (front-end).
> This will generate a block device on the front-end server.
> * I can also connect on localhost.
> * This is very similar to nbd-server/nbd-client (not to be confused with
> qemu-nbd).
> 
> With qemu-nbd I can obtain a /dev/nbd0 device on a cluster node (not on a
> front-end server).
> 
> sbd behaves like qemu-nbd: I get a /dev/sbd on one of the cluster nodes.
>

Should be /dev/{sbd0,sbd1,sbd2,...} when you attach a sheep vdi to SBD. You
can attach thousands of sheep vdis to any node that has the SBD module loaded.
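
Once attached, the device behaves like any other Linux block device. A
minimal (untested) C sketch that reads the first 4 KiB, assuming a vdi is
already attached as /dev/sbd0:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("/dev/sbd0", O_RDONLY);	/* attached sheep vdi */

	if (fd < 0) {
		perror("open /dev/sbd0");
		return EXIT_FAILURE;
	}
	/* Read the first 4 KiB, as with any block device. */
	if (read(fd, buf, sizeof(buf)) != sizeof(buf)) {
		perror("read");
		close(fd);
		return EXIT_FAILURE;
	}
	printf("read %zu bytes from /dev/sbd0\n", sizeof(buf));
	close(fd);
	return EXIT_SUCCESS;
}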

>
> 
> What's the main differences between sbd and qemu-nbd?
>

Basically, NBD and SBD are implemented at the same layer: they are both Linux
kernel modules and are exposed as Linux block devices. But...

NBD is much more complex than SBD, and SBD's performance should be better (it
would be nice if you could test and compare the two performance-wise).
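
As a rough starting point, a sequential-read timer like the one below could
be run against both devices (the paths /dev/nbd0 and /dev/sbd0 are
assumptions; O_DIRECT bypasses the page cache so that the two kernel module
paths are what actually gets measured):

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)	/* 1 MiB per read */
#define NR_READS 256		/* 256 MiB total */

static double read_mibps(const char *path)
{
	void *buf;
	struct timespec t0, t1;
	int i, fd = open(path, O_RDONLY | O_DIRECT);

	/* O_DIRECT needs an aligned buffer. */
	if (fd < 0 || posix_memalign(&buf, 4096, BUF_SIZE) != 0) {
		perror(path);
		exit(EXIT_FAILURE);
	}
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NR_READS; i++)
		if (pread(fd, buf, BUF_SIZE, (off_t)i * BUF_SIZE) < 0) {
			perror("pread");
			exit(EXIT_FAILURE);
		}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	close(fd);
	free(buf);
	/* Each read is 1 MiB, so reads per second == MiB/s. */
	return NR_READS / ((t1.tv_sec - t0.tv_sec) +
			   (t1.tv_nsec - t0.tv_nsec) / 1e9);
}

int main(void)
{
	printf("nbd: %.1f MiB/s\n", read_mibps("/dev/nbd0"));
	printf("sbd: %.1f MiB/s\n", read_mibps("/dev/sbd0"));
	return 0;
}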

If you run a VM on a sheepdog vdi via NBD, the code path looks like

VM -> FS file interface -> NBD kernel module -> QEMU NBD client -> QEMU Sheepdog
protocol -> Sheep daemon

but for SBD,

VM -> FS file interface -> SBD kernel module -> Sheep daemon
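
To make the shorter path concrete: the SBD module opens a TCP connection to
the local sheep daemon (port 7000 by default) and speaks the sheepdog object
protocol directly, with no userspace proxy in between. A rough userspace
illustration of that first step (the request header below is a simplified
stand-in, not the real struct sd_req from sheepdog_proto.h):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Simplified stand-in for the sheepdog request header; the real
 * layout is struct sd_req in sheepdog_proto.h. */
struct fake_sd_req {
	uint8_t  proto_ver;
	uint8_t  opcode;	/* e.g. a read-object request */
	uint16_t flags;
	uint32_t data_length;
	uint64_t oid;		/* object id inside the vdi */
	uint64_t offset;
};

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(7000),	/* sheep's default port */
	};
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect to sheep");
		return 1;
	}
	/* A real client now sends a request header plus payload and
	 * reads back the matching response. */
	struct fake_sd_req req = { .proto_ver = 1, .opcode = 0 /* placeholder */ };
	if (write(fd, &req, sizeof(req)) != (ssize_t)sizeof(req))
		perror("write");
	close(fd);
	return 0;
}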

Besides that, SBD is sheepdog-specific, which means it can be more aware of
sheepdog features, for example hyper volumes (16PB support).

Thanks
Yuan


