Solved: Hyper-converged infrastructure (I missed the storage part)
-
@msff-amman-Itofficer said in Hyper-converged infrastructure (I missed the storage part):
Why do I sometimes hear about users running BTRFS or ZFS inside VMs and then allocating that storage to other VMs?
If you mean running a single storage server on top of the hypervisor and then providing iSCSI or NFS storage for other LOCAL VMs to mount, this would border on the insane.
-
This is what I was looking for.
Because when I heard about what they are doing and how this is done = دخلت بل حيط
It is an Arabic saying, literally meaning that you ran into a brick wall.
I could no longer compute. I'm aware that I don't know everything, but this method of virtualizing storage in VMs and then sharing it with other VMs appeared very complex to me; yet I was very curious and wished to attempt it. I reckon it is better to take a step back and focus on more realistic goals.
-
Are you familiar with DRBD in the Linux world?
-
I thought I saw DRBL, and I was like YAY, I know that one because I worked on the Clonezilla project a long time ago.
But alas, it is DRBD. No, I am not familiar with it, but I am reading more about it now.
-
@msff-amman-Itofficer said in Hyper-converged infrastructure (I missed the storage part):
I thought I saw DRBL, and I was like YAY, I know that one because I worked on the Clonezilla project a long time ago.
But alas, it is DRBD. No, I am not familiar with it, but I am reading more about it now.
LOL
-
Then look at someone like @scale, who implements their own storage system directly on the hypervisor itself. So the hypervisor is the only component that needs to be spread out to many nodes. It's kind of like DRBD, being in the hypervisor, but it is scale-out rather than just a two-node mirror. The storage and the hypervisor and the management are all really one thing. Very tightly integrated.
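For anyone who has not met DRBD before, here is a minimal sketch of the "two-node mirror" idea it provides, assuming a hypothetical pair of nodes (nodeA/nodeB), a spare partition /dev/sdb1 on each, and made-up addresses; it is only meant to show the shape of the setup, not a production recipe.

```sh
# Hypothetical two-node DRBD mirror; node names, disk paths, and IPs are assumptions.
# The same resource file goes on both nodes (/etc/drbd.d/r0.res):
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
  device    /dev/drbd0;      # replicated block device the OS actually uses
  disk      /dev/sdb1;       # local backing disk on each node
  meta-disk internal;
  on nodeA { address 10.0.0.1:7789; }
  on nodeB { address 10.0.0.2:7789; }
}
EOF

drbdadm create-md r0          # initialize DRBD metadata (run on both nodes)
drbdadm up r0                 # bring the resource up (run on both nodes)
drbdadm primary --force r0    # once, only on the node that holds the initial data
```

Scale-out systems like the one described above do a similar replication job, but across many nodes and inside the hypervisor layer itself, rather than as a separate block device you assemble and manage yourself.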
-
@scottalanmiller said in Hyper-converged infrastructure (I missed the storage part):
The storage and the hypervisor and the management are all really one thing. Very tightly integrated.
Yeah, I am familiar with them. I like the fact that they use KVM, and they explain it by saying that standalone ESXi truly became replaceable, and VMware knows that; that is why they are pushing backup/replication/storage/network but not the core virtualization. Because, as I believe, we have reached the point where KVM/Xen can deliver just as good performance running virtual machines.
But it's the other stuff that is important now.
And we can no longer say stuff like "oh well, I have run ESXi for 10 years and I never had a problem," because the same will apply to KVM; it's just that people are not using it that much.
-
@msff-amman-Itofficer said in Hyper-converged infrastructure (I missed the storage part):
But it's the other stuff that is important now.
Like the storage and the management layers.
-
@msff-amman-Itofficer said in Hyper-converged infrastructure (I missed the storage part):
And we can no longer say stuff like "oh well, I have run ESXi for 10 years and I never had a problem," because the same will apply to KVM; it's just that people are not using it that much.
We've been saying that about Xen for even longer.
-
@scottalanmiller said in Hyper-converged infrastructure (I missed the storage part):
@msff-amman-Itofficer said in Hyper-converged infrastructure (I missed the storage part):
Why do I sometimes hear about users running BTRFS or ZFS inside VMs and then allocating that storage to other VMs? Is this normal?
No, that is neither normal nor sensible unless you are building high availability clustering within those VMs. But you would not likely use BtrFS or ZFS for this; more typically you would use a clustered file system like GFS2. More or less, anytime you hear of someone using ZFS it's for something incredibly stupid. ZFS is amazing and I've been working with it for twelve years, but the recent belief that it is magic is just another example of SMB IT people hearing a word and deciding that since they don't understand it, it must be magic.
There are good times to do what you describe here; it's called a VSA approach. Vendors like @StarWind_Software and @HPEStorageGuy do this, but they don't do it with ZFS; they have custom software that handles HA clustering, and the fact that they do it in a VM is just a limitation of their access to the underlying hypervisor. VMware used to do this, but has VSAN now. StarWind only does this on non-Hyper-V platforms; on Hyper-V they skip the VM and run right on the hypervisor itself.
Building your own HA storage VMs on top of your hypervisor is certainly possible, but it is most definitely an "expert level" process. And what is available to build for yourself is quite limited. For all intents and purposes, doing this in this manner will only be done with either the StarWind or HPE VSA products; this is exactly what they are built for, and they both do it very well.
Just a tiny remark: depending on how the VM does the actual storage virtualization, it can be either slow (moving data over a vSwitch and using TCP) or extremely fast (PCI passthrough or SR-IOV network and storage adapters inside the VM, with the VM's vCPU(s) running in polling mode), so it's not always beneficial to have an in-kernel implementation of something.
https://www.starwindsoftware.com/starwind-virtual-san-ovf
FYI
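To make the fast path concrete, here is a rough sketch, assuming a KVM/libvirt host, a storage VM hypothetically named "storage-vm", and a made-up PCI address, of handing a physical adapter straight to the VM so its data path skips the vSwitch and the host TCP stack:

```sh
# Hypothetical PCI passthrough of a NIC/HBA to a storage VM.
# The domain name "storage-vm" and the PCI address are assumptions.
cat > hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device storage-vm hostdev.xml --config   # takes effect on next boot
```

SR-IOV is the same idea, except the VM gets a virtual function of the card rather than the whole device, so the host and other guests can keep using it too.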
-
@scottalanmiller said in Hyper-converged infrastructure (I missed the storage part):
Then look at someone like @scale, who implements their own storage system directly on the hypervisor itself. So the hypervisor is the only component that needs to be spread out to many nodes. It's kind of like DRBD, being in the hypervisor, but it is scale-out rather than just a two-node mirror. The storage and the hypervisor and the management are all really one thing. Very tightly integrated.
I did something similar. I had Gluster running between the hosts, and the VMs were stored on the bricks. However, I've since switched to just running Gluster in the VMs that need replication. Most are replicated at the application level, but a few need data replicated.
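For anyone curious what that looks like in practice, here is a minimal sketch of a two-way replicated Gluster volume; the hostnames, brick paths, and mount point are assumptions, and running it between hypervisor hosts versus inside the guest VMs is the same procedure, only where the bricks live changes.

```sh
# Hypothetical two-node replicated Gluster volume; hostnames and paths are assumptions.
gluster peer probe host2                  # run from host1 to form the trusted pool
gluster volume create gv0 replica 2 \
    host1:/bricks/gv0/brick host2:/bricks/gv0/brick
gluster volume start gv0
mount -t glusterfs host1:/gv0 /mnt/gv0    # mount the volume from any client or VM
```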