VMware Community Homelabs
-
@Pete-S said in VMware Community Homelabs:
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710s at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost-effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
Depends on how many VMs you have. An R710 can easily handle twenty of those $5 Vultr VMs. That's $1200 per year for VMs in a VPS.
No way you can rack that up in electricity.
Plus the needs of a lab VM are often very different from the needs of a production one. Prod needs fast disks and fast CPU, and "just enough" RAM. Labs need very little CPU and disk performance, but lots of RAM.
And just one workload like NextCloud could cost a fortune even on Vultr, but be nearly free on an R710.
We have old R510 units that could run 30+ VMs, easily. A good 50% more than @Pete-S is estimating. And adding RAM alone would allow us to up that number significantly.
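The electricity-vs-VPS math above can be sketched in a few lines. The $5/month VPS price and the twenty-VM count come from the thread; the R710's average power draw and the electricity rate below are invented assumptions for illustration:

```python
# Rough annual cost comparison: 20 small VPS instances vs. an R710 at home.
# ASSUMPTIONS (not from the thread): ~250 W average draw, $0.12/kWh.

VPS_MONTHLY_USD = 5
VM_COUNT = 20
R710_AVG_WATTS = 250
USD_PER_KWH = 0.12
HOURS_PER_YEAR = 24 * 365

vps_annual = VPS_MONTHLY_USD * VM_COUNT * 12          # $1200/year
r710_kwh = R710_AVG_WATTS / 1000 * HOURS_PER_YEAR     # ~2190 kWh/year
r710_annual = r710_kwh * USD_PER_KWH                  # ~$263/year

print(f"VPS fleet:  ${vps_annual}/year")
print(f"R710 power: ${r710_annual:.0f}/year")
```

Even doubling the assumed wattage leaves the R710's power bill well under the VPS fleet's annual cost, which is the point being made.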
-
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... you know, two birds, one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Even with CFD-type work, where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of maintenance and utilities against just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like 6 months or so, then it might be cheaper on site. It really depends on the workload.
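The capex-vs-cloud tradeoff described above comes down to a break-even calculation. Every price and figure in this sketch is an invented assumption, not a number from the thread:

```python
# Back-of-the-envelope break-even: owning a compute cluster vs. renting
# on-demand cloud nodes. ALL figures below are made-up assumptions.

CLUSTER_CAPEX_USD = 100_000        # hypothetical 20-node cluster purchase
AMORTIZATION_YEARS = 3
CLUSTER_ANNUAL_OPEX_USD = 20_000   # power, cooling, maintenance (assumed)
CLOUD_NODE_HOUR_USD = 1.50         # assumed on-demand price per node-hour
NODES = 20

# Owning costs the same whether the cluster is busy or idle.
cluster_annual = CLUSTER_CAPEX_USD / AMORTIZATION_YEARS + CLUSTER_ANNUAL_OPEX_USD

# Renting scales with how many node-hours you actually consume.
def cloud_annual(busy_hours_per_year: float) -> float:
    return busy_hours_per_year * NODES * CLOUD_NODE_HOUR_USD

break_even_hours = cluster_annual / (NODES * CLOUD_NODE_HOUR_USD)
print(f"Own:  ${cluster_annual:,.0f}/year regardless of utilization")
print(f"Rent 500 busy hours: ${cloud_annual(500):,.0f}/year")
print(f"Break-even: ~{break_even_hours:,.0f} busy hours/year")
```

Under these assumed prices, occasional bursts favor renting, while a months-long solve running 24/7 blows past the break-even point and favors owning, which matches the point being made.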
-
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
There's a ton of stuff out there on IRC, Reddit, Slack, Telegram, and other mediums for the other types of servers.
https://www.reddit.com/r/homelab/ I mean this is literally people just posting their home labs and specs. I'm not sure what else you want?
-
@stacksofplates said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, Two birds one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Even with CFD-type work, where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of maintenance and utilities against just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like 6 months or so, then it might be cheaper on site. It really depends on the workload.
What home lab is going to be serving 5 billion requests per day? You're talking production, not home lab.
-
@scottalanmiller said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
Generally, I lab things out for a little while and then trash it. My costs typically never exceed a few bucks a month in Azure, and on AWS the free tier and costs are even better when buying those $300 credits for $20 or whatever it is. Mix and match. You get the most experience and bang for your buck.
I've found for me, that some of the best lab stuff is not setting up and tearing down, but setting up to keep operating. You get a whole different level of experience when you keep it running, patch it, maintain it, etc.
I do that when it's something I'm using past the testing/labbing experience. But then at that point it's not so much a test lab anymore.
It's hard to keep something going that you never really use... you typically forget about it because patching can be automatic, and when it isn't, even maintaining something you don't use much is kind of... I don't know, wasteful IMO. You could be putting those resources toward something you will be actively maintaining and using while learning (given we're talking about a platform test lab, which means that hardware is dedicated to that purpose). Perhaps it makes sense if it's a platform, like wanting to run OpenStack to get experience, since many large companies use that (not sure about SMB).
I do get the other side too. There are many things in SMB you can better lab or experience on your own hardware, because that's where most SMBs are coming from, and many either lack the need to move away from it, or lack the competence and culture to move to the cloud.
Either way, it depends on where you want to go with your career and what environments you want to work with.
-
@Dashrender said in VMware Community Homelabs:
So @Obsolesce is talking about cloud - AWS/Azure, but what about other VPS providers like Vultr?
Those other VPS providers are irrelevant where I work. It's all AWS, Azure, GCP. If it's not one of those, they're running their own private cloud with OpenStack. Therefore, I'm not going to waste time learning a service I would never use outside of personal use (and yes, I have used it personally, but not as a lab). There's always more to learn in AWS or Azure, for example. Time spent labbing in Vultr for career development, I feel, would be better used elsewhere.
That's just me though... it's because of where I'm currently working, and also because of any future employer I would choose. YMMV.
-
@scottalanmiller said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
Most IT people I know get their hardware for nothing where they work. So they have stuff like R710s at home.
So setting up a home lab isn't a cost thing, more a question of having the space.
The cost of power to run that R710 can likely make it more cost-effective to run the VMs in a VPS. Plus that gets rid of the noise pollution.
Depends on how many VMs you have. An R710 can easily handle twenty of those $5 Vultr VMs. That's $1200 per year for VMs in a VPS.
No way you can rack that up in electricity.
Plus the needs of a lab VM are often very different from the needs of a production one. Prod needs fast disks and fast CPU, and "just enough" RAM. Labs need very little CPU and disk performance, but lots of RAM.
And just one workload like NextCloud could cost a fortune even on Vultr, but be nearly free on an R710.
We have old R510 units that could run 30+ VMs, easily. A good 50% more than @Pete-S is estimating. And adding RAM alone would allow us to up that number significantly.
If I need a bunch of VMs to test/lab things, I'll use Hyper-V on my laptop (shouldn't have to mention this, but I'm sure it'll be pointed out: not talking about platform labs here). Lots of RAM in PCs is much more doable now and can take you pretty far. Some business laptops give you 64 GB of RAM... that's more than enough to set up some labs.
-
@stacksofplates said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@FATeknollogee said in VMware Community Homelabs:
Why don't we see this from the other Type-1 groups?
One thing the VMware "community" is very good at is grass roots movement.
From @lamw on Twitter:
https://www.virtuallyghetto.com/2020/02/vmware-community-homelabs-project.html
Because there is this thing called the cloud, which is way cheaper and way more beneficial and efficient than spending thousands on hardware upfront.
Whatever it is, excluding some very specific scenarios, I'd rather learn and build it in Azure or AWS... You know, Two birds one stone.
The cloud is flexible but not cheap. It's only cheaper if your needs are very small or very dynamic. If you need performance or storage, cloud instances will get proportionally even more expensive.
And they know it of course. That's why they expand their services to make it harder to move the workload to another cloud provider or your own datacenter.
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Even with CFD-type work, where you'd normally have a cluster, you have to weigh the capex of the servers and the ongoing cost of maintenance and utilities against just spinning up 10-20 nodes when you need them. However, if you get into long-running solves, like 6 months or so, then it might be cheaper on site. It really depends on the workload.
I think you're wrong. 5 billion hits per day is Google-type traffic (as of a couple of years ago). And Google doesn't use the public cloud; they use their own servers. As do Facebook, Amazon, eBay, Microsoft, etc. Companies like Backblaze don't use the cloud either.
The one company I know of that I would expect to run their own servers but doesn't is Netflix. They're on AWS. LinkedIn is also moving away from their own servers, but that's not surprising since Microsoft owns them. I'm not sure they're actually running on Azure; it could be that they're just using Microsoft servers instead of their own.
Finance can calculate what's best, but just because you own your server park doesn't mean you have to pay for it up front. It doesn't mean you don't have geo-redundancy or that it's all in one place. It doesn't mean you have to employ people who swap hardware 24/7. And it doesn't mean you can't use cloud servers when you need to.
-
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
-
@Obsolesce said in VMware Community Homelabs:
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
I'm not sure you can say that Microsoft is on the public cloud when it's their servers and they own the hardware.
If the public cloud was cheaper than running their own hardware, Microsoft should move O365 and all their services to AWS. They would make a lot of money and not having to buy their own servers would be a great benefit.
-
@Pete-S said in VMware Community Homelabs:
I'm not sure you can say that Microsoft is on the public cloud
Yes, I can say that:
https://uk.pcmag.com/windows-10/118132/microsofts-cloud-how-the-company-eats-its-own-dog-food
-
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
I'm not sure you can say that Microsoft is on the public cloud when it's their servers and they own the hardware.
If the public cloud was cheaper than running their own hardware, Microsoft should move O365 and all their services to AWS. They would make a lot of money and not having to buy their own servers would be a great benefit.
Google/AWS/Azure are all public clouds, and I assume they all run their own stuff on their own systems - it would be crazy for them not to, and definitely bad for PR - look, our own shit isn't even good enough for us to run on.
-
@stacksofplates said in VMware Community Homelabs:
It 100% depends on the type of work. If you're taking in 5 billion requests per day, there's no way it's going to be cheaper on site than in public cloud.
Actually, the size of the load doesn't matter much. If it's 5 billion every day, the cloud will be the worst option. If it's 5 billion one day and 1 billion the next, then back up, and all over the place, that alone is when public cloud competes. No workload becomes public-cloud viable based on size, ever. Only on elasticity. Elasticity is the sole benefit of cloud architecture. It's a huge one, but the only one.
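That elasticity point can be put in a toy model. The prices and daily loads below are invented assumptions: owned capacity is a fixed cost sized for the peak, while cloud billing tracks actual consumption:

```python
# Toy model: cloud pricing only beats owned capacity when the load varies.
# ASSUMED prices: owned peak-sized capacity costs $100/day whether busy or
# idle; cloud charges $30 per billion requests actually served.

OWNED_COST_PER_DAY = 100
CLOUD_COST_PER_BILLION = 30

def cloud_cost(daily_billions: list[float]) -> float:
    """Cloud bill scales with actual requests served."""
    return sum(d * CLOUD_COST_PER_BILLION for d in daily_billions)

def owned_cost(daily_billions: list[float]) -> float:
    """Owned capacity is paid for every day, even when idle."""
    return OWNED_COST_PER_DAY * len(daily_billions)

steady = [5, 5, 5, 5, 5, 5, 5]   # 5B every day: peak equals average
bursty = [5, 1, 1, 1, 1, 1, 5]   # same peak, mostly idle

print(cloud_cost(steady), owned_cost(steady))   # 1050 vs 700: owning wins
print(cloud_cost(bursty), owned_cost(bursty))   # 450 vs 700: cloud wins
```

Both weeks have the same 5-billion-request peak, so owning costs the same either way; only the bursty week makes the cloud cheaper, illustrating that it is the variance, not the size, that decides.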
-
@Obsolesce said in VMware Community Homelabs:
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
Right, but to them it is a private cloud, not public.
-
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
I'm not sure you can say that Microsoft is on the public cloud when it's their servers and they own the hardware.
If the public cloud was cheaper than running their own hardware, Microsoft should move O365 and all their services to AWS. They would make a lot of money and not having to buy their own servers would be a great benefit.
Google/AWS/Azure are all public clouds, and I assume they all run their own stuff on their own systems - it would be crazy for them not to, and definitely bad for PR - look, our own shit isn't even good enough for us to run on.
They are all public if you aren't those companies. They are all private if you are.
If I make my own OpenStack deployment for me, that's a private cloud. If I let you access it, it is still my private cloud, but you are using my public cloud.
All of those vendors do eat their own dog food, but they all get better dog food with more options and lower cost than their customers get. So their use of their own clouds is nothing like our use of it.
-
@scottalanmiller said in VMware Community Homelabs:
@Dashrender said in VMware Community Homelabs:
@Pete-S said in VMware Community Homelabs:
@Obsolesce said in VMware Community Homelabs:
@Pete-S I know that Microsoft eats its own dog food. Can't speak for the others.
I'm not sure you can say that Microsoft is on the public cloud when it's their servers and they own the hardware.
If the public cloud was cheaper than running their own hardware, Microsoft should move O365 and all their services to AWS. They would make a lot of money and not having to buy their own servers would be a great benefit.
Google/AWS/Azure are all public clouds, and I assume they all run their own stuff on their own systems - it would be crazy for them not to, and definitely bad for PR - look, our own shit isn't even good enough for us to run on.
They are all public if you aren't those companies. They are all private if you are.
If I make my own OpenStack deployment for me, that's a private cloud. If I let you access it, it is still my private cloud, but you are using my public cloud.
All of those vendors do eat their own dog food, but they all get better dog food with more options and lower cost than their customers get. So their use of their own clouds is nothing like our use of it.
I don't understand how that makes their use of it different other than the spend part - is your claim that that spend is so significant that it actually is what makes it different?
-
@Dashrender said in VMware Community Homelabs:
I don't understand how that makes their use of it different other than the spend part - is your claim that that spend is so significant that it actually is what makes it different?
Absolutely. They easily spend half, or even nothing at all. They have essentially unlimited free access to all the unused capacity of their public cloud. And they can buy hardware specific to their needs. And they can have different APIs, tools, etc.
Pretty much it changes every aspect of everything. The real question is... in what way would it be the same as us using it as public end users?
-
@Dashrender imagine it another way.... public hardware.
You have a customer and they are thinking of buying a server for their office. They spend $10K on it.
Then you use the spare capacity on what they bought for free, because there is excess capacity.
For you, the decision to use 100% free, unused capacity is a no-brainer (as long as they let you, obvs.). For them, the decision to purchase a $10K server is a huge deal.
Now imagine Amazon. They made billions selling their cloud services. Their cloud is a profit center, not a cost center. That they get to use it to run their own business is essentially all found money!
To their customers, it is 100% always a cost center.
-
Another thing to think about between private and public clouds... public clouds offer fully elastic capacity. You can pay for zero, or a tonne, at any moment. Private clouds have flexible capacity between workloads, but a set total capacity. You can never pay zero; you have to own the infrastructure.
To Amazon, MS, and Google, their total hardware capacity is not elastic. If they want to grow past maximum capacity, they have to invest. If they don't need that capacity any longer, they can't shrink it. They can't "stop paying". They can power down, but that's nothing compared to not having to rent the equipment any longer.
To their customers, they can simply release unneeded resources and reduce consumption, even to zero.
To customers, the capacity is elastic. To the owners, it is a zero-sum game.