Nobody seriously uses AWS/GCP/Azure just for a couple of VMs or dedicated servers alone. If someone can run their full workload on e.g. Hetzner without much hassle, then they shouldn't be using any of the other cloud platforms in the first place, as they'd definitely be overpaying

EDIT: I want to clarify that I unfortunately do know some companies that use the big 3 as simple VPS providers, but it seems that everybody here agrees it's a waste of money. That's one of my main points, and it's also why comparing the big ones to Hetzner or any other standalone VPS/dedicated server provider is pointless: they serve different use cases

I think you're seriously underestimating the number of cloud customers that do a simple lift and shift

(The most egregious was a system peaking at maybe 5 hits a second during the month-end busy period living in multiple pods on a GCP Kubernetes cluster.)
I've done exactly that at a previous startup. Granted, it was 10 years ago, but going from racked infra to AWS ended up being half the cost for what was effectively twice the infra (we built out full geo-redundancy at the same time)

Most of my clients do just that - just EC2 on AWS

Of course, my experience may not represent the average case, but it is certainly not "nobody". I believe that most do it because AWS/Azure is the "safe option"

Choosing AWS/Azure is the modern version of "Nobody ever gets fired for buying IBM"

--
I just recently tried Hetzner myself and I love the experience so far. I am aware that I am comparing apples and oranges here, but Hetzner's UI is just so fast and simple compared to AWS, and the pricing is great. Even their invoices are clean and understandable

If they're going to do that why not at least choose Lightsail?
Not all businesses decide that's a risk worth mitigating, but some do

I know cloud can make sense but not like this

Hmm, anything that doesn't have insanely huge traffic and requirements does, and the major cloud vendors are still cheap and easy enough for those use cases

Hetzner seems to fit the "not big enough to get major discounts and support but large enough to have considerable cloud bills" customer and that is fine

[1] https://aws.amazon.com/lightsail/
Many companies and people do host loads on EC2 that would be better served on dedicated hardware, because "cloud"

> Hetzner without much hassle then they shouldn't be using any of the other cloud platforms in the first place as they'd be definitely overpaying


The ability to provision, de-provision, clone, load balance and manage without talking to people, waiting for hardware, or really even having to understand in detail what is going on (yes, this is bad, but still) is one of the big reasons cloud is popular. Many dedicated hosts have gotten a lot better in this area
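Hetzner's own hcloud CLI is a decent illustration of that self-service loop; a rough sketch (the server type, image and location names are examples of what's on offer, check current availability before copying):

    # create, snapshot, load-balance and tear down - no humans involved
    hcloud server create --name web-1 --type cx32 --image ubuntu-24.04 --location fsn1
    hcloud server create-image --type snapshot --description "web-1 base" web-1
    hcloud load-balancer create --name lb-1 --type lb11 --location fsn1
    hcloud load-balancer add-target lb-1 --server web-1
    hcloud server delete web-1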

It actually does happen. They build some software, deploy it on a VM, and have said software use a cloudy database service that removes the headache of maintaining backups, standbys, point-in-time recovery, and securing data at rest

I have a couple of shell scripts that do all of that and use Hetzner, but I can imagine some org with enough money not caring about the price for the convenience of somebody else taking care of their data
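The nightly-snapshot half of such a script can be genuinely small; a minimal sketch assuming Postgres, with the database name and the rclone remote as placeholders (true point-in-time recovery would additionally need WAL archiving, e.g. via wal-g):

    #!/bin/sh
    set -eu
    STAMP=$(date +%F)
    # compressed custom-format dump of the (placeholder) database
    pg_dump -Fc mydb > /var/backups/mydb-$STAMP.dump
    # push off-box so a dead server doesn't take the backups with it
    rclone copy /var/backups/mydb-$STAMP.dump offsite:db-backups/
    # keep two weeks of local copies
    find /var/backups -name 'mydb-*.dump' -mtime +14 -delete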

They already pay for the cloud and someone to manage their cloud stuff I bet they would shell out half that if you offered your scripts

I think that just shows how nonsensical these cloud providers really are when you can just write some scripts to handle it
Believe me, I do ;) I adapt those to the particular products I develop for my clients. However, it's not worth my time to bother releasing them in generic form. Suddenly I would have to satisfy a bazillion specific constraints and requirements from generic users

Glad that I'm regularly seeing how awesome this company is lately

- Someone that used to fry lil hetzner servers for fun
I understand that you never received attacks of such 'large' scale, but it takes $5 to take a Hetzner server down (assuming you don't know how to do it yourself)
https://www.cloudflare.com/products/cloudflare-spectrum/
https://krebsonsecurity.com/2018/04/ddos-for-hire-service-we..

should be enough
- EX44: Intel Core i5-13500 / 64 GB / 2x512 GB NVMe - From €44 [2]
- EX101: Intel Core i9-13900 / 64 GB / 2x1.92 TB NVMe - From €84 [3]
[1] https://www.hetzner.com/dedicated-rootserver/ax52
[2] https://www.hetzner.com/dedicated-rootserver/ex44
[3] https://www.hetzner.com/dedicated-rootserver/ex101
- EX101: Intel Core i9-13900 / 64 GB / 2x1.92 TB NVMe - From €84
- AX101: AMD Ryzen 9 5950X / 128 GB / 2x3.84 TB NVMe - From €101
Increasing the memory to 128 GB, i.e. to two DIMMs per channel, drops the memory speed, more severely for AMD (DDR5-3600) than for Intel (DDR5-4400)

Overclocking the memory, like in gaming computers, would be unacceptable in server computers
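If you want to verify whether a given box actually took that speed hit, it's visible from the OS; dmidecode (needs root) reports both the modules' rated speed and what the board configured:

    # compare "Speed" (rated) against "Configured Memory Speed" (actual)
    dmidecode -t memory | grep -i speed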

However, my first one did often reboot randomly and the support wasn't very helpful. They told me to just rent another one, which I did. The second one rebooted randomly once in about a year. I guess the first one went on auction and still happily reboots

Hetzner feels like a hard-discount cloud provider. I still prefer them over AWS or Azure for non-critical workloads with a small budget


I asked them about one of the incidents, and they said that the breaker serving the rack had tripped. I would guess that is a fairly common cause of this problem

Another issue is disk failures. They replace the disk incredibly quickly (<1 hr)
I've only become a customer again since their VPS cloud offering, and I've actually been recommending it because it has been flawless for me for years

No, but I get it. I've had a lot of failures in general with spinning disks. I think it has to do with SSDs and NVMes being much better at telling you how much juice they have left in them. I don't necessarily think it's a problem of Hetzner alone, though, as disks at other hosts have failed on me too

I also used to maintain a couple of "plain old offices", and hard disk failure is sadly just all around us when you are using bare metal

Another reason for Kubernetes!
They do provide a CSI driver for Kubernetes for their block storage, and private networking for both too

you can even have the masters on VMs and the nodes on bare metal
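Getting that CSI driver going is only a few commands; the secret name and manifest URL below follow the hetznercloud/csi-driver README as I remember it, so verify against the current docs:

    # the driver expects the API token in a secret in kube-system
    kubectl -n kube-system create secret generic hcloud --from-literal=token=$HCLOUD_TOKEN
    # apply the upstream manifest (pin a tagged release rather than main for real use)
    kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/main/deploy/kubernetes/hcloud-csi.yml
    # after that, PVCs with storageClassName: hcloud-volumes are backed by their block storage
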
Personally I only had some network issues with them

Hetzner bare metal has unlimited bandwidth

If you pull a short straw, your box will be sharing bandwidth with a few BitTorrent seedboxes or someone's video CDN node
That being said, I run much smaller projects and servers and have not worked at a scale that really requires heavy workloads generating thousands in monthly bills at GCP

So I think most devs being conditioned to start their first projects on the free-tiers of most cloud providers makes it really difficult for them to move to their own servers when they need it

https://www.hetzner.com/sb
For example, I was running some experiments that required lots of RAM. Right now you can get a server with 256 GB RAM for €60/month

https://til.simonwillison.net/llms/llama-7b-m2
The channel is well worth a subscribe too

Servers start at $9 per month. A comparable example:
Dual Xeons - 36 cores / 72 threads - 128 GB memory - dual 1 TB NVMe - 5 IPs - $80 per month, $0 setup. The same setup with dual 2 TB NVMe is $100 per month

I'm colocating a couple of servers there for $40 per month each; bandwidth is 1 Gbit unmetered and comes with 5 IPs. A couple of 1Us and towers. I recently bought a used 1U server off Amazon for $400. It has 48 cores, 96 GB memory and 4x1 TB drives, and came with a one-year warranty on the components

Hetzner was solid, but their network was sketchy at times

Just clicked; unfortunately it is out of stock.

> I'm colocating a couple of servers there for $40 per month each
Are you living nearby? Or did you ship them the server and they installed it?
You can check back; they update the list as server availability changes. Other providers there are Dedispec and Joesdatacenter, which may have something in stock that you're looking for

joesdatacenter.com (Kansas City) has single-server colo for $50 a month

Haven't found anything by Googling, so was wondering if anyone here works somewhere that does this

I'm used to cloud VMs where if one dies, I can quickly spin up another one effortlessly (I never have to contact support or anything like that)

Some failures I experienced and had to monitor/detect myself were: overheating (they replaced the thermal paste when I told them I saw strange readings from the CPU stats), RAID disk failure, and high SSD wear (i.e. partial failure, server still running; they replaced the failed disks after I told them)

Most of the time the issues have been resolved within 1-4 hours on the low-cost Kimsufi and SoYouStart offers, even on weekends and at night. Often, when the server is running, the intervention requires a shutdown

I'm quite happy with this as I am highly technical in those subjects and like to look under the hood, but with dedicated servers you really have to do some more maintenance/monitoring/planning yourself
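That monitoring doesn't have to be fancy, either; a crude cron-able sketch (device names and the alert address are placeholders, and you'd want smartmontools, lm-sensors and a working mail command installed):

    #!/bin/sh
    # crude dedicated-server health check
    for d in /dev/sda /dev/sdb; do
        smartctl -H "$d" | grep -q PASSED || echo "SMART failing on $d" | mail -s "disk alert" you@example.com
    done
    # an underscore in the [UU] status line means a degraded md array
    grep -q '_' /proc/mdstat && echo "md array degraded" | mail -s "raid alert" you@example.com
    # eyeball the CPU package temperature (needs lm-sensors)
    sensors | grep -i 'Package id 0'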

> They don't monitor other health issues however (how would they since you are running your own system?) and therefore don't do anything before they detect a "down" status

My server has a hardware raid card. I have had one incident where OVH contacted me and said there was an issue with one of the drives, and that they will reboot the server at X time to replace it. They did so, and the problem was solved with no requests or intervention on my part

I had another incident where I was told the motherboard died. IIRC, it died around 1am my time and was replaced by 5am my time. They of course turned the system back on for me. I was asleep the whole time, and this was likewise solved with zero requests or intervention on my part

Besides this, I can count the number of times an internet or power issue made my server unreachable on a single hand. IMO, a great experience for a dirt cheap host

That all being said: OVH's IPv6 solution is laughably bad, and it is the single reason why I would switch hosts if a better one with a North American presence appears

But some issues are not failures, and you have to work on them on your side

Most of the time the RAID is software nowadays, for example

IPv6 works fine for my many servers at OVH

But they often even go above and beyond for you. I have rented several servers from them for many years, and it has happened once or twice that I got an e-mail from their datacenter team telling me that they had noticed an error LED blinking on one of my servers, actively offering to plan a repair intervention. All I had to do was come up with a downtime window and communicate it to them. Very slick


I'd say about half of the overall value of Hetzner is in their quality support

I showed them the sudden loss of power events in the logs. "It must be a problem with your OS modifications that we don't support"

OK, I wiped the machine back to the stock image that you provide and it's still having power loss events. "Sure, we'll run a stress test for a couple minutes... The stress test passed, so it's still your fault!"

The events happen randomly during the week; a stress test is not going to show that. Can you just move me to a different physical machine? "No."
This was over the course of several days, when I had an event coming up that I NEEDED the server for. I ended up going back to Azure and paying 10x the cost, but at least it worked great

https://i.imgur.com/3DKc9OC.png
I have never seen this page before when trying to login. Make of that what you will

That's some dedicated client response team if so!
Provisioning of servers was always quite fast. Same day or the next business day

My experience is a little dated. I used to order a lot of dedicated boxes from them for our clients, and with Hetzner we always had the best experience. Also the most bang for the buck

Then you contact support to schedule the disk change. You first deactivate the disk in the RAID (save the geometry etc.), they replace the disk, and then you rebuild the RAID onto the new disk. That's it. With SSDs you may not even need to do this anymore

I imagine this would take time, right? Like not 5 minutes, but maybe 3 hours tops? So, if I pretend to run a SaaS (that shouldn't be down more than 1 h/day), then renting only 1 dedicated server could be qualified as "risky"?
They will all be hot-swap disks. You remove the old disk and slide in the new one (or in this case, tell them to do it). The RAID system rebuilds the array in the background over the next few hours

During that time you will lose data if it's RAID 5 and another disk fails

    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1

so your machine doesn't have a fit when the disk is detached (array and device names here are examples). Or equivalent
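After the physical swap, the rebuild side is similarly short (again with example device names; copying the partition table over from the surviving disk is the usual first step):

    # clone the healthy disk's partition layout onto the fresh disk
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    # re-add the partition; md rebuilds the mirror in the background
    mdadm --manage /dev/md0 --add /dev/sdb1
    # watch progress
    cat /proc/mdstat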

For example, I have loads of stuff on Linode but always make sure I keep backups off-Linode, in case I get a random TOS account shutdown and they stop speaking to me, etc.
IT departments really need to revise their due diligence processes. I wonder how many folks were coerced to do a similar migration just to benefit from household brand credibility

Does anyone have experience to share with that kind of setup? What's the maintenance like?
I use a single dedicated server that costs ~40 EUR/month, an AX41-NVMe, and each runner is a separate user account to allow for some isolation


Depending on your setup, you might need to spend some time adjusting jobs to have proper setup/cleanup and isolation between them (but that's not really Hetzner-specific, just a general issue)

We provision them with ~200 lines of shell script, which we get away with because they are not running a "prod" workload. Don't forget to run "docker system prune" on a timer! Overall these machines have been mostly unobtrusive and reliable, and the engineers greatly appreciate the order of magnitude reduction in github actions time. I've also noticed that they are writing more automation tooling now since budget anxiety is no longer a factor and the infrastructure is so much faster
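For the curious, the core of the per-runner setup condenses to something like this; the runner version, org URL and registration token below are placeholders (grab a real token from your org settings), and the prune reminder is just a cron line:

    # one unprivileged user per runner, for some isolation between jobs
    useradd -m runner1
    sudo -iu runner1 sh -c '
      curl -L -o runner.tar.gz https://github.com/actions/runner/releases/download/v2.316.0/actions-runner-linux-x64-2.316.0.tar.gz
      tar xzf runner.tar.gz
      ./config.sh --url https://github.com/your-org --token YOUR_REG_TOKEN --unattended
    '
    # then, from the runner directory as root: ./svc.sh install runner1 && ./svc.sh start
    # and in root's crontab, the prune-on-a-timer bit:
    # 0 3 * * * docker system prune -af --filter "until=48h"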

My only issue is that security scanners can't run on self-hosted runners (GitHub refuses the artifact result, so technically they do run, but the results fail to upload)

Do you have any alternatives? I thought Hetzner was fairly unique in their dedicated server offerings (for the price, I mean)

Recent Linux kernels finally support these CPUs (do they have full support?), but if you host a service where you want predictable (and fast) response times, why would you use a mix of both core types? Or would you just turn off the efficiency cores for server-side usage?
I'm assuming you don't shoot yourself in the foot by running a strictly single-threaded workflow explicitly pinned to the efficiency cores

> running strictly single-threaded workflow explicitly pinned to the efficiency cores
Those cores are slower than e.g. the cores of the (desktop) AMD CPU we tested at the same time (also offered by Hetzner). So it is rather expensive and inefficient to use Intel (desktop) CPUs for server-side applications, as we can only use their performance cores
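If you do end up on a hybrid part, keeping latency-sensitive work on the P-cores is a couple of commands; the core ranges below are assumptions, check your own topology first:

    # P-cores vs E-cores: compare the MAXMHZ and CORE columns (numbering varies per model)
    lscpu --all --extended
    # pin the service to what we believe are the P-cores (here: 0-11; binary name is a placeholder)
    taskset -c 0-11 ./my-server
    # or fence the E-cores off at boot via the kernel cmdline, e.g. isolcpus=12-19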

When these guys open up dedicated servers in a USA region it's going to be huge. Unfortunately, at the moment only the cloud offering is available in the USA, so you're stuck with a bit of latency round-tripping to the EU

Weird. It seems like they are reading the Origin header or something and just redirecting HN users to the root of the website

Works fine if you copy the link and paste it in a new tab

https://www.hetzner.com/customers/talkwalker
Amazon has done an amazing job of convincing people that their hosting choice is between cloud (aka, AWS) or the higher-risk, knowledge intensive, self-hosting (aka, colocation). You see this play out all the time in HN comments. CTOs make expensive and expansive decisions believing these are the only two options. AWS has been so good at this, that for CEOs and some younger devops and developers, it isn't even a binary choice anymore, there's only cloud


Do yourself, your career, and your employer a favor, and at least be aware of a few things

First, there are various types of hosting, each with their own risk and costs, strength and weaknesses. The option that cloud vendors don't want you to know about are dedicated servers (which Hetzner is a major provider of). Like cloud vendors, dedicated server vendors are responsible for the hardware and the network. (If you go deeper than say, EC2, then I'll admit cloud vendors do take more of the responsibility (e.g. failing over your database))

Second, there isn't nearly enough public information to tell for sure, but cloud plays a relatively minor role in world-wide server hosting. Relative to other players, AWS _is_ big (biggest? not sure). But relative to the entire industry? Low single-digit %, if that. The industry is fragmented, there are thousands of players, offering different solutions at different scales

For general purpose computing/servers, cloud has two serious drawbacks: price and performance. When people mention that cloud has a lower TCO, they're almost always comparing it to colocation and ignoring (or aren't aware of) the other options

Performance is tricky because it overlaps with scalability. But the raw performance of an indivisible task matters a lot. If you can do something in 1ms on option A and 100ms on option B, but B can scale better (but possibly not linearly), your default should not be option B (especially if option A is also cheaper)

The only place I've seen cloud servers be a clear win is GPUs

The primary deciding factor is always security. You simply cannot use any small vendor because of the physical security (or the lack thereof). Unless, of course, you do not care about security. If a red team can just waltz into your DC and connect directly to your infra, it is game over for some businesses. You can easily do this with most vendors

The secondary deciding factor is networking. Most traditional colos have a very limited understanding of networking. A CCIE or two can make a real difference. Unfortunately, those guys usually work at bigger companies

The third deciding factor is air conditioning and electricity considerations. In the worst case you are facing an OVH situation: https://www.datacenterdynamics.com/en/opinions/ovhclouds-dat..

(It is really funny, because I had warned them that their AC/cooling solution was not sufficient, and they explained to me that I was wrong. I was not aware of the rest: the wooden elements, the electricity fuckups, etc.)
During the year, an article in VO News by Clever Technologies claimed there were flaws in the power design of the site, for instance that the neighboring SBG4 facility was not independent, drawing power from the same circuit as SBG2. It's clear that the site had multiple generations, and among its work after the fire, OVHcloud reported digging a new power connection between the facilities
The fourth would probably be pricing. TCO is one consideration after you have made sure that the minimum requirements are met, but only after

So, based on the needs, somebody can choose wisely according to the business requirements. For example, running an airline vs running complex simulations have very different requirements

From a sales point of view, I agree with you that, for a lot of folks, this might be the main concern. If you're doing B2B or government work this might be, by far, the most important thing to you

However, this is at least partially pure sales and security theatre. It's about checkboxes and being able to say "we use AWS" and having everyone else just nod their head and say "they use AWS."
I'm not a security expert (though I have held security-related/focused programming roles), but as strong as AWS is with respect to paper security, in practice the foundation of cloud (i.e. sharing resources) seems like a dealbreaker to me (especially in a Rowhammer/Spectre world). Not to mention the access AWS/Amazon themselves have, and the complexity of cloud-hosted systems (and how easy it is to misconfigure them (1)). About 8 years ago, when I worked at a large international bank, that was certainly how cloud was seen. I'm not sure if that's changed. Of course, they owned their own (small) DCs

(1) - https://news.ycombinator.com/item?id=26154038 - The tool was removed from GitHub (conspiracy theory?), but I still find the discussion there relevant

so, anywhere where your workloads or data are physically co-located on the same hardware as someone else's should be automatically disqualified, right?
Doing your career a favor is how we ended up in this situation in the first place. The tech industry had so much free money floating around that there was never any market pressure to operate profitably, so complexity increased to fill the available resources

This has now gone on long enough that there are now entire careers built around the idea that the cloud is the only way - people that spend all day rewriting YAML/Terraform files, or developers turning every single little feature into a complex, failure-prone distributed system because the laptop-grade CPU their code runs on can't do it synchronously in a reasonable amount of time

All these people, their managers and decision makers could end up out of a job or face inconvenient consequences if the industry were to call this out collectively, so it's in everyone's best interest not to call it out. I'm sure there are cloud DevOps people that feel the same way but wouldn't admit it, because it's more lucrative for them to keep pretending


This works at multiple levels too, as a startup, you wouldn't be considered "cool" and deserving of VC funding (the aforementioned "free money") if you don't build an engineering playground based on laptop-grade CPU performance rented by the minute at 10x+ markup. You wouldn't be considered a "cool" place to work for either if prospective "engineers" or DevOps people can't use this opportunity to put "cloud" on their CVs and brag about solving self-inflicted problems

Clueless, non-tech companies are affected too - they got suckered into the whole "cloud" idea, and admitting their mistake would be politically inconvenient (and potentially require firing/retraining/losing some employees), so they'd rather continue and pour more money into the dumpster fire

A reckoning on the cloud and a return to rationality would actually work out well for everyone, including those who have a reason to use it, as it would force providers to lower their prices to compete. But as long as everyone is happy to pay their markups, why would they not take the money?
https://www.svb.com/account/startup-banking-offers
For one, people generally underestimate the performance cost of their choices. And that reaches from app code to their DB and their infrastructure

We're talking orders of magnitude of compounding effects. Big constant factors that can dominate the calculation. Big multipliers on top

Horizontal scaling, with all its dollar cost, limitations, complexity, maintenance cost and gotchas, becomes a fix on top of something that shouldn't have been a problem in the first place

Personally, so far, the best near-equivalent provider I've found that actually offers well-specced machines in North America, is OVH, with their HGR line and their Montreal DC. Are there any other contenders?
And if not, why not? What's so hard about getting into the high-spec dedicated hosting space in the US specifically? Import duties on parts, maybe? (I've found plenty of low-spec bare-metal providers in the US, plenty of high-spec cloud VM hosting providers in the US, and plenty of high-spec bare-metal providers outside the US; but so far, no other high-spec bare-metal providers in the US.)
[1] https://servicestack.net/blog/finding-best-us-value-cloud-pr..

We're currently using these at OVH: https://www.ovhcloud.com/en-ca/bare-metal/high-grade/hgr-hci and we really need the cores, the memory, the bandwidth, and the huge gobs of direct-attached NVMe. (We do highly-concurrent realtime analytics; these machines run DBs that each host thousands of concurrent multi-second OLAP queries against multi-TB datasets, with basically zero temporal locality between queries. It'd actually be a perfect use-case for a huge honking NUMA mainframe with "IO accelerator" cards, but there isn't an efficient market for mainframes, so they're not actually price-optimal here compared to a fleet of replicated DB shards running on commodity hardware.)

Also, they'll run off with your money if you can't provide an ID after you've already paid. No service, but no refunds either

But seriously, there's been lots of talk on HN recently about alternatives to the big 3. This is it: rent a big server and do it all on Linux

Request on Hold - Suspicious Activity Detected

Edit: so I use that time wisely to shitpost about it on HN, then check TrustPilot and I see:
"Unfortunately, based on your description (I need a ticket number or other customer information to find you in our system), you accidentally resembled an abuser."
Not a good outward appearance. I'll stick with AWS and paying through the nose

- stop operating in countries they don't want business from
- treat people equally
What they are doing is:

Is this a business? No

Should we follow any of the practices of HN? I do not think so. My personal website has a more scalable infrastructure than HN

There is no excuse for being a victim of an algorithm

And I never get this anywhere else!
In technology circles I am guilty until proven innocent

That's the difference, the outcome of which is that the technology provider can quite happily blow you off

Is anybody aware of anything that's price competitive in the US (or within a 50ms ping)?
[1] https://www.ionos.com/servers/value-dedicated-server#package..

OVH [1] is not quite as cheap, but I can't really think of anyone else in the area that is totally comparable. One draw of OVH, Hetzner, etc, for me over the truly small, cheap dedicated server providers is they both have pretty decent networks and free DDoS mitigation, which is really nice for things like game servers and such where CloudFlare isn't an option

OVH's sub-brands like SoYouStart [2] will sell you decently specced dedicated servers starting at around $30 a month in Quebec, which tends to be more than good enough for most of my "US" needs

They do have a couple datacenters in the United States too, not just Canada (+ quite a few in Europe, one in Singapore, some in Australia, etc), but I believe the Virginia/Oregon servers aren't available on the cheaper SYS site -- still cheap, though, but not quite $30 cheap

[1]: https://www.ovhcloud.com/
[2]: https://www.soyoustart.com/ (main downsides compared to OVH proper are that the connection is capped at ~250 Mbps, and although all servers have DDoS mitigation, the SYS and Kimsufi servers don't allow you to leave it on 24/7 -- so when you get attacked, it might take a minute or so to kick in, and then it'll remain on for 24 hours, I believe)
Edit 1: missed word;
Edit 2: people pointed out below that the US locations don't have dedicated servers, cloud servers only;