Virtualized offerings perform significantly worse (see my 2019 experiments: https://jan.rychter.com/enblog/cloud-server-cpu-performance) and cost more. The difference is that you can "scale on demand", which I found not to be necessary, at least in my case. And if I do need to scale, I can still do that; it's just that getting new servers takes hours instead of seconds. Well, I don't need to scale in seconds.

In my case, my entire monthly bill for the full production environment and a duplicate staging/standby environment is constant, simple, predictable, very low compared to what I'd need to pay AWS, and I still have a lot of performance headroom to spare.

One thing worth noting is that I treat physical servers just like virtual ones: everything is managed through ansible and I can recreate everything from scratch. In fact, I also use a separate "devcloud" environment at Digital Ocean, and that one is spun up using terraform before being handed off to ansible, which does the rest of the setup.
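For a rough idea of what that kind of flow can look like (a sketch only; the terraform output name, inventory path and playbook name are illustrative, and it assumes jq is installed):

    # devcloud: create the droplets, then hand their IPs to ansible
    terraform -chdir=devcloud init
    terraform -chdir=devcloud apply -auto-approve
    terraform -chdir=devcloud output -json droplet_ips | jq -r '.[]' > inventory/devcloud.ini
    ansible-playbook -i inventory/devcloud.ini site.yml   # same playbook as for the physical hosts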

I suspect that VendorOps and complex tools like Kubernetes are favored by the complexity merchants who have arisen in the past decade. It looks great on a resume and gives tech leaders a false sense of achievement.

Meanwhile Stack Overflow, which is arguably more important than most startups, is chugging along on dedicated machines [1].

1: https://stackoverflow.blog/2016/02/17/stack-overflow-the-arc..

It seems like the trend in this space is to jump directly to the highest layer of abstraction, skipping the fundamentals and pointing at $buzzword tools, libraries and products.

You see glimpses of this in different forms. One is social media threads that ask things like "How much of X do I need to know to learn Y?", where X is a fundamental, possibly very stable technology or field, and Y is some tool du jour or outright a product or service.

CORBA belongs there. And perhaps even the semantic web stuff. Definitely XML!
XML is great - if you don't believe it, you might be interested in the pains of parsing JSON: https://seriot.ch/projects/parsing_json.html
And don't get me started on YAML...

The incremental cost of being on a cloud is totally worth it to me to have managed databases, automatic snapshots, hosted load balancers, plug-and-play block storage, etc.


I want to worry about whether our product is good, not about middle-of-the-night hardware failures; spinning up a new server and having it not work right because there was no incentive to use configuration management with a single box; having a single runaway task cause OOM or a full disk and break everything instead of just that task's VM; fear of restarting a machine that has been up 1000 days; and so on.

For me, the allure of the cloud is not some FAANG scalability fad when it's not required, but automatic OS patching, automatic load balancing, NAT services, centralized, analyzable logs, database services that I don't have to manage, out-of-the-box rolling upgrades to services, managing crypto keys and secrets, and of course, object storage. (These are all things we use in our startup.)
I'd go out on a limb and say dedicated servers may become very viable/cost-effective once we reach a certain scale, rather than the other way round. For the moment, the cloud expenses are worth it for my startup.

If you are doing your environment config as code, it ultimately shouldn't matter whether your target is a dedicated server, a VM, or some cloud-specific setup configured via an API.

Doesn't really matter which one; the point is that you have one that can build you a new box, or bring a misbehaving box back to a known good state, by simply running a command.
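For instance, with Ansible that "one command" really can be a one-liner (inventory and playbook names here are illustrative):

    # converge a single misbehaving host back to its declared state
    ansible-playbook -i inventory.ini site.yml --limit web01
    # or preview what would change first
    ansible-playbook -i inventory.ini site.yml --limit web01 --check --diff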

Sure, the actual pile of Go(o) that drives it can be improved (and it is indeed improving)

That said, the hard truth is that your approach is the correct one; almost all businesses (startups!) overbuild instead of focusing on providing value, identifying the actual niche, etc. (Yes, there's also the demand side of this: as the VC-money-inflated startups overbuild, they want to depend on 3rd parties that can take the load and scale with them; SLAs and whatnot must be advertised.)
Kubernetes is fantastic! Whenever I see a potentially competing business start using Kubernetes, I am immediately relieved, as I know I don't have to worry about them anymore. They are about to disappear in a pile of unsolvable technical problems, trying to fix issues that can't be traced down in a tech stack of unbelievable complexity, and working around limitations imposed by a system designed for businesses with traffic two to three orders of magnitude larger. Also, their COGS will be way higher than mine, which means they will need to price their product higher, unless they are just burning through VC money (in which case they are a flash in the pan).

Good stuff all around

Ansible is imperative: it can work toward a static state, and that's it. If it runs into a problem, it throws up its SSH connection, cries a ton of Python errors, and gives up forever. Even with its inventory, it's far from declarative.


k8s is a set of control loops that try to make progress toward their declared state. Failure is not a problem; it'll retry. It constantly checks its own state, it has a nice API to report it, and thanks to a lot of reified concepts it's harder to have strange clashes between deployed components. (Whereas integrating multiple playbooks with Ansible is not trivial.)
https://github.com/spantaleev/matrix-docker-ansible-deploy
And even if k8s takes on too much legacy, there are upcoming slimmer manifestations of the core ideas (e.g. https://github.com/aurae-runtime/aurae).
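A tiny illustration of that reconciliation behaviour (the image and names are arbitrary):

    kubectl create deployment web --image=nginx:1.25 --replicas=3
    kubectl rollout status deployment/web   # the controller converges on 3 ready replicas
    kubectl delete pod -l app=web           # kill the pods; the control loop just recreates them
    kubectl get deployment web -o yaml      # the API reports desired vs. observed state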
And now you're suggesting that people use a simpler, non-standard implementation?
So of course you can implement a minimal-complexity solution, or you can use something "off the shelf".

k8s is complexity and some of it is definitely unneeded for your situation, but it's also rather general, flexible, well supported, etc

> you're suggesting that people use a simpler, non-standard implementation?
What I suggested is that if k8s the project/software gets too big to fail, then folks can switch to an alternative. Luckily the k8s concepts and API are open source, so they can be implemented in other projects, and I gave such an example to illustrate that picking k8s is not an Oracle-like vendor lock-in.

I always hear about all the great stuff you get practically for free from cloud providers. Is that stuff actually that easy to set up and use? Any time I tried to set up my LAMP stack on a cloud service it was such a confusing and frightening process that I ended up giving up. I'm wondering if I just need to push a little harder and I'll get to Cloud Heaven
Being able to say "I want a clustered MySQL here with this kind of specs" is much better (time-wise) than doing it on your own. The updates are also nicely wrapped up in the system, so I'm just saying "apply the latest update" rather than manually ensuring failovers/restarts happen in the right order

So you pay to simplify things, but the gain from that simplification only kicks in with larger projects. If you have something in a single server that you can restart while your users get an error page, it may not be worth it
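For a sense of what "I want a clustered MySQL with these specs" looks like in practice, here is a hedged sketch using the AWS CLI (identifiers, sizes and the multi-AZ setup are made up for illustration; DO, GCP and others have equivalents):

    # a managed, multi-AZ MySQL with automated backups, in one command
    aws rds create-db-instance \
      --db-instance-identifier app-db \
      --engine mysql \
      --db-instance-class db.m6g.large \
      --allocated-storage 100 \
      --multi-az \
      --backup-retention-period 7 \
      --master-username admin \
      --master-user-password "$DB_PASSWORD"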

The cloud is not easy, but trying to get the cooling and power efficiency of a small server room anywhere near the levels most big data centers publish is next to impossible, as is multi-vendor internet connectivity.

With the cloud, all of that kind of goes away, as it's managed by whatever data-center operator the cloud is running on. But what people forget is that the same is true for old-fashioned colocation services, which often offer better cost/value than cloud.


And while it's definitely harder to manage stuff like AWS or Azure, because it bleeds a lot of abstractions that small-scale VPS providers hide from you (or that you don't really encounter with a single home server), it's not hard on the scale of having to run a couple of racks' worth of VMware servers with SAN-based storage.

With cloud stuff you have more configuration to do, because it's all about configuring virtual servers and so on. Instead of carrying a PC in a box to your room, you must "configure it" to make it available.

DO databases have backups you can configure to your liking and store on DO Spaces (like S3). DB user management is easy. There are also cache servers for Redis.

You can add a load balancer and connect it to your various web servers

I think it took me about 30 minutes to set up 2x web servers, a DB server, a cache server, a load balancer, and a storage server, and to connect them all as needed using a few simple forms. Can't really beat that.

If you have any more info or opinions then please do share

There are lots of articles around the internet about hardening a Linux server; the ones I've tried take a bit more than 30 minutes to follow, and a lot longer if I'm trying to actually learn and understand what each thing is doing, why it's important, what the underlying vulnerability is, and how I might need to customize some settings for my particular use case.

I'm sure you can find example setup scripts online (configuring auto-updates, firewall, applications, etc. should be a matter of running 'curl $URL', then 'chmod +x $FILE' and 'bash $FILE'). I didn't need configuration management (I do use my provider's backup service, which is important I guess).

Something like this: https://raw.githubusercontent.com/potts99/Linux-Post-Install..

(seen in https://www.reddit.com/r/selfhosted/comments/f18xi2/ubuntu_d )
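If it helps, here is a minimal sketch of what such a post-install script typically does on Debian/Ubuntu (illustrative only, run as root; adjust ports and policies for your own distro and threat model):

    #!/usr/bin/env bash
    set -euo pipefail
    apt-get update && apt-get -y upgrade
    apt-get install -y unattended-upgrades ufw fail2ban
    # enable automatic security updates
    printf 'APT::Periodic::Update-Package-Lists "1";\nAPT::Periodic::Unattended-Upgrade "1";\n' \
      > /etc/apt/apt.conf.d/20auto-upgrades
    # firewall: deny inbound by default, allow SSH and web
    ufw default deny incoming
    ufw default allow outgoing
    ufw allow OpenSSH
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw --force enable
    # SSH: keys only, no root login
    sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
    systemctl restart ssh   # the service is named "sshd" on some distros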
Obviously the same can be said for long running VMs, and this can be solved by having a disciplined team, but I think it's generally more likely in an environment with a single long running dedicated machine

Hetzner has all of this except managed databases

https://www.hetzner.com/managed-server
The webhosting packages also include 1..unlimited DBs (MySQL and PostgreSQL)
https://www.hetzner.com/webhosting/
Does it have automatic backup and failover?
> With booked daily backup or the backup included in the type of server, all data is backed up daily and retained for a maximum of 14 days. Recovery of backups (Restore) is possible via the konsoleH administration interface

But I get the impression that the databases on managed servers are intended for use by apps running on that server, so there isn't really a concept of failover.

https://www.hetzner.com/legal/managed-server/
A single drive on a single server failing should never cause a production outage

A lot of configuration issues can be traced back to deployments that aren't self-contained. Does that font file or JRE really need to be installed on the whole server, or can you bundle it into your deployment?

Our deployments use on-premise and EC2 targets. The deployment script isn't different, only the IP for the host is

Now, I will say if I can use S3 for something I 100% will. There is not an on-premise alternative for it with the same feature set

The point of DevOps is "Cattle, not pets." Put a bullet in your server once a week to find your failure points

I'd love if you jumped into our Discord/Slack and brought up some of the issues you were seeing so we can at least make the experience better for others using Dokku. Feel free to hit me up there (my nick is `savant`)

Let me preface and say that I'm an application dev with only a working knowledge of Docker. I'm not super skilled at infra and the application I struggled with has peculiar deployment parameters: It's a Python app that at build-time parses several gigs of static HTML crawls to populate a Postgres database that's static after being built. A Flask web app then serves against that database. The HTML parsing evolves fast and so the populated DB data should be bundled as part of the application image(s)

IIRC, I struggled with structuring Dockerfiles when the DB wasn't persistent but instead just another transient part of the app, but it seemed surmountable. The bigger issue seemed to be how to avoid pulling gigs of rarely changed data from S3 for each build when ideally it'd be cached, especially in a way that behaved sanely across DigitalOcean and my local environment. I presume the right Docker image layer caching would address the issue, but I pretty rapidly reached the end of my knowledge and patience
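One hedged way to get that caching, assuming the crawls live at a versioned S3 prefix and the build environment has read access to it (public bucket or BuildKit secrets): keep the big download in its own early layer so code edits don't invalidate it.

    # Dockerfile excerpt (illustrative):
    #   ARG CRAWL_VERSION=2023-01
    #   RUN aws s3 sync s3://example-bucket/crawls/${CRAWL_VERSION} /data/crawls
    #   COPY . /app        # code changes only invalidate layers from here down
    #
    # Rebuilds after code edits then reuse the cached download layer:
    docker build --build-arg CRAWL_VERSION=2023-01 -t crawl-app .
    # bump CRAWL_VERSION only when the crawl data actually changes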

Dokku's DX does seem great for people doing normal things. :)
0. https://coolify.io/
Plus who doesn't want to play with the newest, coolest toy on another's dime?
There's definitely an argument to move (some) stuff off cloud later in the journey when flexibility (or dealing with flux/pivoting) becomes less of a primary driver and scale/cost start dominating

Sure, you can get something working on Hetzner but be prepared to answer a lot more questions

Agreed on your last point about enterprises asking for this, which again is just sad: these business "requirements" dictate how to architect and host your software when another way might be the much better one.
Disaster recovery is a very real problem. Which is why I test it regularly. In a scorched-earth scenario, I can be up and running on a secondary (staging) system or a terraformed cloud system in less than an hour. For my business, that is enough


We went through a growth boom, and like all of the booms before it, it meant there were lots of inexperienced people being handed lots of money and urgent expectations. It's a recipe for cargo culting and exploitative marketing.

But growth is slowing and money is getting more expensive, so we'll slow down and start to re-learn the old lessons, with exciting new variations. (Like here: managing bare-metal scaling, but with containers and orchestration.)
And the whole cycle will repeat in the next boom. That's our industry, for now.

A solid dedicated server is, 99% of the time, far more useful than a crippled VPS on shared hardware, but it obviously comes at an increased cost if you don't need all the resources it provides.

Yes, but yearnings to "do what all the cool kids are doing now" are at least as strong in those who would normally be referred to as "managers" as in "employees".

Outcome A) the huge microservices/cloud spend goes wrong: well, at least you were following best practices, these things happen, what can you do.

Outcome B) you went with a Hetzner server and something went wrong: well, you are a fool and should have gone with microservices; enjoy looking for a new job.

Thus encouraging managers to choose microservices/cloud. It might not be the right decision for the company, but it's the right decision for the manager, and it's the manager making the decision

(As does being a consultant wanting an extension and writing software that works, as I found out the hard way.)
There's a disconnect between founders and everyone else

Founders believe they're going to >10x every year

Reality is that they're 90% likely to fail

90% of the time - you're fine failing on whatever

Some % of the 10% of the time you succeed you're still fine without the cloud - at least for several years of success, plenty of time to switch if ever necessary

I only have one ansible setup, and it can work both for virtualized servers and physical ones. No difference. The only difference is that virtualized servers need to be set up with terraform first, and physical ones need to be ordered first and their IPs entered into a configuration file (inventory)

Of course, I am also careful to avoid becoming dependent on many other cloud services. For example, I use VpnCloud (https://github.com/dswd/vpncloud) for communication between the servers. As a side benefit, this also gives me the flexibility to switch to any infrastructure provider at any time.


My main point was that while virtualized offerings do have their uses, there is a (huge) gap between a $10/month hobby VPS and a company with exploding-growth B2C business. Most new businesses actually fall into that gap: you do not expect hockey-stick exponential growth in a profitable B2B SaaS. That's where you should question the usual default choice of "use AWS". I care about my COGS and my margins, so I look at this choice very carefully

You're not a software company, fundamentally you make and sell Sprockets

The opinions here would be: hire a big eng/IT staff to "buy and maintain servers" (PaaS is bad), and then likely "write a bunch of code yourself" (SaaS is bad), or whatever is currently popular here (last thread here it was "push PHP over SFTP without Git, and you're doing it wrong if you need more", lol).
But I believe businesses should do One Thing Well and avoid trying to compete (doing it manually) for things outside of their core competency. In this case, I would definitely think Sprocket Masters should not attempt to manage their own hardware and should rely on a provider to handle scaling, security, uptime, compliance, and all the little details. I also think their software should be bog-standard with as little in-house as possible. They're not a software shop and should be writing as little code as possible

Realistically Widget Masters could run these sites with a rather small staff unless they decided to do it all manually, in which case they'd probably need a lot larger staff

However, what I also see (and what I think the previous poster was talking about) are businesses where tech is at the core of the business, and there it often makes less sense; instead of saving time it seems to cost time. There's a reason there are AWS experts: it's not trivial. "Real" servers also aren't trivial, but not necessarily harder than cloud services.

But those agencies don't want to have to staff for maintaining physical hardware either..

But Sprocket Masters still has to have an expensive Cloud Consultant on retainer simply to respond to emergencies

If you're going to have someone on staff to deal with the cloud issues, you may as well rent a server instead

I personally ran dedicated servers for our business in the earlier days, but as we expanded and scaled it became a lot easier to go with the cloud providers to provision various services quickly, even though the costs went up. Not to mention it is a lot easier to tell a prospective customer that you use "cloud like AWS" than "oh, we rent these machines in a data center run by some company" (which can actually be better, but customers mostly won't get that). Audit, compliance, and so on.

For many, even a lot less than that. I run a small side project [1] that went viral a few times (30K views in 24 hours or so), and it is running on a single-core CPU web server and a managed Postgres, likewise on a single CPU core. It hasn't even been close to full utilization.


1: https://aihelperbot.com/
Could I switch some of them to lambda functions? Or switch to ECS? Or switch to some other cloud service du jour? Maybe. But the amount of time I spent writing this comment is already about six month's worth of savings for such a switch. If it's any more difficult than "push button, receive 100% reliably-translated service", it's hard to justify

Some of this is also because the cloud does provide some other services now that enable this sort of thing. I don't need to run a Kafka cluster for basic messaging, they all come with a message bus. I use that. I use the hosted DB options. I use S3 or equivalent, etc. I find what's left over is almost hardly worth the hassle of trying to jam myself into some other paradigm when I'm paying single-digit dollars a month to just run on an EC2 instance

It is absolutely the case that not everyone or every project can do this. I'm not stuck to this as an end-goal. When it doesn't work I immediately take appropriate scaling action. I'm not suggesting that you go rearchitect anything based on this. I'm just saying, it's not an option to be despised. It has a lot of flexibility in it, and the billing is quite consistent (or, to put it another way, the fact that if I suddenly have 50x the traffic, my system starts choking and sputtering noticeably rather than simply charging me hundreds of dollars more is a feature to me rather than a bug), and you are generally not stretching yourself to contort into some paradigm that is convenient for some cloud service but may not be convenient for you

Have you ever had to manage one of those environments?
The thing is, if you want to get some basic more-than-one-person scalability and proper devops then you have to overprovision by a significant factor (possibly voiding your savings)

You're inevitably going to end up with a bespoke solution, which means new joiners will have a harder time getting mastery of the system, and significant people leaving the company will take their intimate knowledge of your infrastructure with them. You're back to pets instead of cattle. Some servers are special; after a while "automation" means a lot of glue shell scripts here and there, and an OS upgrade means either half the infra is down for a while or you don't do OS upgrades at all.

And in the fortunate case that you need to scale up, you might find unpleasant surprises.

And don't ever get me started on the networking side. Unless you're renting your whole rack and placing your own networking hardware, you get what you get, which could be very poor in either functionality or performance, even assuming you're not doing anything fancy.

If you want 100.0000% uptime, sure. But you usually don't. The companies that want that kind of uptime normally have teams dedicated to it anyway.

And scaling works well on bare-metal too if you scale vertically - have you any idea the amount of power and throughput you can get from a single server?
It's concerning to keep hearing about "scaling" when the speaker means "horizontal scaling"

If your requirements are "scaling", then vertical scaling will take you far.

If your requirements are "horizontal scaling on demand", then, sure, cloud providers will help there. But few places need that sort of scaling.

I'm not saying 100% uptime on bare metal is cheap, I'm saying 100% uptime is frequently not needed

Because the industry is full of people who are chasing trends and keywords, and for whom the most important thing is to add those keywords to their CVs.

IME aiming for scalability is exceedingly wrong for most services/applications/whatever. And usually you pay so much overhead for the "scalable" solutions, that you then need to scale to make up for it

I really doubt this. The one theme you see on almost every AWS proponent is some high amount of delusion about what guarantees AWS actually provides you

Yeah, sure. Nobody gets fired for buying from IBM

Well, if large companies had any competence in decision-making, they would be unbeatable and it would be hopeless to work on anything else at all. So, yeah, that's a public good

Source: almost thirty years of ops

So far I haven't noticed that if I spend more for the company, I also get paid more.

Getting the equivalent reliability with your own iron is a lot more expensive than renting "two dedicated servers". Now, you might be fine with one server and a backup solution, and that's fair. But a sysadmin to create all that, even on a short contract for the initial setup and no maintenance, is going to go well beyond the cloud price difference, especially if there's a database in the mix and you care about that data.

Today, cloud is similar: the time to market is quicker as there are fewer moving parts. When the economy tanks and growth slows, the beancounters come in and do their thing.

It happens every time

This only makes AWS richer at the expense of companies and cloud teams.
It is trivial to provision and unprovision an EC2 instance automatically, within seconds if your deployment needs to scale up or scale down. That's what makes it fundamentally different from a bare metal server
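For illustration, the whole lifecycle is two CLI calls (IDs here are placeholders), which is exactly what autoscaling automates:

    aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --count 1
    aws ec2 terminate-instances --instance-ids i-0123456789abcdef0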


Now, I'm not denying that it might still be more cost effective when compared to AWS to provision a few more dedicated servers than you'll need, but when you have really unpredictable workloads it's not easy to keep up

If you are spinning up and shutting down VMs to follow the demand curve, something is seriously wrong with your architecture.

Did it ever occur to you how Stack Overflow uses ~8 dedicated servers to serve the entire world and doesn't need to spin up and shut down VMs to meet global customer demand?
--
When planning compute infrastructure, it is important to go back to basics and not fall for cloud vendors' propaganda.
You said it yourself: not all applications need to serve the entire world, therefore demand will be lower when people go to sleep.

Even with global applications there are regulations that require you to host applications and data in specific regions. Imagine an application that is used by customers in Europe, Australia, and the US. They need to be served from regional data centres, and one cluster will be mostly sleeping when the other is running (because of timezones). With dedicated servers you would waste 60-70% of your resources, while using something like EC2/Fargate you can scale down to almost 0 when your region is sleeping

There is a method to the madness, here it is called "job-security-driven development"

Because it's a threat to their jobs

There's a whole band of people who have the technical chops to self-host, or to host little instances for their family/friends/association/hiking club. This is the small margin where you're OK spending a little extra because you want to do it properly, but you can't justify paying a lot or spending time on heavy maintenance. A small VPS, with a shared Nextcloud or a small website, is all that's needed in many cases.

For this I even use a little Raspberry Pi 400 in my bedroom

https://joeldare.com/private-analtyics-and-my-raspberry-pi-4..

I've self-hosted my own stuff for close to a decade now. Nobody has tried DDoSing my setup, because why would they? What benefit would they possibly get out of it? I would be pretty much the only person affected, and once they stopped it wouldn't take long to recover.

There is little to no incentive to DDoS a personal box, let alone by an internet rando.

You are drastically overestimating a teen's abilities

Just pinging the IP over and over again isn't really going to do much. Maybe a DoS attack depending on what the target network has in terms of IPS, but even then it's more likely they'll infect their computers with viruses before they get the chance to actually attack you

I've personally been on the receiving end of one of those for somehow ticking off a cheater in GTA 5. It definitely happens and it's not fun.


Edit: It looks like this might be automatic. This is something interesting I should look into a bit. It's probably just extra complexity for my little server, but I've got some uses in mind.

Technically my homelab is also on a static public IP as well; this was more an exercise in "could I do this" than "actually necessary", but it's still cool and I'm very happy.

About the only hangup was that I had to configure WireGuard to keep the tunnel alive; otherwise, sometimes the VPS would receive incoming traffic and, if my lab hadn't reached out in a while (which, why would it?), the tunnel would be down and the proxy connection would fail. Thankfully that's built-in functionality.
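For reference, the built-in bit is WireGuard's persistent keepalive; set it on the homelab side so it keeps poking the VPS (interface name and key variable are examples):

    # at runtime:
    wg set wg0 peer "$VPS_PUBLIC_KEY" persistent-keepalive 25
    # or permanently, add to the [Peer] section of /etc/wireguard/wg0.conf:
    #   PersistentKeepalive = 25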

So, it seems like adding RSS / Atom feeds to a Jekyll or GitHub Pages site is pretty straightforward:

1. https://github.com/jekyll/jekyll-feed
2. https://docs.github.com/en/pages/setting-up-a-github-pages-s..

3. https://pages.github.com/versions/
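If it helps, the jekyll-feed route is only a couple of edits on a stock Jekyll/GitHub Pages setup (a sketch; if _config.yml already has a plugins: list, merge into it instead of appending):

    echo 'gem "jekyll-feed"' >> Gemfile
    printf 'plugins:\n  - jekyll-feed\n' >> _config.yml
    bundle install && bundle exec jekyll build   # the feed is generated at /feed.xml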
atom 2c/4t, 4gb ram, 1tb drive, 100mbit
A few years of uptime at this point
If uninterrupted: some upgrade may be due. 
Kernel security updates are a thing


I found Atoms to be unbearably slow, even with Linux. Of course it's enough for serving websites and whatnot, but it's baffling how much power they draw for so little performance.

Exactly. These sub-$10 VPS instances are great for small projects where you don't want to enter into long contracts or deal with managing your own servers

If you're running an actual business where margins are razor-thin and you've got all the free time in the world to handle server issues if (when) they come up, those ~$50 dedicated servers could be interesting to explore

But if you're running an actual business, even a $10,000/month AWS bill is cheaper than hiring another skilled developer to help manage your dedicated servers

This is what's generally missed in discussions about cloud costs on places like HN: Yes, cloud is expensive, but hiring even a single additional sysadmin/developer to help you manage custom infrastructure is incredibly expensive and much less flexible. That's why spending a hypothetical $5000/month on a cloud hosted solution that could, in theory, be custom-built on a $50/month server with enough time investment can still be a great deal. Engineers are expensive and time is limited

Uhhh, excuse me but how much are you paying this DevOps guy? This seems like a very American perspective, even valley area. In Europe, hiring a guy would be cheaper

Fully loaded employment costs are significantly higher than what you take home in your paycheck. It's actually worse in Europe. Take a look at these charts: https://accace.com/the-true-cost-of-an-employee-in-europe/

If you pay someone 1000 EUR in the UK, it costs the company a total of 1245 EUR

If you pay someone 1000 EUR in Romania, it costs the company 1747 EUR total

So a $120,000 fully loaded cost might only buy you a $68,000 USD salary for an EU devops person.

But you can't have just one devops person. You need at least 2 if you want to allow one of them to ever take a break, get sick, or go on vacation.

I can hook you up in about 3 hours

AWS is very cost-efficient for other services (S3, SES, SQS, etc.), but virtual machines are not a good deal. You get less RAM and CPU, with the virtualization overhead, and pay a lot more money.

Especially for Postgres: if you run some tests with pgbench, you can really see the penalty you pay for virtualization.
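If you want to reproduce that kind of comparison, running the same pgbench workload on both boxes is enough to see it (scale factor and duration below are arbitrary):

    createdb benchdb
    pgbench -i -s 100 benchdb          # initialize with scale factor 100
    pgbench -c 16 -j 4 -T 120 benchdb  # 16 clients, 4 threads, 2 minutes; compare the reported tps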

Maybe the sysadmin skill of being able to build your own infrastructure is becoming a lost art; otherwise I can't explain why people are so in love with paying 5x for less performance.

Hetzner is cheap and reliable in Europe; if you're in North America, take a look at OVH, especially their cost-saving alternative called SoYouStart. You can get 4 cores/8 threads at 4.5 GHz, 64 GB of RAM, and an NVMe drive for $65.

(I have no affiliation with OVH, I'm just a customer with almost 100 servers, and it's worked out great for me)
I'll also note that I'm old, hehe, and at one of my first jobs we had a decent-sized data center on site. Dealing with SANs, tape drives (and the auto tape rotators of the time), servers, etc. was a huge PITA. Packing up tapes and shipping them to another office for location redundancy was always fun.

The particular application I manage really suffers from low GHz and from not having all data in memory. I've run the benchmarks on EC2: certain reports that finish in ~5 seconds can take more than a minute on a comparable EC2 instance that costs about 10x as much. This application really needs the raw CPU. And yes, we have a pretty large engineering team that has optimized all the queries, indices, etc.

As far as replication, backups, etc. I set all that up, and honestly it wasn't a big deal. It's a couple short chapters in the Postgres book that explain it all very simply, how to configure, continuously (and automatically) test, etc

I do agree that SANs are a nightmare. That's why I ship all my WALs (PG backup files) to S3 (and eventually Glacier). That way I don't have to think about losing those files, and it's dirt cheap.
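A sketch of the WAL-to-S3 part (the bucket name is made up, and tools like pgBackRest or wal-g wrap this up more robustly):

    # postgresql.conf:
    #   wal_level = replica
    #   archive_mode = on
    #   archive_command = 'aws s3 cp %p s3://example-pg-backups/wal/%f'
    # plus periodic base backups to restore from:
    pg_basebackup -D /var/backups/base -Ft -z
    aws s3 sync /var/backups/base s3://example-pg-backups/base/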

I think there's a misconception that these kinds of configurations are too complicated for a single engineer to set up, with never-ending maintenance required. In reality you can set it all up in less than a week, and it only really needs maintenance when you want to upgrade Postgres (some settings may have changed). I'd estimate it takes about 5 hours per year of maintenance.

Self-serve infrastructure appears likely to become increasingly viable as we continue improving last-mile delivery and expanding fiber access. Will cloud become self-cannibalizing? Definitely maybe.

What cloud gets you is the ability to put your data/workload right at the core without having to make special deals with your local ISP, and with a lot more resilience than you could likely afford unless you're at least at the multiple-20-foot-containers-full-of-servers scale of compute need.

https://www.cloudflare.com/products/tunnel/
https://github.com/cloudflare/cloudflared
https://developers.cloudflare.com/cloudflare-one/connections..

Edit: If anyone is interested in self-hosting, it's remarkably simple with cloudflared. I have a 2017 Google Pixelbook running Ubuntu on custom firmware that's serving a Flask-based website. It sits on my desk charging while connected to a guest wifi network. It receives a 100/100 Mobile PageSpeed score and takes 0.8 seconds to fully load.
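For anyone curious, the cloudflared side really is short (the tunnel name, hostname and port below are examples, not the actual setup described above):

    # ephemeral tunnel with a random trycloudflare.com URL
    cloudflared tunnel --url http://localhost:5000
    # or a named tunnel bound to your own hostname
    cloudflared tunnel login
    cloudflared tunnel create pixelbook
    cloudflared tunnel route dns pixelbook app.example.com
    cloudflared tunnel run pixelbook   # origin configured in ~/.cloudflared/config.yml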

From DO I get all the benefits of a reliable company, scalability, automated backups etc etc. There's no way I'd change

Hetzner Cloud now has two US locations. Still no US dedicated servers though; those would be killer, even though their current cloud offerings are already ~30% of the price of the major cloud equivalents.

Like you, I also run my services from a rented physical server. I used to use Versaweb, but their machines are too old. I didn't previously like Hetzner because I'd heard bad things about them interfering with what you're running

However, I moved to them in December when my Versaweb instance just died, probably SSD from old age. I'm now paying 50% of what I paid Versaweb, and I can run 6 such Postgres instances

Then it makes one wonder whether it's worth paying $700 or $800 for a managed service with a fancy cloud UI, automatic upgrades and backups, etc.

For a 1 person show or small startup, I think not. Cheaper to use an available service and dump backups to S3 or something cheaper

The company I used to work for happily paid company A four times what company B charged for the exact same service, just because company A was willing to send quarterly invoices in a way that played nicely with our invoicing system. For companies, saving a few hundred bucks here and there often isn't worth the hassle of introducing any extra friction.

There is an implicit cost there. If it's only one or two of those things, just take the managed services.

If you start to scale, get an administrator type of employee to save on this

Otherwise, I'd have to hire/contract someone very experienced, or dedicate a solid month or more of my time (which was not available), just to be 100% sure we could always restore journaled PITR backups quickly

I can save orders of magnitude on cloud costs in other places, but credible managed PostgreSQL was a pretty easy call (even if the entry-level price tag was more than you'd expect).

An early startup that cannot afford to lose an hour of production data is probably too fragile to survive anyway

It's an early startup - there's going to be larger interruptions than that to the service and any customers who flee after a one hour loss of production data just didn't value the product enough anyway

(In the specific startup I was thinking of, I already had some nice automated frequent DB-online dumps to S3 with retention enforced, but I didn't think that was good enough for this particular scenario. Not being sure we could recover with PITR/journaling would have been adding a new single-point-of-failure gamble on the success of a business that otherwise might have a 9+ figure exit in a few/several years, just to save a few hundred.)
Also, I suppose that some of the early startups that have less demanding needs, but are cavalier about their obligations towards customers'/users' data, are still being negligent wrt minimum basic practices

Maybe one intuitive way to appreciate it: imagine a biz news story on HN about some startup dropping the operational ball, with the startup founders' names attached, and the story says "and it turns out they didn't have good backups". Then one of the cofounders, who's maybe not slept much as their company is crashing around them, responds "Customer data isn't that important in an early startup; if it were, we'd be too fragile" (typed before the other cofounder could throw that person's laptop across the room to stop them typing). It wouldn't be a good look.

I'll address this at the end

> Imagine a biz news story on HN about some startup dropping the operational ball, with the startup founders' names attached, and the story says "and it turns out they didn't have good backups"

ITYM "and it turns out they didn't have the last hour backed up".
I can imagine everything in your scenario, even the unsnipped bits, and it all seems normal, unless you read "Our startup clients, who've been using us for a month, all left immediately when we lost the last hour of their data".

I really can't imagine that scenario

Now, granted, there are some businesses where the benefit to using it is that there is not even an hour of lost data


IOW, customers use it because it won't lose any data: online document collaboration[1], for example[2]. If you have a meltdown that loses an hour of the last inputted data, then sure, expect all affected current users to flee immediately

[1] Although, personally, I'd mitigate the risk by duplicating the current document in localstorage while it is being edited

[2] Maybe stock exchange also needs the system to keep the last hour of data? What else?
I think even for larger teams it may make sense to manage databases yourself, assuming you have the competence to do it well. There are so many things that can go wrong with managed services, and they don't hide the underlying implementation the way that things like block storage or object storage do.

Peak performance is certainly worse - but I am not too bothered if something takes longer to run anyway. You are certainly correct on having as much automation in the provisioning of a server, something I did not do with a physical server

I used to have a root server for my pet projects, but honestly, that doesn't make sense. I'm not running a high traffic, compute intense SaaS on my machines. It's just a static website and some projects. I'm down to monthly costs of 24 which includes a storage box of 1 TB to store all my data

The main issue in any scenario involving real hardware is that you need staff who are competent in both hardware and Linux/UNIX systems. Many claim such competence on their resumes and then cannot perform once on the job (in my experience, anyway). In my opinion, one of the major reasons for the explosion of the cloud world was precisely the difficulty and financial cost of building such teams. Additionally, there is a somewhat natural (and necessary) friction between application developers and systems folks. The systems folks should always be pushing back and arguing for more security, more process, and fewer deployments. The dev team should always be arguing for more flexibility, more releases, and less process. Good management should then strike the middle path between the two. Unfortunately, incompetent managers have often just decided to get rid of systems people and move things into AWS land.

Finally, I would just note that cloud architecture is bad for the planet: it requires over-provisioning by cloud providers, and it requires more compute power overall due to the many layers of abstraction. While any one project is responsible for little of this waste, the global cloud in aggregate is very wasteful. This bothers me, and it likely factors as an emotional bias in my views (so take large amounts of salt with all of the above).


The argument could be made that you can rent physical servers pre-income; then, when it makes sense, you can use either standard depreciation or Section 179 [0] on outright purchases, and/or Section 179 leases [1].

As an example, you can deploy an incredibly capable group of, let's say, four absolutely, completely over-provisioned $100k physical 1U machines in different colo facilities for redundancy. There are all kinds of tricks here for load balancing and failover with XYZ cloud service, DNS, anycast, whatever you want. You can go with various colo providers that operate data centers around the world, ship the hardware from the vendor to them, then provision the machines with Ansible or whatever you're into, without ever seeing the facility or touching hardware.

So now you have redundant physical hardware that will absolutely run circles around most cloud providers (especially for I/O), with fixed costs like all-you-can-eat bandwidth (without the 800% markup of cloud services, etc.) - no more waiting for the inevitable $50k cloud bill, or trying to track down (in a panic) what caused you to exceed your configured cloud budget in a day instead of a month. Oh, and by the way, you're not locking yourself into the goofy proprietary APIs needed to provision and even utilize services other than virtual machines offered by $BIGCLOUD.

If you're doing any ML you can train on your own hardware or (or the occasional cloud) and run inference 24/7 with things like the NVIDIA A10. Continuous cloud rental for GPU instances is unbelievably expensive and the ROI on purchasing the hardware is typically in the range of a few months (or way ahead almost immediately with Section 179). As an example, I recently did a benchmark with the Nvidia A10 for a model we're serving and it can do over 700 inference requests/s in FP32 with under 10ms latency. With a single A10 per chassis across four healthy instances that's 2800 req/s (and could probably be tuned further)

Then, if you get REALLY big, you can start getting cabinets and beyond. In terms of hardware failures, as mentioned, all I can say is that dual-PSU, RAID-ed-out, etc. hardware is (in my experience) extremely reliable. Having had multiple full cabinets of hardware in the past, hardware failures were few and far between, and hardware vendors will include incredible SLAs for replacement. You notify them of the failure, they send a tech in under eight hours directly to the colo facility, and they replace the disk, PSU, etc. with the flashing light.

My experience is that one (good) FTE resource can easily manage this up to multiple-cabinet scale. To your point, the current issue is that many of these people have been snatched up by the big cloud providers and replaced (in the market) with resources who can navigate the borderline ridiculousness that is using dozens (if not more) of products/services from $BIGCLOUD.

I've also found this configuration is actually MUCH more reliable than most $BIGCLOUD. No more wondering what's going on with a $BIGCLOUD outage that they won't even acknowledge (and that you have absolutely no control over). Coming from a background in telecom and healthcare it's completely wild to me how uptime has actually gotten much worse with cloud providers. Usually you can just tell customers "oh the internet is having problems today" because they'll probably be seeing headlines about it but for many applications that's just totally unacceptable - and we should expect better

[0] - https://www.section179.org/section_179_deduction/
[1] - https://www.section179.org/section_179_leases/
If I want to spin up a new project or try out hosting something new it takes a couple minutes and I've got the scripts. Deployments are fast, maintenance is low, and I have far more for my money

For anyone who's interested this is the rough cut of what I'm using:
* Ansible to manage everything
* A tiny bit of terraform for some DNS entries which I may replace one day
* restic for backups, again controlled by ansible (rough sketch after this list)
* tailscale for vpn (I have some pi's running at home, nothing major but tailscale makes it easy and secure)
* docker-compose for pretty much everything else
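Roughly what that restic piece looks like once ansible has templated it out and dropped it into a cron job or systemd timer (repository, password file and paths here are illustrative):

    export RESTIC_REPOSITORY=s3:s3.amazonaws.com/example-backups
    export RESTIC_PASSWORD_FILE=/root/.restic-password
    restic backup /srv/app-data /var/lib/docker/volumes
    restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune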
Main app is Clojure, so I run a native JVM. Database is fully distributed, RethinkDB, now working on moving to FoundationDB

The important thing is not to manage anything manually, e.g. treat physical servers just like any other cloud server. It shouldn't matter whether it's physical or virtualized

I've seen lots of less experienced people overpay for hetzner and similar when a $5-10 vps would've worked

Yes, you're supporting your own hardware at that point. No, it's not a huge headache

The biggest additional cost to this is renting more IPv4 addresses, which Hetzner charge handsomely for now that there are so few available

Whatever you create will start with 0 users, and an entire real machine is complete overkill for the zero load you will get. You upgrade your VPS into a pair of real machines, then into a small rented cluster, and then into a data center (if somebody doesn't undercut that one). All of those have predictable bills and top performance for their price.

Anything you own in a colo is going to be more per month too. When I had connections where I could pay for a static IP, that was usually $5/month


I'm now renting a pretty low-end server, but it's $30/month. Way more of everything than I need, but it's nice. And they didn't drop support for my OS while increasing prices to improve support or something. (Although I did have some initially flaky hardware that needed to get swapped.)
As you pointed out, bare metal is the way to go. It works the opposite of cloud: a bit more work at the beginning, but a lot less expense in the end.

More info: https://europa.eu/youreurope/business/taxation/vat/cross-bor..

Setting up and managing Postgres is a pain tho. Would be nice to have a simpler way of getting this all right

1. Forces config to be reproducible, as VMs will go down

2. You can get heavy discounts on AWS that reduce the pain

3. The other stuff you have access to atop the VMs that's cheaper/faster once your stuff is already in the cloud
4. Easier to have a documented system config (e.g. AWS docs) than train people/document what special stuff you have in-house. Especially useful in hiring new folks

5. You don't need space or redundant power/internet/etc. on premises. Just enough to let people run their laptops.

I used a VPS before that, but stopped and switched to a physical one because it was a better deal and we didn't run into CPU limit(ation)s

Disk monitoring isn't too hard though. For hard drives, run smartctl once an hour, and alert when reallocated or pending sectors grow quickly or hit 100. For SSDs, cross your fingers; in my experience with a few thousand of them, they tend to work great until they disappear from the bus, never to be seen again. Have a data recovery plan that doesn't involve storing the data on the same model of device with very similar power-on hours; power-on-hour-correlated firmware errors are real.
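A minimal version of that check, e.g. dropped into an hourly cron job (device list, thresholds and the alerting command are illustrative; catching "grows quickly" would need a bit of saved state on top of this):

    #!/bin/sh
    for dev in /dev/sda /dev/sdb; do
      realloc=$(smartctl -A "$dev" | awk '/Reallocated_Sector_Ct/ {print $10}')
      pending=$(smartctl -A "$dev" | awk '/Current_Pending_Sector/ {print $10}')
      [ "${realloc:-0}" -ge 100 ] && echo "$dev reallocated=$realloc" | mail -s "SMART alert $dev" ops@example.com
      [ "${pending:-0}" -gt 0 ] && echo "$dev pending=$pending" | mail -s "SMART alert $dev" ops@example.com
    done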

Hetzner has an API for ordering dedicated servers after all, and an API for installing an OS (or for rebooting to rescue and flashing whatever image you want)
I guess if I was investigating commercial options I'd have the "trunk" sorted at the office with a commercial isp solution, static IP, good IT hardware maybe, but from what I know at this exact moment if a client needed hosting I'd always go straight to renting a vps

I was more of a junior dev at the time, so maybe I was just inexperienced, but I don't miss it at all. In theory I agree with what you're saying, but deploying a Dockerfile to something like Google Cloud Run is just a heck of a lot easier. Yeah, I'm paying more than I would be managing my own VPS, but I think this is more than offset by the dev hours saved.

- physical hardware has trouble, e.g. fan failure -> my VM gets live migrated to a different host, I don't notice or care
- physical hardware explodes -> my VM restarts on a different host, I might notice but I don't care

Disaster planning is a lot easier with VMs (even with pets not cattle)

For a beginner, the cheapest ones get the work done

I am sure that as cloud computing evolves, these offerings will become more common.

There is another aspect of cloud computing. Medium to large corporates count cloud computing as a single-digit percentage in their cost calculations. This means that the decisions taken by managers and teams often optimize for reliability and scalability (to be put on their presentations) rather than "is my setup costly or cheap?"

My employer adopted cloud as a business/financial play, not a religious one. We often land new builds in the cloud and migrate to a data center if appropriate later

The apps on-prem cost about 40% less. Apps that are more cost effective in cloud stay there

I think it's the case that AWS/GCP/Azure are not very cost-competitive offerings in Europe. What I'm not seeing is evidence of that for the US

For the same spec, sure. I think virtual machines make sense at both ends: either dynamic scalability for large N is important, or you only actually need a small fraction of a physical box. Paying 45/mo for something that runs fine on 5/mo isn't sensible either, and the smaller instance gives you more flexibility by not ganging things together just to use up your server.

Keep backups in any case. Preferably on another provider or at least in a different physical location. And, of course, test them

And if you are managing a good backup regime, and monitoring your data/app anyway, is monitoring drives a significant extra hardship?
--
[1] in fact if you automate the restore process to another location, which I do for a couple of my bits, then you can just hit that button and update DNS when complete, and maybe allocate a bit more RAM+cores (my test mirrors are smaller than the live VMs as they don't need to serve real use patterns)

Exactly what I do for myself and my clients. Saves tons of dosh

Even if I did want to update, it's just a case of pulling the latest version into the docker-compose template and re-running the ansible playbook. Obviously if the upgrade requires more then so be it, but it's no different to any other setup work wise

Probably the only thing I _need_ to do which I do manually is test my backups. But I have a script for each project which does it so I just SSH on, run the one-liner, check the result and it's done. I do that roughly once a month or so, but I also get emails if a backup fails


So it can be no time at all. Usually it's probably 1-2 hours a month if I'm taking updates on a semi-regular basis. But that will scale with the more things you host and manage

In other words, the only difference is where the ansible inventory file comes from. Either it's a static list of IPs, or it comes from terraform
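Concretely (host names, IPs and the terraform output name below are made up):

    # physical boxes: inventory/physical.ini is a hand-maintained list, e.g.
    #   [web]
    #   web1 ansible_host=203.0.113.11
    #   web2 ansible_host=203.0.113.12
    # cloud boxes: the same shape, generated from terraform state
    terraform output -json web_ips | jq -r '.[]' > inventory/devcloud.ini
    # the playbook run is identical either way
    ansible-playbook -i inventory/physical.ini site.yml
    ansible-playbook -i inventory/devcloud.ini site.yml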

If you want ECC RAM, that appears to be 60/month, and it also steps up to a more powerful 8-core CPU

Regardless, if we're talking about a "full production environment and a duplicate staging/standby environment" (to quote the person you replied to), then 60/month * (2 or 3) is still dirt cheap compared to any startup's AWS bill that I've seen

Use cases vary, but I tend to agree that AWS/GCP/Azure is not the answer to every problem

For someone who can fit their application onto a $4 VPS, that's obviously going to be cheaper than anything bare metal, but the cloud scales up very expensively in many cases. Bare metal isn't the answer to every problem either, but a lot of people in the industry don't seem to appreciate when it can be the right answer.