Nobody seriously uses AWS/GCP/Azure just to have a few VMs or dedicated servers. If someone can run their entire workload on, say, Hetzner without much trouble, they shouldn't have been on any of the other clouds in the first place, because they are certainly overpaying.

Edit: to clarify, I unfortunately do know of companies that use the big three as simple VPS providers, but everyone seems to agree that this is a waste of money, which is one of my points. It's also why comparing the big clouds with Hetzner or any other standalone VPS/dedicated-server provider makes no sense: they serve different use cases.

I think you're seriously underestimating the number of cloud customers who did a simple lift-and-shift migration. (The most shocking one: a system that might peak at 5 hits per second during the busy end-of-month period, spread across several pods on a GCP Kubernetes cluster.)

I did exactly this at a previous startup. Admittedly that was 10 years ago, but moving from racked infrastructure to AWS ended up costing half as much for twice the infrastructure (we built out full off-site redundancy at the same time).

Most of my clients do this - just EC2 on AWS. Of course my experience may not be representative, but it's definitely not "nobody". I believe most do it because AWS/Azure is the "safe choice". Choosing AWS/Azure is the modern version of "nobody ever got fired for buying IBM".

I recently tried Hetzner myself and I'm loving the experience so far. I know I'm comparing apples and oranges here, but: compared to AWS, Hetzner's UI is fast and simple, and the pricing is friendly too. Even their invoices are clear and easy to understand.

If they're going to do that, why not at least use Lightsail? [1]

Not every business considers that a risk worth mitigating, but some do.

I know the cloud makes sense, but not like this.

Well, anything without huge traffic and requirements does, and in that case the major cloud vendors are still cheap and simple enough for those use cases. Hetzner seems to fit the customer who is "not big enough to get serious discounts and support, but big enough to be paying a sizeable cloud bill", which is fine.

[1] https://aws.amazon.com/lightsail/

Plenty of companies and individuals host workloads that would be better served by dedicated hardware than by EC2, because "cloud".

> Hetzner without much trouble, they shouldn't have been on any of the other clouds in the first place, because they are certainly overpaying

The ability to provision, deprovision, clone, load-balance and manage things without talking to a human, waiting for hardware, or even really understanding in detail what is going on (yes, that's bad, but still) is a big reason the cloud is popular. Many dedicated-server hosts have made a lot of progress on this front.

It does happen. They build some software, deploy it on VMs, and say the software uses a cloud database service, which removes the hassle of maintaining backups, standbys, point-in-time recovery, and security of data at rest.

I have a few shell scripts that do all of that on Hetzner, but I can imagine organizations with enough money that they don't care about the price, as long as someone else takes care of their data.

They're already paying for the cloud, plus someone to manage their cloud stuff. I'd bet they'd pay half of that if you handed them the scripts.

I think that just shows how absurd these cloud providers really are, when you can handle it by writing a few scripts.

Believe me, I would ;) I adapt them to the specific products I build for my clients. It's just not worth my time to publish them in a generic form - suddenly I'd have to satisfy the countless specific constraints and requirements of a generic user.

Nice to keep seeing lately how great this company is, from someone who used to spin up lil' Hetzner servers just for fun.

I know you've never received such a "massive" attack, but $5 is all it takes to knock a Hetzner server offline (assuming you don't know how to do it yourself). https://www.cloudflare.com/products/cloudflare-spectrum/ https://krebsonsecurity.com/2018/04/ddos-for-hire-service-we..

Should be plenty:

- EX44: Intel Core i5-13500 / 64 GB / 2x512 GB NVMe - from €44 [2]
- EX101: Intel Core i9-13900 / 64 GB / 2x1.92 TB NVMe - from €84 [3]

[1] https://www.hetzner.com/dedicated-rootserver/ax52
[2] https://www.hetzner.com/dedicated-rootserver/ex44
[3] https://www.hetzner.com/dedicated-rootserver/ex101

- EX101: Intel Core i9-13900 / 64 GB / 2x1.92 TB NVMe - from €84
- AX101: AMD Ryzen 9 5950X / 128 GB / 2x3.84 TB NVMe - from €101

Increasing the RAM to 128 GB, i.e. two DIMMs per channel, reduces memory speed, and more severely on AMD (DDR5-3600) than on Intel (DDR5-4400). Overclocking the memory the way you would on a gaming PC is not acceptable in a server.

However, my first one rebooted randomly all the time and support wasn't very helpful. They told me to rent another one, which I did. The second one reboots randomly about once a year. I guess the first one went to the auction and is still happily rebooting away.

Hetzner feels like a hard-discount cloud provider. For non-critical workloads on a small budget I still prefer them over AWS or Azure.

I asked them about one of these incidents and they said the circuit breaker serving the rack had tripped. I'd guess that's a fairly common cause of this.

The other issue is disk failures. They replace disks very quickly (< 1 hour), but unless you're willing to pay for a brand-new disk, they may install whatever disk they have in stock. Sometimes that turns out to be a unit that is itself on its last legs, and a few months later, guess what happens. Most of the time they give you something reasonable, so it works out in the end.

Hetzner is a discount cloud provider. For the money, I'm basically happy with them. The only other realistic option at a similar price point is self-hosting, and I'm not at all convinced that's worth the hassle.

The first suspicion was a certain brand of RAM, so I asked for the RAM to be swapped; unfortunately that didn't help. Updating the BIOS didn't help either. Then someone found that nohz=off on the kernel command line fixed it, and I ran like that successfully for years. Long after at least one dist-upgrade I remembered this, removed the option again, and the server has stayed stable.

I guess there's no real moral to this story, but at least support was very responsive, and since the root cause was unclear at the time, they didn't hesitate to swap random components when asked. Last Sunday a hard disk in one server failed as well; I asked for a replacement and they did it within 20 minutes of me opening the ticket.

I suppose it's good practice to report such servers to Hetzner.

Not to diminish your experience, but honestly my experience with Hetzner support has generally been surprisingly good. They respond very quickly, and if I provide enough information in the initial request they tend to start fixing my problem right away. Unlike with OVH, I don't feel like I have to call them to get decent service. Hearing that their solution was just "rent another one" is a bit surprising.

Overall a pleasant experience for me, especially considering how cheap the servers are. My only real wishes are dedicated servers in the US or Canada, and maybe something between their unmetered 1Gbps and metered 10Gbps offerings - being able to burst above a gigabit occasionally without paying €1/TB for bandwidth would be nice.

IIRC you get 30 TB per month - so it's not "pay nothing vs. pay from the first TB" - but I could be wrong; I don't have any project yet where 10Gbps makes sense.

They are a discount provider. In my experience, though, problems like this are very rare.

They do come up now and then. I would just order a new server.

At one company I was involved with, Hetzner was used from the very beginning and the architecture was built around it; at some point we calculated the cost savings compared to using AWS or similar.

Hetzner is more hassle, but the question is how much you are willing to pay to remove that hassle, and in what form.

Everything looked fine - temperature, CPU load, and so on. The load was mostly idle. That server was indeed an auction server; the other, non-auction ones have been rock solid.

But that included a platform for access provisioning, monitoring, deployment, automatic replacement and many other things. AWS as a whole really doesn't compare with just getting servers from Hetzner (unless that's all you want, in which case you're overpaying for a lot of things you don't use).

So it sounds like Hetzner is the only company in the world offering these prices, right? What's the catch?
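For the nohz=off kernel-command-line fix described a few comments above, here is a minimal sketch of how such a flag is usually added on a Debian/Ubuntu-style install; the file path and GRUB workflow are standard practice, not something the commenter specified:

```sh
# Hypothetical sketch: add nohz=off to the kernel command line via GRUB.
# Append the flag to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub.
sudo sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 nohz=off"/' /etc/default/grub

# Regenerate the GRUB config and reboot into the new command line.
sudo update-grub
sudo reboot

# After the reboot, confirm the flag is active.
grep -o 'nohz=off' /proc/cmdline
```

Removing the flag later (as the commenter eventually did) is the same edit in reverse, followed by another update-grub and reboot.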
You can get server-grade hardware from them, but the price difference versus other providers is not as dramatic. Yes, non-ECC RAM is an issue, but that's an easy upgrade on the AMD servers.

For €63 you get a Ryzen 7 7700 (Zen 4, 8 cores, 16 threads) box with 64 GB of ECC RAM and 2x1 TB NVMe SSDs. Google Cloud's n2d-standard-16 has 8 cores (16 vCPU threads, Zen 2 or Zen 3) and 64 GB of ECC RAM, no storage, for $550/month. Yes, it may not be a perfect comparison, but it's also 8x the price - oh, and Google charges you $0.085/GB for bandwidth that Hetzner gives you for free. Even Google's spot pricing is more than twice the cost.

I do agree that non-ECC RAM is an issue, but if you're willing to use the AMD servers, fixing it becomes very cheap.

[1] https://www.youtube.com/watch?v=5eo8nz_niiM

We use a mix of SYS and Hetzner here and have found them both excellent and very comparable.

ECC memory on the cloud offering? I'd like to assume they use AMD CPUs (consumer-grade ECC support; everyone should do this), ECC RAM, and at least mirrored storage. I'd really love to see those basics confirmed, though.

None of this is lock-in. Postgres is pretty much the same whether you manage it yourself or have Scaleway, AWS, or OVH manage it for you. Functions can take a special format (Lambda), but almost everyone has standardized on containers-as-a-service (KNative/OpenFaaS).

For me there doesn't seem to be one. I've generally gotten very good and fast support, even on auction servers (whose pricing is even crazier than the linked ones - e.g. I pay €40/month for 40 TB of storage plus a modern i7 and 64 GB of RAM).

The real "catch" is that the product range is much more limited; it's not the AWS one-stop shop where you can rent 8x A100s in a dozen datacenters while also having them manage your database and a billion other things. But if you just need lots of CPU, RAM, or storage, don't want to pay exorbitant bandwidth fees, and Europe works for you, they're fantastic.

> So it sounds like Hetzner is the only company in the world offering these prices, right?

OVH isn't cheap in general, but they have plenty of cheap offerings, particularly on their SoYouStart/Kimsufi lines [1], more variety in datacenters - including Singapore and Australia, depending on your needs in Asia/APAC - and possibly better DDoS mitigation than Hetzner.

LeaseWeb is also very cheap. Their public pricing on the main website may look a bit expensive, or at least not Hetzner-level cheap, but they seem to offer big volume discounts if you order a decent number of servers. For example, through a reseller [2] I got 100 TB of "premium" bandwidth @ 10Gbps, a Xeon E-2274G, 64 GB RAM, 4x8 TB HDDs, and a 1 TB NVMe SSD in Amsterdam, which I use as a seedbox, for something like €60.

Another semi-budget provider worth mentioning in Asia, depending on your needs, is Tempest. I believe they're owned by Path.net, and they have better DDoS mitigation than most other providers without costing a fortune; in Tokyo, $140 gets you an E3 1240v2 + 16 GB RAM and $200 gets you a Ryzen 3600X + 32 GB RAM, both at 10Gbps unmetered. Not a great option for someone who needs a lot of hardware, but if you need high bandwidth with decent specs in Asia, it's not bad.

[1]: Worth noting that, while unmetered, SYS is usually capped at 250Mbps and Kimsufi at 100Mbps. You occasionally get lucky and your server magically has unrestricted gigabit, but for guaranteed high-bandwidth servers the main OVH site is the only option.

[2]: I'm using Andy10gbit, which works fine for my needs - e.g. I don't need to be able to reinstall the OS 24/7 or get instant support, since it's only used for torrenting. It would be a terrible choice for a business, though, since I wouldn't want to depend on some guy on Reddit if something went seriously wrong. WalkerServers is another example of an ultra-cheap LeaseWeb reseller.

Their service has been impeccable and their servers just run.

I've been running a k8s cluster on Hetzner for a while now, and the flexibility at that price is exactly what I want from a hosting provider! Now, with this addition, Hetzner closes another gap that pushes projects to spend thousands on the enterprise clouds. So I'm not just happy, I'm proud of them for continuing to innovate!

I used to work on a team that rented dozens of servers from them, and we ran into a disk failure almost every other week, which meant creating a support ticket and asking them to swap the drive so we could rebuild the RAID array. They use regular consumer SATA drives, probably fairly old or refurbished or something.

I'm very happy with Hetzner for some of my workloads.

* Though GCP (back when it was just App Engine) wasn't always like this; as internal GAE users at Google, we had to write our own code for everything we expected to fail, retries, fallbacks, and so on.

Obviously, replacing AWS features like an RDS multi-AZ primary isn't as easy and may well be worth the full AWS premium, but it really depends on business size, traffic, in-house experience, and many other factors.

With Hetzner, a failure means your monitoring detects the disk failure and sends you a PagerDuty alert, then you have to look at the alert, figure out what failed, and file a support ticket to get the disk replaced. That takes a few hours, after which you have to rebuild the RAID array and hope no further disk fails, all while running in a degraded state.

(Don't get me wrong, Hetzner is great, I've used them for years, and I strongly recommend them for plenty of scenarios - but the idea that their failure modes and reliability are anything like "the cloud" is wishful thinking.)

On AWS, something is always broken. One of the hundreds of services always has performance issues, degraded availability, or some other problem. On Hetzner, a hard drive, CPU, or stick of RAM in one of the machines gets replaced every few years. Maybe. (That changes as a service grows and scales, but a handful of machines can handle a lot of traffic.)

Over the past decade I've been responsible for millions of dollars of AWS spend. In that time, aside from a couple of major outages that affected the whole world (e.g. the big S3 outage), AWS has caused essentially zero downtime for me - and "one of the hundreds of services always has performance issues or degraded availability" has simply never been true in my experience. I've had hundreds of instances retired - but that was all automated, with no downtime. Over the past 18 months at my current company we've had 100% uptime - not a single AWS incident affecting us-east-2. And since we use ECS and Fargate, we don't have to worry about instance retirement either.

On the other hand, I've also had a number of personal Hetzner servers over the years - and the hardware is _old_. I've had at least 3 hard drives fail on me in the past 8 years. Again, I'd still strongly recommend Hetzner in many cases - I just think it's important to understand the difference in responsibilities, such as hardware-level monitoring.

So I guess you can blame your own team for ordering consumer SATA? [1]

[1] https://www.hetzner.com/dedicated-rootserver/ax52/configurat..

We have a lot of racks, and fan failures are definitely among the rarest. Even personally I've had zero actual failures, just one fan that got noisy.

> Still worth it because it's cheap, but they're cheap because it isn't new, reliable hardware

Not that different from the cloud: they don't buy top-of-the-line servers either, they build their own - just like Hetzner - at the cheapest price per unit of performance. Great German-language report: https://www.golem.de/news/besuch-im-rechenzentrum-so-betreib..
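One comment above describes the Hetzner failure model as "your monitoring detects the disk failure and pages you". A minimal sketch of what such a check could look like on a box with Linux software RAID, assuming smartmontools is installed; the alert endpoint is a placeholder, not a real service:

```sh
#!/bin/sh
# Hypothetical check: alert yourself when mdraid is degraded or SMART is unhappy.
ALERT_URL="https://example.invalid/alert"   # placeholder; point at PagerDuty, email, etc.

alert() {
    curl -fsS -X POST --data-urlencode "msg=$1" "$ALERT_URL"
}

# A degraded md array shows missing members as '_' (e.g. [U_]) in /proc/mdstat.
if grep -q '_' /proc/mdstat; then
    alert "mdraid degraded on $(hostname)"
fi

# Ask smartmontools for each disk's overall health verdict.
for disk in /dev/sd? /dev/nvme?n1; do
    [ -e "$disk" ] || continue
    if ! smartctl -H "$disk" | grep -qE 'PASSED|OK'; then
        alert "SMART health check failed for $disk on $(hostname)"
    fi
done
```

Run it from cron or a systemd timer every few minutes; it only illustrates the "detect and page" step, not a full monitoring stack.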
CPU temperature: 40°C.

Even Hetzner Cloud just works. I don't know how they do it, but it's very cheap.

If you've had 500 servers for a long time and a new script finds 5% of them failing and fires off 25 emails at once, I can understand why Hetzner might prefer a single email. The numbers are made up, but you get the idea.

That's just something you have to do with Hetzner.

I use them and am very happy with the price, reliability, and service. The only arguably bad thing about them: their static IPs aren't always "clean". I've had a couple of cases where the IP I was assigned was on blacklists, and it took some back and forth with their customer service to sort it out (I got a new IP). But other than that, the quality/price ratio is far above GCP, AWS, and the like. I also use OVH; they're quite good too, on the same level as Hetzner.

Isn't that a problem with any provider? You never know who had the IP before you and what they did with it.

For example, I recently tried to migrate a large Exchange setup to Office 365, but their migration assistant simply hasn't been updated to support modern authentication for Office 365 and so on. For some reason their own account migration failed as well.

As for the IPs: yes, that does happen, but it's hardly Hetzner's fault, since the IP you were assigned was previously reclaimed from a "bad actor". When I told my support agent, I got a new one without any fuss.

Edit: ironically, I can't even list my Exchange accounts in OVH. It just keeps loading.

I only became a customer again when their VPS cloud offering launched, and I've actually been recommending it, because it has worked flawlessly for me for years.

No, but I get it; spinning disks in general have failed on me a lot. I think SSDs and NVMe are better at telling you how much life they have left. I don't necessarily see it as a Hetzner-only problem, though, since disks have died on me at other hosts too. I also used to maintain a few "plain old" servers; when you're on bare metal, hard disk failures are simply a fact of life.

Another reason for Kubernetes! They do provide a CSI driver for block storage and private networking for Kubernetes. You can even run the masters on VMs and the nodes on bare metal. Personally I've only run into a few network issues.

Hetzner bare metal has unlimited bandwidth. If you draw the short straw, your box shares bandwidth with some BitTorrent seedbox or someone's video CDN node.

That said, the projects and servers I run are much smaller and haven't reached the scale where heavy workloads would really be needed, racking up bills of thousands per month on GCP. So I think most developers are used to launching their first projects on the free tiers of the big cloud providers, which makes it hard for them to move to their own servers when they need to.

https://www.hetzner.com/sb

For example, I'm running some experiments that need a lot of RAM. Right now you can get a server with 256 GB of RAM there for €60 a month. https://til.simonwillison.net/llms/llama-7b-m2

The channel is also well worth subscribing to.

Servers start at $9 a month. A similar example: dual Xeon - 36 cores / 72 threads - 128 GB RAM - dual 1 TB NVMe - 5 IPs, for $80 a month with $0 setup. With dual 2 TB NVMe it's $100 a month.

I host a few servers there at $40 a month each, with unmetered 1Gbit bandwidth and 5 IPs. A few 1Us and towers. I recently bought a used 1U server from Amazon for $400. It has 48 cores, 96 GB of RAM, and 4x1 TB drives, with a one-year warranty on the components.

Hetzner is stable, but their network is sometimes sketchy.

Just clicked; unfortunately it's out of stock.

> I host a few servers there at $40 a month each

Do you live nearby? Or did you ship them the servers and they installed them?

You can check back; they update the list as server availability changes. Other providers are Dedispec and Joesdatacenter, which may have the stock you're looking for. joesdatacenter.com (Kansas City) has single-server colo for $50 a month.

Didn't find anything via Google, so I'm wondering whether anyone here works somewhere that does this.

I'm used to cloud VMs, where if one dies I can quickly spin up another without any effort (I've never had to contact support or anything like that).

Some failures I've experienced and had to monitor/detect myself: overheating (I told them I was seeing odd readings in the CPU stats), RAID disk failures, or heavy SSD wear [i.e. partial failure, the server kept running, and they swapped the failing disk once I told them]. In most cases, even on the low-cost Kimsufi and SoYouStart offerings, issues were resolved within 1-4 hours, including on weekends and at night. Usually the server may need to be taken down while they work on it. I'm quite happy with that, since I'm very technical on these topics and like getting into the weeds, but with dedicated servers you really do have to do more maintenance/monitoring/planning yourself.

> However, they don't monitor other health issues (how would they, since you're running your own system?), so nothing is done until a "down" state is detected.

My server has a hardware RAID card. I had one incident where OVH contacted me to say one of the drives was having problems and that they would reboot the server at time X to replace it. They did, and the issue was resolved without any request or intervention from me. I had another incident where I was told the motherboard had died. IIRC it died around 1 a.m. my time and was replaced by 5 a.m. my time. They of course powered the system back up for me. I was asleep the whole time; again, this was resolved with zero requests or intervention on my part. Other than that, I can count the times network or power issues made my server unreachable. IMO, a great experience for a very cheap host. All in all: OVH's IPv6 story is laughably bad, and that's the only reason I'd switch hosts if a better one appeared in North America.

But some issues aren't outright failures, and you have to handle them on your end.

These days RAID is mostly software anyway.

IPv6 works fine on my many servers at OVH.

But they do tend to go above and beyond. I've rented multiple servers from them for many years, and once or twice I've had an email from their datacenter team telling me they'd noticed an error LED blinking on one of my servers and proactively offering to schedule a repair intervention. All I had to do was pick a downtime window and let them know. Very slick.

I'd say roughly half of Hetzner's overall value is their quality support.

I showed them the sudden power-loss events in the logs. "It must be a problem with your unsupported OS modifications." OK, I've wiped the machine back to the stock image you provide, and the power-loss events are still there. "Sure, we'll run a stress test for a few minutes. The stress test passed, so it's still your fault!" These events happen at random over the course of a week; a stress test won't show them. Can you move me to another physical machine? "No." This dragged on over several days, at a time when I had an event that depended on the server. I ended up going back to Azure and paying 10x as much, but at least it just worked.

https://i.imgur.com/3DKc9OC.png - I'd never seen this page before when trying to log in. Make of that what you will.

If so, that's one dedicated customer response team!

Server provisioning has always been very fast - same day or next business day.

My experience is a bit dated, but I used to order dedicated boxes from them for our clients, and with Hetzner we always had the best experience. Also the best value.

Then you contact support and specify the disk change; you first deactivate the disk in the RAID (saving the geometry, etc.), they swap the disk, and then you rebuild the RAID onto the new disk. That's it. With SSDs you may not even need to do that anymore.

I imagine that takes time, right? Like not 5 minutes, but 3 hours? So if I were, say, running a SaaS (which shouldn't have more than an hour of downtime a day), would renting just one dedicated server be considered "risky"?

They're all hot-swap disks. You pull the old one and slide in a new one (or, in this case, tell them to do it). The RAID system rebuilds the array in the background over the next few hours. During that window, if it's RAID 5 and another disk fails, you lose data.

mdadm --manage --remove, so your machine doesn't have a fit when the disk is detached.
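Pulling the pieces from the last few comments together, a rough sketch of the usual software-RAID disk swap on such a server; /dev/md0, /dev/sda, and /dev/sdb are placeholders, and the bootloader re-install that a real swap may also need is omitted:

```sh
# Hypothetical sketch: replacing a failed member of a Linux md RAID array.

# 1. Mark the dying disk as failed and remove it from the array, so the
#    machine doesn't have a fit when the drive is physically detached.
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# 2. Note the serial number of the bad disk for the support ticket.
smartctl -i /dev/sdb | grep -i 'serial'

# (Support swaps the physical drive; on Hetzner this is the ticket step.)

# 3. Copy the partition layout from the surviving disk, then re-add the
#    new partition and let md rebuild the array in the background.
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm --manage /dev/md0 --add /dev/sdb1

# 4. Watch the rebuild progress.
cat /proc/mdstat
```

The background rebuild at the end is the "few hours of running degraded" that the comments above refer to.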
Or equivalent. For example, I have loads of stuff on Linode but always make sure I keep backups off-Linode, in case I get a random TOS account shutdown and they stop speaking to me, etc.

IT departments really need to revise their due diligence processes. I wonder how many folks were coerced into a similar migration just to benefit from household-brand credibility.

Does anyone have experience to share with that kind of setup? What's the maintenance like?

I use a single dedicated server that costs ~40 EUR/month, an AX41-NVME, and each runner is a separate user account to allow for some isolation. Depending on your setup, you might need to spend some time adjusting jobs to have proper setup/cleanup and isolation between them (but that's not really Hetzner-specific, just a general issue).

We provision them with ~200 lines of shell script, which we get away with because they are not running a "prod" workload. Don't forget to run "docker system prune" on a timer! Overall these machines have been mostly unobtrusive and reliable, and the engineers greatly appreciate the order-of-magnitude reduction in GitHub Actions time. I've also noticed that they are writing more automation tooling now, since budget anxiety is no longer a factor and the infrastructure is so much faster.

My only issue is that security scanners can't run on self-hosted runners (GitHub refuses the artifact result - so technically they do run, but the results fail to upload).

Do you have any alternatives? I thought Hetzner was fairly unique in their dedicated server offerings (for the price, I mean).

Recent Linux kernels finally support these CPUs (do they have full support?), but if you host a service where you want predictable (and fast) response times, why would you use the mix of both core types? Or would you just turn off the efficiency cores for server-side usage? I'm assuming you don't shoot yourself in the foot by running a strictly single-threaded workflow explicitly pinned to the efficiency cores.

> running a strictly single-threaded workflow explicitly pinned to the efficiency cores

Those cores are slower than e.g. the cores of the (desktop) AMD CPU we tested at the same time (also offered by Hetzner). So it is rather expensive and inefficient to use Intel (desktop) CPUs for server-side applications, as we can only use their performance cores.

When these guys open up dedicated servers in a US region it's going to be huge. Unfortunately, at the moment only the cloud offering is available in the USA, so you're stuck with a bit of latency round-tripping to the EU.

Weird. It seems like they are reading the Origin header or something and just redirecting HN users to the root of the website. Works fine if you copy the link and paste it into a new tab: https://www.hetzner.com/customers/talkwalker

Amazon has done an amazing job of convincing people that their hosting choice is between the cloud (aka AWS) or the higher-risk, knowledge-intensive self-hosting (aka colocation). You see this play out all the time in HN comments. CTOs make expensive and expansive decisions believing these are the only two options. AWS has been so good at this that for CEOs and some younger devops folks and developers it isn't even a binary choice anymore; there's only cloud.

Do yourself, your career, and your employer a favor, and at least be aware of a few things.

First, there are various types of hosting, each with their own risks and costs, strengths and weaknesses. The option that cloud vendors don't want you to know about is dedicated servers (which Hetzner is a major provider of).
Like cloud vendors, dedicated server vendors are responsible for the hardware and the network. (If you go deeper than, say, EC2, then I'll admit cloud vendors do take on more of the responsibility (e.g. failing over your database).)

Second, there isn't nearly enough public information to tell for sure, but cloud plays a relatively minor role in world-wide server hosting. Relative to other players, AWS _is_ big (biggest? not sure). But relative to the entire industry? Low single-digit %, if that. The industry is fragmented; there are thousands of players offering different solutions at different scales.

For general-purpose computing/servers, cloud has two serious drawbacks: price and performance. When people mention that cloud has a lower TCO, they're almost always comparing it to colocation and ignoring (or aren't aware of) the other options.

Performance is tricky because it overlaps with scalability. But the raw performance of an indivisible task matters a lot. If you can do something in 1ms on option A and 100ms on option B, but B can scale better (though possibly not linearly), your default should not be option B (especially if option A is also cheaper).

The only place I've seen cloud servers be a clear win is GPUs.

The primary deciding factor is always security. You simply cannot use any small vendor because of the physical security (or the lack thereof). Unless of course you do not care about security. If a red team can just waltz into your DC and connect directly to your infra, it is game over for some businesses. You can easily do this with most vendors.

The secondary deciding factor is networking. Most traditional colos have a very limited understanding of networking. A CCIE or two can make a real difference. Unfortunately those folks usually work at bigger companies.

The third deciding factor is air conditioning and electricity, in case you are facing an OVH situation. https://www.datacenterdynamics.com/en/opinions/ovhclouds-dat.. (It is really funny, because I had warned them that their AC/cooling solution was not sufficient, and they explained to me that I was wrong. I was not aware of the rest (wooden elements, electricity fuckups, etc.).)

During the year, an article in VO News by Clever Technologies claimed there were flaws in the power design of the site, for instance that the neighboring SBG4 facility was not independent, drawing power from the same circuit as SBG2. It's clear that the site had multiple generations, and among its work after the fire, OVHcloud reported digging a new power connection between the facilities.

The fourth would probably be pricing. TCO is one consideration, after you've made sure that the minimum requirements are met, but only after. So, based on the needs, somebody can choose wisely, guided by the business requirements. For example, running an airline vs. running complex simulations have very different requirements.

From a sales point of view, I agree with you that, for a lot of folks, this might be the main concern. If you're doing B2B or government work this might be, by far, the most important thing to you.

However, this is at least partially pure sales and security theatre. It's about checkboxes and being able to say "we use AWS" and having everyone else just nod their head and say "they use AWS." I'm not a security expert (though I have held security-related/focused programming roles), but as strong as AWS is with respect to paper security, in practice the foundation of cloud (i.e. sharing resources) seems like a dealbreaker to me (especially in a rowhammer/spectre world). Not to mention the access AWS/Amazon themselves have, and the complexity of cloud-hosted systems (and how easy it is to misconfigure them) (1).

About 8 years ago, when I worked at a large international bank, that was certainly how cloud was seen. I'm not sure if that's changed. Of course, they owned their own (small) DCs.

(1) - https://news.ycombinator.com/item?id=26154038 The tool was removed from GitHub (conspiracy theory), but I still find the discussion there relevant.

So, anywhere your workloads or data are physically co-located on the same hardware as someone else's should be automatically disqualified, right?

Doing your career a favor is how we ended up in this situation in the first place. The tech industry had so much free money floating around that there was never any market pressure to operate profitably, so complexity increased to fill the available resources.

This has now gone on long enough that there are entire careers built around the idea that the cloud is the only way - people who spend all day rewriting YAML/Terraform files, or developers turning every single little feature into a complex, failure-prone distributed system because the laptop-grade CPU their code runs on can't do it synchronously in a reasonable amount of time.

All these people, their managers, and decision makers could end up out of a job or face inconvenient consequences if the industry were to call this out collectively, so it's in everyone's best interest not to call it out. I'm sure there are cloud DevOps people who feel the same way but wouldn't admit it, because it's more lucrative for them to keep pretending.

This works at multiple levels too: as a startup, you wouldn't be considered "cool" and deserving of VC funding (the aforementioned "free money") if you didn't build an engineering playground based on laptop-grade CPU performance rented by the minute at 10x+ markup. You wouldn't be considered a "cool" place to work for either if prospective "engineers" or DevOps people couldn't use the opportunity to put "cloud" on their CVs and brag about solving self-inflicted problems.

Clueless non-tech companies are affected too - they got suckered into the whole "cloud" idea, and admitting the mistake would be politically inconvenient (and potentially require firing/retraining/losing some employees), so they'd rather continue and pour more money into the dumpster fire.

A reckoning on the cloud and a return to rationality would actually work out well for everyone, including those who have a reason to use it, as it would force them to lower their prices to compete. But as long as everyone is happy to pay their markups, why would they not take the money?

https://www.svb.com/account/startup-banking-offers

For one, people generally underestimate the performance cost of their choices. And that reaches from app code, to their DB, to their infrastructure. We're talking orders of magnitude of compounding effects. Big constant factors that can dominate the calculation. Big multipliers on top. Horizontal scaling, with all its dollar cost, limitations, complexity, maintenance cost, and gotchas, becomes a fix on top of something that shouldn't be a problem in the first place.

Personally, so far, the best near-equivalent provider I've found that actually offers well-specced machines in North America is OVH, with their HGR line and their Montreal DC. Are there any other contenders? And if not, why not?
What's so hard about getting into the high-spec dedicated hosting space in the US specifically? Import duties on parts, maybe? (I've found plenty of low-spec bare-metal providers in the US, plenty of high-spec cloud VM providers in the US, and plenty of high-spec bare-metal providers outside the US; but so far, no other high-spec bare-metal providers in the US.)

[1] https://servicestack.net/blog/finding-best-us-value-cloud-pr..

We're currently using these at OVH: https://www.ovhcloud.com/en-ca/bare-metal/high-grade/hgr-hci and we really need the cores, the memory, the bandwidth, and the huge gobs of direct-attached NVMe. (We do highly concurrent realtime analytics; these machines run DBs that each host thousands of concurrent multi-second OLAP queries against multi-TB datasets, with basically zero temporal locality between queries. It'd actually be a perfect use case for a huge honking NUMA mainframe with "IO accelerator" cards, but there isn't an efficient market for mainframes, so they're not actually price-optimal here compared to a set of replicated DB shards running on commodity hardware.)

Also, they'll run off with your money if you can't provide an ID after you've already paid. No service, but no refunds either.

But seriously, there's been lots of talk on HN recently about alternatives to the big clouds. This is it - rent a big server and do it all on Linux.

"Request on Hold - Suspicious Activity Detected"

Edit: so I use that time wisely to shitpost about it on HN, then check Trustpilot, and I see: "Unfortunately, based on your description (I need a ticket number or other customer information to find you in our system), you accidentally resembled an abuser." Not a good outward appearance. I'll stick with AWS and paying through the nose.

- stop operating in countries they don't want business from
- treat people equally

What they are doing is:

Is this a business? No. Should we follow any of the practices of HN? I do not think so. My personal website has a more scalable infrastructure than HN.

There is no excuse for being a victim of an algorithm. And I never get this anywhere else! In technology circles I am guilty until proven innocent. That's the difference, the outcome of which is that the technology provider can quite fuck off.

Is anybody aware of anything that's price-competitive in the US (or within a 50ms ping)? [1]

[1] https://www.ionos.com/servers/value-dedicated-server#package..

OVH [1] is not quite as cheap, but I can't really think of anyone else in the area that is totally comparable.
One draw of OVH, Hetzner, etc., for me over the truly small, cheap dedicated-server providers is that they both have pretty decent networks and free DDoS mitigation, which is really nice for things like game servers and such where CloudFlare isn't an option.

OVH's sub-brands like SoYouStart [2] will sell you decently specced dedicated servers starting at around $30 a month in Quebec, which tends to be more than good enough for most of my "US" needs. They do have a couple of datacenters in the United States too, not just Canada (plus quite a few in Europe, one in Singapore, some in Australia, etc.), but I believe the Virginia/Oregon servers aren't available on the cheaper SYS site -- still cheap, though, but not quite $30 cheap.

[1]:
[2]:

(The main downsides compared to OVH proper are that the connection is capped at ~250Mbps, and although all servers have DDoS mitigation, the SYS and Kimsufi servers don't allow you to leave it on 24/7 -- so when you get attacked, it might take a minute or so to kick in, and then it'll remain on for 24 hours, I believe.)

Edit1: missed word; Edit2: people pointed out below that the US locations don't have dedicated servers, cloud servers only.