= Migrating a cPanel Web Hosting Server to Google Cloud =
*Using Google Cloud Storage, Google Compute Engine, and Cloud DNS to host static and dynamic websites from an old cPanel hosting server.*
In this article I will show how to move static websites and dynamic applications from a cPanel server to Google Cloud Platform (GCP). My goals are to reduce my personal hosting cost, move my sites to a secure, up-to-date environment, and migrate away from my current Linode cPanel VPS to GCP.

= Taking inventory =
Since migrating away from cPanel is typically a manual task, I will focus on each domain's public_html directory and migrate any databases by hand. At this step I'll determine what's staying and what's going. On my source cPanel server I will look for all active domains:

cat /etc/userdomains
ls -al

mysql> show databases;

Now is a great time to start thinking about which sites you can shut down, archive, or discontinue. In my case none of these sites generate any income, but they are sites I've had for years and I am not ready to get rid of them yet.
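The userdomains lookup above can also be scripted for a quicker inventory. Here is a minimal sketch; the helper name is mine, and it assumes cPanel's /etc/userdomains format of one `domain: user` pair per line with a `*` wildcard entry to skip:

```shell
#!/bin/sh
# Sketch: list "domain -> account" pairs from a cPanel userdomains file.
# The file format is "domain: user" per line; the "*" entry is a wildcard
# catch-all that we skip. Pass a path, or default to /etc/userdomains.
list_domains() {
  awk -F': ' '$1 != "*" {print $1 " -> " $2}' "${1:-/etc/userdomains}"
}
```

Running list_domains on the source server prints one line per active domain, which makes a handy checklist for the migration plan.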

= Identify destination services and priorities for the migration =
This exercise helps you prioritize and organize your migration. I'm taking stock of what purpose each site serves, determining which I would like to retain, and deciding where each best fits on GCP. I used a template like the one below to help me walk through and plan out my migration:
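The original template image did not survive into this copy, so here is a sketch of the kind of worksheet I mean; the columns and example rows are illustrative, not my actual inventory:

```
Domain                 Purpose            Keep?  Destination            Priority
---------------------  -----------------  -----  ---------------------  --------
www.example-one.com    Static HTML site   Yes    Cloud Storage bucket   Low
www.example-two.com    PHP app + MySQL    Yes    Compute Engine (LAMP)  High
old.example-three.com  Abandoned project  No     Archive to Coldline    N/A
```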

= Source server preparation =
== gcloud SDK installed and on source cPanel server ==
I will be using the Google Cloud SDK to transfer files securely, so first up I'll need to install the SDK on my source cPanel server.

Installing the Google Cloud SDK

I had a few issues installing the gcloud SDK on my out-of-date cPanel server. Since the gcloud SDK requires Python 2.7 or 3+ and I had 2.6, I had to fix the following:
**Outdated Python (2.6).** Fix: install Python 2.7.

**Unable to gcloud auth login due to issues with sqlite.**
gcloud auth login
Go to the following link in your browser:
Enter verification code:
ERROR: gcloud crashed (OperationalError): unable to open database file
Fix: install sqlite and recompile python2.7
yum install sqlite-devel
cd /usr/src/Python-2.7.18
./configure --prefix=/usr/local/bin; make; make install
My python2.7 is located in /usr/local/bin.

**Set env variable to let cloud SDK know where Python 2.7 is located**
export CLOUDSDK_PYTHON=/usr/local/bin/python2.7
After fixing the above points I was able to successfully run gcloud auth login and gcloud init to authenticate to GCP and select the project I want to use for this migration.

= Examine source server directories =
Since I am only transferring 6 sites from my Linode cPanel server, I can easily examine each home directory to see what I'm working with. I'll start by checking how big each public_html directory is and whether there is anything I can clear out to make the transfer quicker.

du
166M 

Here I am looking for anything in the GB range that may take a while to transfer.
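A quick sweep of every account's docroot surfaces the large transfers in one shot. A minimal sketch, assuming the standard cPanel layout of /home/&lt;user&gt;/public_html; the helper name is mine:

```shell
#!/bin/sh
# Sketch: print a human-readable size for each account docroot under a base
# directory (cPanel keeps them at /home/<user>/public_html). Anything in the
# GB range is worth pruning before the transfer.
docroot_sizes() {
  base="${1:-/home}"
  du -sh "$base"/*/public_html 2>/dev/null
}
```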

= Site verification =
For every site that we plan to host on a Google Cloud Storage bucket, I'll need to verify domain ownership by modifying the DNS zone. If you do not verify your domain before creating a domain-named bucket, you'll receive the following error:
gsutil mb gs://www.jordanaandmike.us
Creating gs://www.jordanaandmike.us/...
AccessDeniedException: 403 The bucket you tried to create requires domain ownership verification. Please see https://cloud.google.com/storage/docs/naming?hl=en#verification for more details

Follow the docs here: Domain-named bucket verification. You can do this ahead of time or during your migration. If you have never done this before, you'll need access to wherever DNS is hosted for your domain so you can add a TXT record, or you'll need to upload a file to the site's root directory.

Make sure you are an owner of the site in Webmaster Central if you plan to use domains with Google services. After you verify your domain with an HTML file upload or a TXT record DNS zone edit, you can then use the domain with Google services such as Cloud Storage.
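For reference, the verification record is just a TXT entry at the root of the zone. A sketch of how it looks in zone-file notation; the token string below is a placeholder, not a real one, since Webmaster Central issues you your own:

```
@   3600  IN  TXT  "google-site-verification=EXAMPLE_TOKEN_FROM_WEBMASTER_CENTRAL"
```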

If you are using custom nameservers (perhaps currently pointing to a cPanel server), you will need to change the DNS server settings from custom back to your registrar's servers so you can modify the DNS zone there, or modify the zone on the hosting server.


*Changing from custom nameservers to registrar nameservers for a site that will be hosted on a Google Cloud Storage bucket so I can modify the DNS zone*

*Here are your alternate options to verify your domain ownership. The recommended method is HTML file upload. I will choose a TXT record edit at my domain name provider since I'll already be updating the DNS zone to point to Cloud Storage.*

*Verify my domain with a TXT DNS host record*

*Setup the TXT record for Google domain verification at my registrar*

*After the TXT record is confirmed you'll receive a message like this. Now you can use your domain with Google services.*
= Tabs / sessions to have open for your migration =
- Your registrar (in my case it is eNom; for you it may be Namecheap, GoDaddy, etc.)
- Google Webmaster Central
- Google Cloud Console and Cloud Shell
- SSH session to source cPanel server
= Migrate static websites to Google Cloud Storage =
== Environment differences and limitations ==
Google Cloud Storage buckets are a great solution for basic static HTML websites because they are low cost and scale with no effort. Just note that you are moving from websites hosted on a web server to websites served by Google Cloud Storage. A few things I noticed that are different:
**Issue: No HTTPS support.** Cloud Storage only supports HTTP via CNAME. If you want to serve your content via HTTPS, you will need to use a load balancer or use Firebase Hosting instead of Cloud Storage.

**Issue: No public directory listing.** Web servers can serve directory contents without an index page; for example, www.website.com/files/ would return a page from the web server listing all of the files in /files/. Cloud Storage does not support public directory listing.

**Solution 1:** All of the directory listing scripts I have found are in PHP, which will not work when hosting on GCS. Use the Cloud Storage browser instead to browse your files.

**Solution 2:** Use Cloud Storage FUSE to mount your bucket as a file system. Now you can view/upload/download files in your bucket locally instead of via a web server directory listing.

PHP and dynamic sites will need to be hosted on Google Cloud services with a web server.

Starting the migration with one of your lower-priority, non-critical domains will get you familiar with the transfer process. Since 5 out of my 6 domains are planned to land on Google Cloud Storage, I will go ahead and test moving a static site first. I'll follow this doc: Hosting a static website.

= Transfer process for static websites to Cloud Storage =
== 1: DNS Configuration — Point my registrar hosted DNS zone to Cloud Storage ==
Depending on the criticality of your site, you may want to set up the Cloud Storage bucket and move files first. In an enterprise or business setting you would likely move files first. I will follow the docs in order for this walkthrough since the sites I am moving are low priority.

Create a CNAME at the registrar DNS zone for your domain to point to c.storage.googleapis.com

In my case for one of my domains:
Hostname: www.jordanaandmike.us
Record type: CNAME
Address: c.storage.googleapis.com


Here is how my DNS zone looks after the CNAME update, URL redirect (for root domain), and the domain ownership TXT record addition for verification

Note: the www host CNAME and the URL redirect are required for your domain to resolve via www.yourdomain.com (CNAME) and yourdomain.com (URL redirect). If you want the root domain to resolve (non-www, just domain.com), make sure you set a root domain alias, sometimes referred to as an ANAME or ALIAS record. More on root domains and CNAMEs in this lovely article from Dominic Fraser here.
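Put together, the zone ends up with records along these lines. This is a sketch in BIND zone-file notation with a placeholder verification token; the registrar UI expresses the same thing, and the root-domain redirect is a registrar feature rather than a standard record type:

```
www   3600  IN  CNAME  c.storage.googleapis.com.
@     3600  IN  TXT    "google-site-verification=EXAMPLE_TOKEN"
; plus a registrar-level URL redirect from the root domain to www
```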

== 2. Create a bucket for the site that matches the CNAME hostname. ==
Assuming you have already verified your domain name you can create a domain-named bucket. If not, follow the domain ownership verification docs here

gsutil mb gs://www.jordanaandmike.us
Creating gs://www.jordanaandmike.us/...

== 3. Copy site files and directories ==
gsutil rsync -R . gs://www.jordanaandmike.us
== 4. Make files publicly accessible ==
gsutil iam ch allUsers:objectViewer gs://www.jordanaandmike.us
== 5. Set the main page suffix and 404 page ==
This sets your index page file, i.e. what is served when someone reaches the domain name. Without this gsutil web set command, only an XML listing will be returned when accessing the domain. We'll also set a 404 page for when a page is not found.

gsutil web set -m index.html -e 404.html gs://www.jordanaandmike.us
Setting website configuration on gs://www.jordanaandmike.us/...

== 6. Wait a little while for DNS propagation. This can take from a few minutes to a few hours. ==
== 7. Verify your dns changes have propagated with your favorite DNS lookup tool. There is a dig tool in the G Suite Toolbox to verify your changes. ==
https://toolbox.googleapps.com/apps/dig/#CNAME/
Check from another workstation or location if you are not seeing changes on your end. Don't panic; just give it time.

ping jordanaandmike.us
PING jordanaandmike.us (98.124.199.121) 56(84) bytes of data.
64 bytes from 98.124.199.121 (98.124.199.121): icmp_seq=1 ttl=231 time=91.6 ms
== 8. Check your domain and verify it loads OK from Cloud Storage ==
Read through the static website examples and tips for additional configuration options

Note that Cloud Storage caches files, so if you are making rapid changes to an HTML page, it may take some time for those changes to be reflected.

== 9. Clean up source server ==
Use the cPanel script /scripts/removeacct to remove the account on the source server once the domain resolves to Cloud Storage:

/scripts/removeacct jordanaandmike
Are you sure you want to remove the account "jordanaandmike", and DNS zone files for the user? [y/N]? y
Running pre removal script (/usr/local/cpanel/scripts/prekillacct)……Done
Collecting Domain Name and IP……Done
Locking account and setting shell to nologin……Done
Killing all processes owned by user……Done
Removing Sessions……Done
Removing Suspended Info……Done
I will repeat steps 1–9 for my other domains that are planned for Google Cloud Storage.
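Since steps 2 through 5 are the same four gsutil commands for every domain, the repetition can be scripted. This sketch only prints the command sequence for review rather than running it; the helper name and example domain are mine:

```shell
#!/bin/sh
# Sketch: emit the Cloud Storage migration commands (steps 2-5) for one
# domain so they can be reviewed before being run for real.
emit_migration_cmds() {
  domain="$1"   # domain-named bucket, e.g. www.example.com
  docroot="$2"  # local site files, e.g. /home/user/public_html
  echo "gsutil mb gs://${domain}"
  echo "gsutil rsync -R ${docroot} gs://${domain}"
  echo "gsutil iam ch allUsers:objectViewer gs://${domain}"
  echo "gsutil web set -m index.html -e 404.html gs://${domain}"
}
emit_migration_cmds www.example.com /home/user/public_html
```

Once the printed commands look right for a domain, piping the output to sh runs the sequence for real.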

= Migrate dynamic websites and applications to Compute Engine =
My goal is to migrate my cPanel server to Google Cloud at the lowest cost possible. My low-traffic PHP site will be transferred to a web server and MySQL server on a Compute Engine instance. I could host the DB on other services such as Cloud SQL, but I am looking for the lowest cost possible.

== Make staging bucket for transfer ==
I will be using GCS as the staging area for my transfer. So I will transfer from source -> GCS -> destination.

gsutil mb gs://virtualgoldstar
Creating gs://virtualgoldstar/…
Note that I am not creating a domain-named bucket here (as I did for the static sites) since I am just using GCS as a middle stage for the transfer. I could have used SFTP or transferred another way, but since I was already using GCS I figured I would continue. Plus, this also gives me a backup of the site that I can put on the Coldline storage class and pay pennies for a year.

== Copy files from source to staging bucket ==
gsutil rsync -R . gs://virtualgoldstar
Operation completed over 1.5k objects/103.8 MiB

== Backup source DB and copy to staging bucket ==
mysqldump virtual_funingimage > virtual_funingimage.sql
gsutil cp virtual_funingimage.sql gs://virtualgoldstar
Copying file://virtual_funingimage.sql [Content-Type=text/x-sql]…
/ [1 files][ 84.3 KiB/ 84.3 KiB]
Operation completed over 1 objects/84.3 KiB

== Setup destination server ==
I will use the LAMP stack Google Click to Deploy image as it will have everything I need pre-configured and ready to go. This is a Google Click to Deploy image from the Marketplace, so I can trust it.

== Copy files from my staging bucket to destination server ==
The httpd root directory is /var/www/html/ in this Debian image, so I will copy files there:

sudo gsutil cp -r gs://virtualgoldstar/* /var/www/html/
Make sure to add a * after your bucket name to copy the files into the destination directory. If you leave off the *, it will create a directory with the bucket name and you'll have to move all files and directories by hand.

== Create database, DB user, grant permissions, and import database ==
The MySQL root password for the LAMP Google Click to Deploy image can be found under Deployment Manager's deployments in the Google Cloud Console.

sudo mysql -u root -p
mysql> CREATE DATABASE virtual_funingimage;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE USER 'virtual_root'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'virtual_root'@'localhost';
Query OK, 0 rows affected (0.00 sec)
mysql -u root -p virtual_funingimage < virtual_funingimage.sql
Enter password:
$
== Fight with my destination server's PHP configuration for a day ==
My source server had PHP 5.4 and my destination PHP 7. The site I want to move will not work on PHP 7, and a lot of changes would be needed to get it working. I'm a decent sysadmin but not a decent PHP dev. I'm now at a crossroads: I could modify my website code for PHP 7, or downgrade the default PHP version on my Debian Click to Deploy LAMP server and get PHP 5 up and running. Since I am trying to do this in the shortest amount of time and at the lowest cost possible, I will run PHP 5.6 on my destination server even though it is EOL. I am aware of the risks and am only doing it to get my site up and running ASAP.

I followed this Cloudwafer article on how to run multiple php versions on Debian

php -v
PHP 5.6.40-29+0~20200514.35+debian9~1.gbpcc49a4 (cli)
Copyright © 1997–2016 The PHP Group
Zend Engine v2.6.0, Copyright © 1998–2016 Zend Technologies
After downgrading PHP on my destination server, I was still unable to get PHP pages to load.

Checking the apache2 error log in /var/log/apache2/error.log and in GCP Logging, I kept seeing segmentation faults whenever trying to load a PHP page:
[Tue Jun 09 13:01:22.821130 2020] [core:notice] [pid 25665] AH00052: child pid 25667 exit signal Segmentation fault (11)
[Tue Jun 09 13:01:22.822031 2020] [core:notice] [pid 25665] AH00052: child pid 25668 exit signal Segmentation fault (11)
After some searching, it turns out segmentation faults in PHP are typically related to a PHP module, so I decided it was best to compare the PHP modules and php.ini settings on my source and destination.

php -m
[PHP Modules]
bcmath
calendar
Core
ctype
curl
date
dom
ereg
filter
ftp
..

I did a quick comparison of the php -m output from my source and destination in Google Sheets:
*Comparing php modules in my source and destination servers*
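The same comparison can be done on the command line instead of in a spreadsheet. A sketch, assuming the `php -m` output from each server has been saved to a text file; the helper and file names are mine:

```shell
#!/bin/sh
# Sketch: given saved `php -m` output from each server, print the modules
# present only on the destination. Those are the candidates to disable.
extra_modules() {
  sort "$1" > /tmp/_src_sorted   # source server module list
  sort "$2" > /tmp/_dst_sorted   # destination server module list
  # comm needs sorted input; -13 keeps lines unique to the second file.
  comm -13 /tmp/_src_sorted /tmp/_dst_sorted
}
```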
Using phpdismod, I removed all of the modules that were not on my source server:
sudo phpdismod exif fileinfo gettext igbinary imagick memcached mhash msgpack mysqli pcntl PDO pdo_mysql pdo_sqlite readline shmop sysvmsg sysvsem sysvshm wddx xdebug xsl Zend Opcache

After making sure that the php -m output matched on my source and destination, I continued to monitor the error log and troubleshoot. I was about to try going back to PHP 7 with this article and modifying my application's code when I came across this output while trying to disable PHP 5.6:
sudo a2dismod php5.6
Module php5.6 disabled

Processing triggers for systemd (232-25+deb9u12) …
Processing triggers for php5.6-fpm (5.6.40-29+0~20200514.35+debian9~1.gbpcc49a4) …
NOTICE: Not enabling PHP 5.6 FPM by default
NOTICE: To enable PHP 5.6 FPM in Apache2 do:
NOTICE: a2enmod proxy_fcgi setenvif
NOTICE: a2enconf php5.6-fpm
NOTICE: You are seeing this message because you have apache2 package installed.

sudo a2enmod proxy_fcgi setenvif
Considering dependency proxy for proxy_fcgi:
Enabling module proxy
Enabling module proxy_fcgi
Module setenvif already enabled
To activate the new configuration, you need to run:
  systemctl restart apache2

sudo a2enconf php5.6-fpm
Enabling conf php5.6-fpm.
To activate the new configuration, you need to run:
  systemctl reload apache2

sudo systemctl reload apache2
After modifying my modules I may have forgotten to reload the apache2 configuration. The error messages reminded me to do so, and after a systemctl reload apache2 my PHP 5.6 configuration was working for my PHP application. Hooray!
== Take a snapshot of working instance ==
This is a great time to take a backup (snapshot) of my working instance configuration. This way, if there are any issues with this server, I can restore the last working version.

== Create alert policy for instance ==
While we are creating a backup, let's set up alerting. Since this site is hosted on a single server, not made redundant with a GLB or managed instance group, I'd like to know if it goes down.

== DNS Configuration — Setup Cloud DNS ==
Cloud DNS costs me about $0.21 per month with my existing DNS zone, so I will point my domain name to Cloud DNS and set up the zone to point to my Compute Engine instance via an A record.
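The resulting zone entries look something like the sketch below, in zone-file notation. The IP is a documentation-range placeholder for the instance's external address (worth reserving as a static IP so it survives restarts), and the www CNAME is an assumption about how you'd want the www host to resolve:

```
jordanaandmike.us.      300  IN  A      203.0.113.10   ; placeholder: use your instance's external IP
www.jordanaandmike.us.  300  IN  CNAME  jordanaandmike.us.
```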

= Things I missed.. =
== .htpasswd ==
My 2015-era PHP site has an admin area that uses an htpasswd file for authentication. By only migrating the public_html directory I missed this, so I needed to recreate it for my admin area to work.

mkdir /home/virtual/.htpasswds/public_html/admin/
sudo htpasswd -c /home/virtual/.htpasswds/public_html/admin/passwd mike
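For reference, the Apache directives that consume this password file look something like the following sketch. On cPanel this block normally lives in the admin directory's .htaccess, and the AuthName label is whatever you chose:

```
AuthType Basic
AuthName "Admin area"
AuthUserFile /home/virtual/.htpasswds/public_html/admin/passwd
Require valid-user
```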
== Directory ownership ==
My application creates files from user input. During my migration, all of my application's files ended up owned by root. In order for my application to work, I needed to change two directories to be owned by www-data:
sudo chown www-data creating_image/
= Cost comparison =
My cost at Linode was: $38.50/month

- Additional IPv4 Address, 2020-05-01 to 2020-06-01, rate 0.0015: $1.00
- Linode 6GB, 2020-05-01 to 2020-06-01, rate 0.05: $30.00
- Backup Service for Linode 6GB (pending upgrade), 2020-05-01 to 2020-06-01, rate 0.012: $7.50
I also decided not to renew my cPanel license, which was $20/month for 5 domains.

So to run these few websites on a VPS with cPanel I was paying around $58/month.

I will update this article in a month to compare costs after a full month running on GCP. I estimate it will be around $40/month, or about $20/month less.

Thanks for reading!