UPDATED: After moving things to CloudFront to handle the SSL (see this post), I needed to add a line to the `.gitlab-ci.yml` file to make sure the CloudFront cache gets invalidated on each deploy.
My personal website has been hosted on GitLab Pages since last year. I’ve thoroughly enjoyed having it hosted there simply because it was a one-stop shop with CI/CD set up. The only downside I ran into was keeping my SSL certificate up to date: I was using the free Let’s Encrypt service to generate it, and those certificates expire every three months. Needless to say, I got super busy with work and life and didn’t realize my cert had expired, which rendered my site unusable for anyone with strict browser security. I only found this out about three months ago, though truthfully the problem had probably been there much longer.
What to do…
As mentioned, at the time, life was busy. Three months ago, I tried to fix the issue by re-initiating a new certificate and adding it back to my GitLab Pages. Since the last time I did this, the UI and methods in GitLab had changed, so I had to relearn the flow. Long story short, I ended up accidentally deleting my domain and pages from the GitLab account, which ultimately left my site completely dead in the water. For the life of me, I could not figure out how to add the pages back. Again, life was busy, so I just left it, knowing that in a few months I would be able to devote time to fixing the issue and possibly try something new: hosting on AWS.
Time for that DevOps hat
Fast forward a couple of months, and I am now on my sabbatical, graciously given by UserTesting, the company I have worked for for the past six and a half years. This time off has given me the ability to focus on fixing this issue. I decided to move things to AWS and host the site from a static S3 bucket. AWS is a nice one-stop shop: managing everything in one place is super simple, and it turns out to be very cheap.
First things first
In order to allow GitLab to upload the generated files to the S3 bucket, a few things need to be set up first on the AWS side.
The S3 Buckets
The key to creating the S3 buckets is naming them the same as your domain `mydomain.com` and sub-domain `www.mydomain.com`. You don’t actually have to upload the files to both, but instead redirect one of the buckets to the other.
- Naming: Create a new S3 bucket with the name of your domain, including the TLD.
- Options: Leave all the options as their defaults.
- Permissions: Set the public permissions to grant public read access. This allows the outside world to read your site’s content without credentials.
- Static hosting: Drill into the newly created bucket’s Properties tab and enable the Static website hosting option.
- Repeat the steps above for the other domain.
- Note: The only difference for the second bucket is in the static website hosting option. Be sure to select the redirect option to reroute traffic to the main bucket.
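For reference, the bucket setup above can also be sketched with the AWS CLI. The domain, region, and index/error document names below are placeholders, so adjust them to your own setup:

```sh
# Create the two buckets (bucket names must match the domain names)
aws s3api create-bucket --bucket mydomain.com --region us-east-1
aws s3api create-bucket --bucket www.mydomain.com --region us-east-1

# Enable static website hosting on the main bucket
aws s3 website s3://mydomain.com/ --index-document index.html --error-document error.html

# Configure the www bucket to redirect all requests to the main bucket
aws s3api put-bucket-website --bucket www.mydomain.com \
  --website-configuration '{"RedirectAllRequestsTo": {"HostName": "mydomain.com"}}'
```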
The last item to finalize the setup is to make sure the correct bucket policy is set on your main bucket. Inside the bucket, choose the Permissions tab, then click Bucket policy. If nothing is in the policy field, you will need to drop in the following (changing mydomain.com to yours). This allows all objects in the bucket to be read publicly.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mydomain.com/*"
    }
  ]
}
```
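If you would rather not paste the policy into the console, the same thing can be done from the CLI, assuming the JSON above is saved locally as policy.json:

```sh
# Attach the public-read policy to the main bucket
aws s3api put-bucket-policy --bucket mydomain.com --policy file://policy.json
```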
Route 53
Next up is setting the DNS through AWS. This will make your newly created buckets available at the domain you own. Setting this up is very simple using the Route 53 service.
- Hosted zone: Create a new hosted zone using your root domain name, for example `mydomain.com`. Right off the bat, this provides you with the AWS name servers to use wherever your domain is registered.
- Record sets: Next, two record sets need to be created, both of which are A records. Using the root and the www, each should be marked as an alias pointing to the main S3 bucket you created. All other options can remain at their defaults.
- Setting your name servers: In your registrar, you will need to provide the name servers Route 53 gave you. This will allow your domain to point to AWS correctly.
- Note: I got caught out here by copying the trailing `.` that Route 53 provides. It threw an error in my registrar, which never allowed the hookup. Make sure to drop the trailing `.` when entering the name server names.
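The Route 53 steps above can be roughly sketched from the CLI as well. Note that the alias target hosted zone ID below is the one for S3 website endpoints in us-east-1; other regions use different IDs, so treat the values here as placeholders:

```sh
# Create the hosted zone for the root domain (the caller reference just needs to be unique)
aws route53 create-hosted-zone --name mydomain.com --caller-reference "$(date +%s)"

# Add an alias A record pointing the root domain at the S3 website endpoint
aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "mydomain.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z3AQBSTGFYJSTF",
        "DNSName": "s3-website-us-east-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'
```

A second record set for the www sub-domain follows the same pattern.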
CloudFront
Utilizing CloudFront is a great option to make sure your static site is fast. It caches the content from your buckets and serves it to your viewers.
- Create distribution: Select the bucket as the origin; everything else is pre-filled appropriately.
- Optional protocol policy: If you have an SSL certificate you want to set up, I suggest changing the Viewer Protocol Policy to `Redirect HTTP to HTTPS`. This makes sure all your content is always served over SSL.
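A bare-bones distribution can also be created from the CLI. This only sets the origin and root object; the viewer protocol policy and any custom SSL certificate still need to be configured in the console or through a full `--distribution-config` document. The bucket endpoint here is a placeholder:

```sh
# Minimal CloudFront distribution pointed at the S3 website endpoint
aws cloudfront create-distribution \
  --origin-domain-name mydomain.com.s3-website-us-east-1.amazonaws.com \
  --default-root-object index.html
```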
IAM
Last, but not least, we need to set up a way for GitLab to talk to S3 and upload the static files it builds. This is done through the IAM service.
- Create new user: Add a new user with a username like `gitlabuploader` and only give this user programmatic access. Since it will be used in the CI/CD runner, it only needs command-line access via access keys.
- Optional group: If you already have permissions set up for a certain group, you can assign this new user to that group, or just skip this step.
- Attach existing policies: I chose to go this route because there is a policy set up just for access to S3 buckets. Search for the `AmazonS3FullAccess` policy and select it.
- Download the keys: This is a very important step because this will be the only time you have access to the keys, unless you create a new key set. Make sure to download the `.csv` before closing out the creation process. You will use these keys in the GitLab setup.
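For completeness, here is what the same user setup looks like with the AWS CLI (the username is just an example):

```sh
# Create the deploy-only user
aws iam create-user --user-name gitlabuploader

# Attach the managed S3 policy mentioned above
aws iam attach-user-policy --user-name gitlabuploader \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate the access key pair; the secret is only shown once, so save the output
aws iam create-access-key --user-name gitlabuploader
```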
Onto deploying
The other half of this equation was letting GitLab’s CI/CD build the code and deploy it up to my S3 bucket. Thankfully, this is pretty straightforward after a few setup steps.
Granting access to GitLab
Remember that `.csv` that had to be downloaded during user creation in the IAM service? Now is the time to open it up. Navigating to GitLab’s variable settings, Project > Settings > CI/CD > Variables, you can add all the variables and corresponding values GitLab needs in order to deploy.
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_REGION (optional)
- S3_BUCKET_NAME (optional)
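If you prefer to script this instead of clicking through the UI, GitLab’s API can add the same variables. The project ID, token, and key value here are placeholders:

```sh
# Add a CI/CD variable to the project via the GitLab API
curl --request POST \
  --header "PRIVATE-TOKEN: <your-access-token>" \
  --form "key=AWS_ACCESS_KEY_ID" \
  --form "value=<value-from-the-csv>" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables"
```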
Runner script setup
The final step was to set up the `.gitlab-ci.yml` file to not only do the building, but also upload the static assets appropriately. Adjust the yml file like so:
```yaml
image: node:7.2.1
stages:
  - build
  - deploy

variables:
  AWS_DEFAULT_REGION: some-aws-region
  BUCKET_NAME: mydomain.com
  CLOUDFRONT_ID: SOMEIDHERE

cache:
  paths:
    - node_modules/
    - vendor/

buildHexo:
  stage: build
  script:
    - npm install hexo-cli -g
    - npm install
    - npm rebuild node-sass
    - hexo generate
  artifacts:
    paths:
      - public

deploys3:
  only:
    - master
  image: garland/aws-cli-docker
  stage: deploy
  dependencies:
    - buildHexo
  script:
    - aws s3 sync ./public s3://${BUCKET_NAME}
    - aws cloudfront create-invalidation --distribution-id ${CLOUDFRONT_ID} --paths "/*"
  environment:
    name: ${CI_COMMIT_REF_SLUG}
    url: http://${BUCKET_NAME}.s3-website-${AWS_DEFAULT_REGION}.amazonaws.com
```
A couple of things to note in the above example:
- Make sure the variables are set to the correct region of the bucket you are deploying to, and also the correct bucket name.
- I am using a Docker image for the `aws-cli`, whereas other examples may show a `pip install` setup. I tried that route, but the GitLab runner tended to error out on install. Using a pre-built image also saves deploy time.
- I am also using the `aws s3 sync` command instead of `aws s3 cp` because I really only want to sync newer files, not replace all files. This saves not only time but also data transferred to S3, which will eventually incur bandwidth fees.
- Hexo is what builds my site, so that is what you see in the build step, but any static site generator could easily be used instead.
- After the `aws s3 sync`, we must invalidate the CloudFront cache so that it serves out any new files. The CloudFront distribution ID provided when setting up the distribution is what tells the command which cache to invalidate.
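One handy way to sanity check the deploy steps before pushing is to run them locally with your own AWS credentials. The bucket name and distribution ID below are placeholders, and `--dryrun` keeps the sync from actually writing anything:

```sh
# Preview what would be uploaded without touching the bucket
aws s3 sync ./public s3://mydomain.com --dryrun

# Invalidate everything so CloudFront serves the fresh files
aws cloudfront create-invalidation --distribution-id SOMEIDHERE --paths "/*"
```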
A lot of trial and error
Since by day I am not a true DevOps engineer and had only dabbled a bit in the AWS interface, it took me a while to understand all the ins and outs of what needed to be done and where. In the end, though, I am pretty happy with the setup I have created. Although it is not the free Pages setup through GitLab, the price through AWS isn’t too bad: the projected charges for all the services I am using, plus the bandwidth, come to less than $2 this month. Not bad.