S3 Hosting

While preparing for my exams I had purchased this domain – dangilpin.click – to use during the labs for S3, Route 53, CloudFront, etc. At that time I also had three existing web sites running on traditional hosts. I kept this domain to repurpose as a portfolio site, and have since transferred the other three domains to AWS. I expected some content would have to go since I would no longer have PHP. As it turns out, Google Maps, Google Analytics, and my contact forms all rely on JavaScript and remote servers – so they work without modification!

The only thing I had to leave behind was a PHP/MySQL time-tracking app that I didn’t use anymore anyway (iPhone apps do it better now). I will save rebuilding it as a good serverless project!

Now I have four sites served serverless-style via AWS! I was spending on the order of $200 per year in hosting fees; now I will spend only about $30!
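For reference, each site is just an S3 bucket with static website hosting turned on and the files synced up. A minimal sketch from the CLI (using my apex domain as the bucket name, which is what Route 53 alias records expect):

# Create the bucket and enable static website hosting
aws s3 mb s3://dangilpin.click
aws s3 website s3://dangilpin.click/ --index-document index.html --error-document error.html
# Upload the site files and make them publicly readable
aws s3 sync ./site s3://dangilpin.click --acl public-read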

There are a few things to do yet:

  • get the www requests to go to the apex/naked domain name (see the sketch after this list)
  • get SSL certs for HTTPS requests
  • add dynamic content via Lambda and API Gateway
  • replicate WordPress-like functionality using S3
  • maybe an IoT project or two
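For that first item, one common approach is a second bucket named for the www host that does nothing but redirect to the apex domain – roughly the following, with a Route 53 alias record for www then pointed at the redirect bucket’s website endpoint:

# A www bucket whose only job is to redirect to the naked domain
aws s3 mb s3://www.dangilpin.click
aws s3api put-bucket-website --bucket www.dangilpin.click \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"dangilpin.click"}}'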

Glacier as a Data “Junk Drawer”

I have a laptop with a small SSD filling up. I already have network and local backups, and those drives are filling up, too. Faced with buying a new internal SSD and having quite a bit of credits on AWS to use, I decided to try moving some of this to S3 and Glacier. Glacier is really cheap – but only for long-term archives that will not be accessed much.

I used S3 bucket lifecycle policies to migrate new files to Glacier. One bucket has no expiration (keep forever), and two more buckets have expirations of one year and three years in their policies. So data moved to those buckets will be “self cleaning” if I don’t intervene before the expiration. I put files there that I want to get off my local HD, but am afraid to delete just yet.
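The lifecycle rules themselves are short JSON documents. A sketch of what the one-year bucket’s policy looks like (bucket name and day counts are placeholders for my actual settings):

aws s3api put-bucket-lifecycle-configuration --bucket archive-1yr --lifecycle-configuration '{
  "Rules": [{
    "ID": "to-glacier-then-expire",
    "Status": "Enabled",
    "Filter": {},
    "Transitions": [{ "Days": 1, "StorageClass": "GLACIER" }],
    "Expiration": { "Days": 365 }
  }]
}'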

I created a new user in IAM with privileges for S3 administration only. Most of what I moved to Glacier consists of media – personal photos and videos that I don’t need to have around, but can’t bring myself to delete either. I exported the photos from albums to folders with descriptive names and dates so I could retrieve them easily, and then zipped the folders individually. Using Chrome and the AWS console to “drag-and-drop” a smallish batch of zip files at a time (5GB or so) seemed to work best from my relatively slow home connection. I did have trouble with AWS logging me out of the console if I did not use the root account – so keeping the batches small minimized the cleanup from incomplete uploads.
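Creating that user from the CLI would look something like this (the user name is illustrative, and I am attaching the AWS-managed S3 full-access policy rather than a hand-written one):

# Create the S3-only user and attach the managed S3 policy
aws iam create-user --user-name s3-admin
aws iam attach-user-policy --user-name s3-admin \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# Console logins also need a password (login profile)
aws iam create-login-profile --user-name s3-admin --password '<choose-a-password>'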

It took a bit of time and effort, but I now have 100GB free on my internal SSD and have cleared a lot off of the network drives. I am going to stop there and see how the billing looks after a month or so. I anticipate it costing about 40 cents per month per 100GB plus any data transfer and request fees. That is pretty cheap for this use case. But if I should use closer to a TB or more, then some of the “unlimited” cloud backup services would probably be more cost effective, at the cost of some durability.
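(For the arithmetic: 100 GB at roughly $0.004 per GB-month works out to about $0.40 per month, before transfer and request fees.)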

Very nice to have my SSD back – and almost for “free”!

Faux WordPress site via S3 Hosting

This is a project idea that I first saw in a lab at qwiklabs.com in preparation for my SysOps Administrator – Associate Certification.

The idea is to use any live WordPress site to generate static HTML files to be served from S3 hosting instead of the typical hosting service. One might do this for either cost savings or security – or both!

I implemented this using the standard Amazon Linux AMI to launch an EC2 instance. The boot script installs and launches PHP, MySQL, and Apache. Then I manually set up MySQL and WordPress via the standard deployment process. After testing with a few posts, I installed a plugin called “Simply Static”. Running this creates a zip file containing all of the WordPress content as static HTML, which can then be uploaded to an S3 bucket with hosting turned on. Now stop the instance and stop paying for hosting it 24/7. This will have the same look and feel as the live version – minus the searching and posting functions that would require PHP and MySQL to function (and be hacked).
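The boot script was essentially the standard LAMP-on-Amazon-Linux user data. A sketch of it, using the package names from the Amazon Linux repos at the time rather than my exact script:

#!/bin/bash
# Install Apache, PHP, and MySQL, then start the web and database servers on boot
yum update -y
yum install -y httpd24 php70 php70-mysqlnd mysql56-server
service httpd start
chkconfig httpd on
service mysqld start
chkconfig mysqld on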

When an update is needed, simply start the WordPress instance just long enough to make the changes, regenerate the static files, and export the WP content!
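That update cycle can be scripted as well: start the instance, edit and re-export with Simply Static, sync the unzipped output to the bucket, and stop the instance again. A sketch with placeholder instance ID and paths:

# Spin the WordPress box up only for editing
aws ec2 start-instances --instance-ids i-0123456789abcdef0
# ...edit posts, run Simply Static, download and unzip the export locally...
aws s3 sync ./simply-static-export s3://dangilpin.click --delete
aws ec2 stop-instances --instance-ids i-0123456789abcdef0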

One issue is that WordPress stores the site’s IP address in MySQL – which causes problems if the IP changes, and it will change unless you use a fixed IP such as an Elastic IP. Unfortunately an Elastic IP sitting idle costs more than a running t2.micro instance does! I think a startup script could update the DB with the current IP. A minor issue is the cost of keeping the EBS store around. A potential fix would be to dump MySQL, export the WordPress data to S3, and load it back upon launching the instance.
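That startup script would amount to something like this on the instance – a sketch assuming the default wp_options table and placeholder database credentials:

#!/bin/bash
# Look up the instance's current public IP from the metadata service
IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
# Point WordPress's site and home URLs at the new address
mysql -u wpuser -p'wppassword' wordpress -e \
  "UPDATE wp_options SET option_value='http://${IP}' WHERE option_name IN ('siteurl','home');"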

That was an interesting project. And the result is this very site you are viewing right now!

CloudFront CDN and SSL Certs

Amazon provides free, managed certificates for HTTPS requests – even for static hosting on S3! Sounds great, so I started to implement that for my sites hosted on S3, only to find that it requires AWS CloudFront to sit in front of the hosting. That is a little more involved and costly than I wanted, but I continued anyway just for the experience.
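Requesting the certificate itself is a one-liner with ACM; one thing to know is that a certificate used by CloudFront has to be created in us-east-1. A sketch with my domains (DNS validation is one of the available options):

aws acm request-certificate --region us-east-1 \
  --domain-name dangilpin.click \
  --subject-alternative-names www.dangilpin.click \
  --validation-method DNS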

I deployed four CloudFront distributions – one for each domain – and reconfigured Route 53 to point to the CDNs instead of the S3-hosted sites.
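Re-pointing Route 53 means swapping each site’s alias record over to the distribution’s domain name. A sketch (my hosted zone ID and the distribution domain are placeholders; the alias target zone ID Z2FDTNDATAQYW2 is the fixed value CloudFront aliases use):

aws route53 change-resource-record-sets --hosted-zone-id Z1234567890ABC --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "dangilpin.click",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d1234abcd.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'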

A new problem cropped up: default files such as index.html would return 404 errors! After some searching I realized that the standard way of setting an S3 bucket as the CDN’s origin meant that all URLs would have to include the “index.html”. The solution was to keep S3 hosting turned on and make the S3 website endpoint – not the bucket itself – the origin for the CDNs. This was not obvious while configuring the CDNs because the origin fields in the console would not populate with the hosted sites’ website endpoints! I had to copy each endpoint from the S3 static hosting settings and paste it into the origin field!
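In other words, the origin domain should look like the first form below (the S3 website endpoint, with whatever region the bucket lives in), not the second (the bucket’s REST endpoint, which is what the console drop-down suggests and which causes the index.html problem):

dangilpin.click.s3-website-us-east-1.amazonaws.com
dangilpin.click.s3.amazonaws.com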

CloudFront has an option (the “Redirect HTTP to HTTPS” viewer protocol policy) to redirect HTTP requests to HTTPS so the certificate is always used. That is what my sites use now.

On a related note, you must also delete whatever is in the “Default Root Object” field. And finally, enter all domains – dangilpin.click AND www.dangilpin.click – into the “Alternate Domain Names (CNAMEs)” field.

Now I have SSL working on each site, with the added CDN performance boost. CloudFront (using the US, Canada, and Europe price class) costs only a few pennies a month due to the low traffic – I am not sure what that cost is for a commercial application, but it is probably pretty low if it is like most AWS services.

AWS IoT using Onion Omega

The Onion Omega is a single-board development kit – essentially the brains of a Linux-based WiFi router. I wanted to see if it could run the AWS CLI and work with AWS cloud services. Here is mine plugged into a dock that has a display expansion attached.

First up was to see what AWS IoT is all about and get the Omega to work with it. After going through Amazon’s interactive tutorial and using an article on the subject, I was able to get the basic functionality working.

Basically you register your IoT device as a “Thing”, then assign a certificate and a policy that authorize the device to do only what you permit. Amazon will give you an endpoint to send your messages to and poll for instructions from.
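For reference, the rough CLI equivalent of those console steps would look something like this (thing and policy names are illustrative; the setup script described below takes care of this differently):

# Register the device as a Thing
aws iot create-thing --thing-name Omega-3425
# Create an active certificate and key pair for it
aws iot create-keys-and-certificate --set-as-active \
  --certificate-pem-outfile cert.pem --public-key-outfile public.key --private-key-outfile private.key
# Attach a policy and the Thing to that certificate (use the ARN returned above)
aws iot attach-principal-policy --policy-name OmegaPolicy --principal <certificate-arn>
aws iot attach-thing-principal --thing-name Omega-3425 --principal <certificate-arn>
# Look up the account's IoT endpoint - the host the device will publish to
aws iot describe-endpoint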

I set it up in the AWS IoT console using the article as a guide, and ran the setup script provided (it installs the Mosquitto MQTT tools, sets some variables, and subscribes the device to the “thing”). At this point I could run the following tests from the Omega command line and immediately view the results in the IoT console. For example, these commands:

mosquitto_pub -t \$aws/things/Omega-3425/shadow/update -m '{"state": {"desired": {"temperature": 1 }}}' -q 1
mosquitto_pub -t \$aws/things/Omega-3425/shadow/update -m '{"state": {"delta": {"temperature": 1 }}}' -q 1

(The backslash keeps the shell from expanding “$aws” as a variable – the shadow topic literally begins with the reserved “$aws/things/…” prefix – and “Omega-3425” is the name given to the Thing.)

This produces new content in the Thing’s “Shadow”.
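Roughly, the shadow state ends up looking like this (a sketch based on the standard shadow document format; the service also records metadata, a version number, and a timestamp):

{
  "state": {
    "desired": {
      "temperature": 1
    }
  }
}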

And adding:

mosquitto_pub -t \$aws/things/Omega-3425/shadow/update -m '{"state": {"reported": {"varName": 1}}}' -q 1

Updates the shadow accordingly.
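After that, the shadow state would look roughly like this (again a sketch; with a “reported” section present that does not match “desired”, the service also computes a “delta” for the difference):

{
  "state": {
    "desired": {
      "temperature": 1
    },
    "reported": {
      "varName": 1
    }
  }
}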

The Shadow (optional) is state stored in AWS IoT that represents what the Thing’s state should be, regardless of its actual physical state. If the connection is lost, the device can later query the shadow and set its state to what it needs to be – to catch up. This also allows the rest of the application to “see” the device and act on triggers without needing to query it directly. As you can see, the state, like much of AWS, is based on a JSON document.
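For completeness, the device can also ask for the current shadow by publishing an empty message to the “get” topic and listening on the “accepted” topic – a sketch along the same lines as the commands above:

# Listen for the shadow document, then request it with an empty (null) message
mosquitto_sub -t \$aws/things/Omega-3425/shadow/get/accepted -q 1 &
mosquitto_pub -t \$aws/things/Omega-3425/shadow/get -n -q 1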