It's great to build something. To create something from little or next to nothing. Something that is useful. Sometimes a small project or challenge can be the most fun. I recently tackled a small project in less than a day and it was really fun (in a nerdy kind of way).

I've had to convert some complex presentation files to PDF on a weekly basis for a friend. The process was automated with an AppleScript on one of my computers, and it was annoying to either not have that computer with me or have to dig it out of my bag just to do the conversion.

I wanted a hosted solution to break the tie to any specific computer, without having to load software and scripts onto every computer I own. The setup was just bulky enough that I couldn't, or wouldn't want to, run it on any server I already have online. Does something like this require its own server? Yes and no. I figured out a way to have a hosted solution that costs less than 5 cents a month.

I started with a 512MB droplet at DigitalOcean running Ubuntu 15.04. I installed unoconv and tested a conversion. It was close, but not exact. After loading the fonts from my system into /usr/share/fonts/, the conversion came out nearly identical.
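
For reference, the setup amounted to something like this (the font paths are just examples; use whichever fonts your presentations need):
# On the droplet: install unoconv, which pulls in LibreOffice to do the work
apt-get update
apt-get install -y unoconv
# From the local machine: copy over the fonts the presentations use
scp ~/Library/Fonts/*.ttf root@droplet:/usr/share/fonts/
# Back on the droplet: rebuild the font cache so LibreOffice sees them
fc-cache -f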

I set up SSH keys and copied the public key to the web server where the PDF files needed to go. Then I created a script that would take each file in a specific folder, convert it to PDF, and upload it to the server.
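
Generating the key pair on the droplet is the usual routine. One wrinkle: ssh-copy-id needs shell access on the remote end, which this hosting account lacks (more on that below), so the public key has to go into the account's authorized_keys another way, such as through the host's control panel:
# On the droplet: generate a key pair, accepting the defaults
ssh-keygen -t rsa -b 4096
# Print the public key so it can be pasted into ~/.ssh/authorized_keys
# on the web server
cat ~/.ssh/id_rsa.pub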

Script that handles conversion and uploading:
#!/bin/bash
# Converts PowerPoint menus to PDF and uploads them to the website
FILES=/root/conversion/*

# Start the unoconv listener in the background and give it a moment
# to come up before the first conversion
unoconv --listener &
sleep 5

# Convert each file to PDF and remove the original
for f in $FILES
do
    echo "Processing $f...."
    unoconv -f pdf "$f"
    rm "$f"
done

# Upload each PDF to the web server (the glob re-expands here,
# so it now matches the generated PDFs)
for f in $FILES
do
    echo "Uploading $f...."
    sftp user@server << EOF
cd /home/user/public_html/folder
put $f
quit
EOF
done

I had to use SFTP because the account had no shell access, so SCP would fail.
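
An equivalent approach is sftp's batch mode, which reads the same commands from a file and aborts if any of them fail:
# upload.batch -- run with: sftp -b upload.batch user@server
cd /home/user/public_html/folder
put file.pdf
quit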

The final step was to create a script that makes an API call to CloudFlare, where the DNS for the domain name is hosted. The script asks the DigitalOcean metadata API for the droplet's current IP, then tells CloudFlare to update a specific record in a specific zone with that IP. I also created a systemd service that runs the script on boot.

Script that updates DNS record:
#!/bin/bash
# Ask the DigitalOcean metadata service for the droplet's public IP
PUBLIC_IPV4=$(curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address)

echo "$PUBLIC_IPV4"

# Update the A record at CloudFlare with the current IP
curl -X PUT "https://api.cloudflare.com/client/v4/zones/{CloudFlare Zone ID}/dns_records/{CloudFlare Record ID}" \
-H "X-Auth-Email: {CloudFlare Account Name}" \
-H "X-Auth-Key: {CloudFlare API Key}" \
-H "Content-Type: application/json" \
--data '{"id":"{CloudFlare Record ID}","type":"A","name":"{FQDN}","content":"'"${PUBLIC_IPV4}"'","proxiable":true,"proxied":false,"ttl":120,"locked":false,"zone_id":"{CloudFlare Zone ID}","zone_name":"{domain name}","created_on":"2015-07-30T19:35:16.334413Z","data":{}}'
Here's how to find the CloudFlare zone ID:
curl -X GET "https://api.cloudflare.com/client/v4/zones?name={domain.com}&status=active&page=1&per_page=20&order=status&direction=desc&match=all" \
-H "X-Auth-Email: {CloudFlare Account Name}" \
-H "X-Auth-Key: {CloudFlare API Key}" \
-H "Content-Type: application/json"
Here's how to find the CloudFlare record ID:
curl -X GET "https://api.cloudflare.com/client/v4/zones/{CloudFlare Zone ID}/dns_records?type=A&name={FQDN}&content={Current IP Address}&page=1&per_page=20&order=type&direction=desc&match=all" \
-H "X-Auth-Email: {CloudFlare Account Name}" \
-H "X-Auth-Key: {CloudFlare API Key}" \
-H "Content-Type: application/json"
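Both calls return a JSON envelope with the IDs buried in a result array. If the raw output is hard to read, piping it through python -m json.tool (or jq, if you have it installed) pretty-prints it:
curl -s -X GET "https://api.cloudflare.com/client/v4/zones?name={domain.com}&status=active" \
-H "X-Auth-Email: {CloudFlare Account Name}" \
-H "X-Auth-Key: {CloudFlare API Key}" \
-H "Content-Type: application/json" | python -m json.tool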
systemd service:
#/etc/systemd/system/updateip.service
[Unit]
Description=Update IP at CloudFlare on boot
# Wait for the network to be up before running the script
Wants=network-online.target
After=network-online.target

[Service]
# oneshot: the script runs once at boot and exits
Type=oneshot
ExecStart=/root/updateIP.sh

[Install]
WantedBy=multi-user.target

Run systemctl enable updateip.service to enable the service.
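
Before relying on a reboot, it's worth starting the service by hand once and reading its log:
# Run the script immediately without rebooting
systemctl start updateip.service
# Confirm it exited cleanly and see the echoed IP
systemctl status updateip.service
journalctl -u updateip.service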

Now I could create a snapshot image of that droplet and destroy the droplet. Each time I need to process the conversions I spin up a new droplet based on the image, with my SSH keys already in place. I can log in within a minute, upload the conversion files, convert, and shut down and destroy the droplet a minute or two later. At DigitalOcean's hourly billing (about $0.007 an hour for the smallest droplet), one billed hour a week works out to roughly 3 cents a month. I can reference the droplet at the FQDN, and the whole process takes no longer than the local automated setup I used before. The FQDN update isn't strictly necessary, but it saves time by letting me use my existing shortcuts to log into the server and upload files.
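
Spinning up the droplet can itself be a single API call. Here's a sketch against the DigitalOcean v2 API; the token, region, and snapshot and key IDs are all placeholders to fill in from your own account:
# Create a droplet from the saved snapshot ({values} are placeholders)
curl -X POST "https://api.digitalocean.com/v2/droplets" \
-H "Authorization: Bearer {DigitalOcean API Token}" \
-H "Content-Type: application/json" \
--data '{"name":"converter","region":"nyc3","size":"512mb","image":{Snapshot ID},"ssh_keys":[{SSH Key ID}]}'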

I could technically automate this further, but for the time being I prefer to double check the output for quality control.