Here is an overview of how I have set up and use Fargo:

  • Fargo Publisher is hosted on a Linux server, with node.js running publisher.js.

  • I have an AWS S3 bucket that hosts the content rendered by publisher.js, with the appropriate environment variables set as described in the instructions (a sketch of what that looks like follows this list).

  • My ID for fargo.io is configured to use my instance of Fargo Publisher at pub.frankmcpherson.org.
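
For reference, here is a rough sketch of what starting publisher.js with that environment might look like. Only the standard AWS credential variables are ones I know the AWS SDK for node.js reads; the bucket and domain variable names are placeholders standing in for whatever the Fargo Publisher instructions actually call them.

    # Sketch only: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the standard
    # AWS SDK credential variables; S3_BUCKET and PUBLISHER_DOMAIN are placeholder
    # names, not necessarily what Fargo Publisher expects.
    export AWS_ACCESS_KEY_ID="..."
    export AWS_SECRET_ACCESS_KEY="..."
    export S3_BUCKET="blog.frankmcpherson.org"        # bucket that receives rendered content
    export PUBLISHER_DOMAIN="pub.frankmcpherson.org"  # host name my fargo.io ID points at

    node publisher.js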

I access fargo.io, log in with my Dropbox credentials, and open my Fargo outlines stored in Dropbox. When new content needs to be rendered, Fargo uses publisher.js running on pub.frankmcpherson.org, which renders the content and puts it in my S3 bucket.

The A record for frankmcpherson.net points to the Linux server hosting Fargo Publisher, which is actually the same server that pub.frankmcpherson.org points to, and Fargo Publisher handles named outlines by prepending them as subdomains of frankmcpherson.net (for example, webnotes.frankmcpherson.net).
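
To make the DNS piece concrete, this is roughly what I believe the records look like. The IP address is a placeholder, and the wildcard record is my assumption about how the named-outline subdomains get resolved.

    ; sketch only: 203.0.113.10 is a placeholder for the Linux server's address
    frankmcpherson.net.      A      203.0.113.10
    *.frankmcpherson.net.    A      203.0.113.10   ; named outlines, e.g. webnotes.frankmcpherson.net
    pub.frankmcpherson.org.  A      203.0.113.10   ; Fargo Publisher itself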

06/23/15; 12:25:05 PM

Think of this as "thinking out loud." I am in the process of thinking this through, and my logic might be flawed or there might be a better way; if you are willing to share, please post in the comments below. Here is an overview of my configuration of Fargo Publisher.

I am thinking about how I can simplify the way my Fargo blogs are hosted on the web. The problem with my current setup is that accessing my blogs, such as this one, requires two servers. One server, which hosts my instance of Fargo Publisher, serves as a front end to my blog content. The second "server" is the Amazon S3 bucket in which that content is stored.

As I understand it, Fargo Publisher is also the web server for my Fargo blogs, so if Fargo Publisher is down or the server hosting it is not accessible, my blogs are down. Every time there is an outage, I need to log on to the server hosting Fargo Publisher and restart publisher.js.

The problem is, if I am not available, how can Fargo Publisher be restarted? I could script the startup process so that publisher.js is loaded automatically whenever the server is rebooted (a sketch of one way to do that is below), but that still requires someone to reboot the server. Even if I move the content to the same virtual server that is running Fargo Publisher, I still have the challenge of having to start publisher.js or reboot the server.
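
For the restart-after-reboot piece, a process manager would probably do; as a sketch, something like pm2 (a common choice for node.js apps) could keep publisher.js running and register it to start at boot. The install path here is a placeholder.

    # Sketch: keep publisher.js running with pm2 and relaunch it after a reboot.
    # /home/frank/fargoPublisher is a placeholder path.
    npm install -g pm2
    cd /home/frank/fargoPublisher
    pm2 start publisher.js
    pm2 save       # remember the current process list
    pm2 startup    # prints the command that registers pm2 as a boot-time service

That does not solve the bigger problem, though: a process manager can restart publisher.js, but it cannot bring back a virtual server that is down.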

  • It occurs to me that part of the complexity I have created is due to hosting node.js and publisher.js on my own virtual server rather than on Heroku. I have the impression, perhaps wrong, that Heroku handles server administration much like Amazon does. The problem is that I would have to pay more for Heroku to host publisher.js than the one-time price I paid for my virtual server.

I like the idea of hosting web sites on Amazon S3 because it practically removes the need for server administration. I don't know how Amazon provides the web server, but I do know that I can simply configure an S3 bucket to host a static web site and then access the site from a browser. It seems to me that a web site hosted from S3 will stay up for as long as someone pays the bill for the S3 bucket; Amazon handles all the server administration.
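
For example, turning on static website hosting for a bucket can be done in the AWS console, or with the AWS CLI along these lines; the bucket name here is a placeholder, and the bucket would also need a policy that allows public reads.

    # Sketch: enable static website hosting on a bucket (placeholder name).
    aws s3 website s3://webnotes.frankmcpherson.net/ \
        --index-document index.html --error-document error.html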

I have a CNAME for one of my domains pointing to the S3 bucket that hosts my Fargo content, and I can use that CNAME to access the content. Keep in mind this is a different domain from the one I use to access my Fargo sites today, which is frankmcpherson.net.
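
As I understand it, pointing a host name at an S3 bucket this way looks roughly like the record below; the bucket has to be named after the host, and the region shown is an assumption.

    ; sketch only: CNAME from my domain to the bucket's S3 website endpoint
    blog.frankmcpherson.org.  CNAME  blog.frankmcpherson.org.s3-website-us-east-1.amazonaws.com.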

For example, the public URL for my main Fargo blog is webnotes.frankmcpherson.net, and I can access the content of this blog directly from the S3 bucket with the URL blog.frankmcpherson.org/users/webnotes/. The problem with accessing the S3 bucket directly is that the links to the pages rendered for a day are expected to be off the root of the domain, for example http://webnotes.frankmcpherson.net/2015/06/23/. When I click such a link on the page loaded from blog.frankmcpherson.org/users/webnotes, the request goes to blog.frankmcpherson.org/2015/06/23, which does not exist; the page is actually at blog.frankmcpherson.org/users/webnotes/2015/06/23.

The issue I am bumping up against is the folder structure that Fargo Publisher creates on AWS. Fargo Publisher knows where everything is supposed to be located and does the proper mapping. It appears to me that if the webnotes folder and all of its subfolders and content were copied from the blog.frankmcpherson.org S3 bucket to the root of another bucket, and that bucket were accessible via a domain name (or CNAME), I would be able to access the content as I desire.

I am going to test this theory by creating another bucket, copying over the content, and pointing a CNAME at it to see what happens. If I can make this work manually, I then need to find a way to synchronize the content in a folder of one S3 bucket to the root of another S3 bucket, so that any additions and changes I make to my blogs are automatically sent over to the hosting site.
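
If the manual copy works, the synchronization might be as simple as running aws s3 sync on a schedule (cron, for example). This is a sketch with placeholder bucket names, not something I have tested yet.

    # Sketch: mirror the webnotes folder of the Fargo Publisher bucket to the root
    # of a second bucket that is configured for static website hosting.
    # --delete removes files at the destination that no longer exist in the source.
    aws s3 sync s3://blog.frankmcpherson.org/users/webnotes/ \
        s3://webnotes.frankmcpherson.net/ --delete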

06/23/15; 11:27:58 AM

By Frank McPherson, Tuesday, June 23, 2015 at 11:27 AM.