An update has been made to River4.js that enables it to write its data to a local file system rather than to Amazon S3. You may find that option preferable because it eliminates the complexity and cost of using S3. The addition of local file system support makes River4.js function nearly identically to the original River tool in the OPML Editor.
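
To picture what that change amounts to, here is a rough sketch in Node (not River4's actual code; the file name, bucket name, and data are made up) of the two places the same river data can land:

    // Sketch only: the generated river data is either written to local disk or uploaded to S3.
    var fs = require("fs");
    var AWS = require("aws-sdk"); // assumes AWS credentials are available in the environment

    var riverData = "..."; // stand-in for the river.js content River4 generates

    // Option 1: local file system, a plain file write
    fs.writeFile("myRiver.js", riverData, function (err) {
        if (err) {
            console.log("Error writing local river file: " + err.message);
        }
    });

    // Option 2: Amazon S3, the same data goes into a bucket that S3 can also serve over the web
    var s3 = new AWS.S3();
    s3.putObject({Bucket: "my-river-bucket", Key: "myRiver.js", Body: riverData}, function (err) {
        if (err) {
            console.log("Error writing river file to S3: " + err.message);
        }
    });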

I've been using Dave Winer's River to manage the RSS subscriptions that I follow ever since Google Reader was killed. I chose Google Reader because it enabled me to read and manage my RSS subscriptions from any device connected to the Internet. Previously, I used apps that ran on my personal computer, which restricted me to reading my subscriptions on that computer and prevented me from doing so on my smartphones and tablets.

When Google announced it was turning Reader off, I looked around for a replacement. I initially thought I would use Feedly, but I also tested Dave's original River by setting up an instance of the OPML Editor for myself on an Amazon EC2 image, following the EC2 For Poets instructions. I came to prefer the simplicity of the river format and didn't mind the cost of having a Windows server available to me on the Internet.

The point is that I have grown used to being able to read my RSS subscriptions on any smartphone, tablet, or computer, to the point that it is now a basic requirement, which brings me to the local file system change. I certainly see the cost and simplicity benefits, but I want to retain the ability to read my feeds anywhere.

To do so with the local file system appears to require that I either run a web server on the computer providing the local file system, or make the file system available on the Internet so that RiverBrowser can read my feed files and render them. From my point of view, adding a web server like Apache to my setup adds complexity: now I have to worry about maintaining that web server. The web hosting S3 provides is very convenient.
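
For example, the web server could be as small as a few lines of Node serving the folder River4 writes to, so that RiverBrowser can fetch the river files over HTTP. This is only a sketch, with a placeholder folder path and port, and even this minimal version is one more process to run and keep alive:

    // Sketch of a tiny static file server for the folder River4 writes its river files to
    var http = require("http");
    var fs = require("fs");
    var path = require("path");

    var riverFolder = "/home/frank/river4data"; // placeholder path

    http.createServer(function (request, response) {
        var filePath = path.join(riverFolder, request.url); // no path sanitizing, sketch only
        fs.readFile(filePath, function (err, data) {
            if (err) {
                response.writeHead(404, {"Content-Type": "text/plain"});
                response.end("Not found");
            }
            else {
                response.writeHead(200);
                response.end(data);
            }
        });
    }).listen(8080);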

I am not entirely sure how to make the file system available on the Internet for RiverBrowser to access. One possible solution may be to sync a copy of the feed files RiverBrowser uses to a public folder on Dropbox, but then I would have the complexity of managing that synchronization.

It seems to me that local file system support is most easily used on a local computer, and I find that solution less desirable than the hosted storage approach. Fortunately, local file system support is an addition, so I can stick with S3 and Heroku. If I want to eliminate Heroku from the mix, I could use the EC2 AMI that Chris Dadswell is working on to host my own Node instances on Amazon, so that I don't incur the data transfer costs I have seen when using CloudAtCost to host River4.

09/25/14; 11:30:24 AM

At the beginning of the month I switched the hosting of my copy of River4.js from Heroku to CloudAtCost. The move was prompted by the fact that I needed to update my instance of River4 and couldn't find a way to do it given how I had originally set it up on Heroku.

It was pointed out to me that there would likely be a cost impact from hosting River4.js on CloudAtCost while using Amazon S3 for file storage. Because Heroku is hosted on Amazon, you don't get charged for data transfer; the data traverses Amazon's internal network. I was skeptical, but after running on CloudAtCost for a month and monitoring the S3 bill, I can confirm the higher cost.

After nearly a month, there has been a little over 58 GB of data transferred out of my S3 bucket for a cost of $6.99. Last month I only had 1.77 GB transferred out costing me a whopping $0.14.
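
Some quick arithmetic on those numbers:

    $6.99 / 58 GB   = about $0.12 per GB transferred out this month
    $0.14 / 1.77 GB = about $0.08 per GB last month
    58 GB / 1.77 GB = about 33 times as much billed transfer out of the bucket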

So, from a financial standpoint, it seems to make sense to host River4.js on Heroku rather than on CloudAtCost. I think I'll make the flip and see how I am charged by Amazon in October.

09/25/14; 11:18:00 AM

I've recreated an instance of River4 on Heroku, this time cloning the River4 repository rather than creating my own repository from a downloaded copy of the files. Now, after Dave makes updates, I should be able to do a "git pull origin" and then a "git push heroku master" to update the app instance on Heroku.
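
Assuming the heroku git remote is already set up, which heroku create does when the app is first made, and that my clone sits in a folder named river4, the whole update should look something like this:

    cd river4                # local clone of the River4 repository
    git pull origin          # bring in Dave's latest changes
    git push heroku master   # redeploy the updated code to Heroku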

One of the reasons I recreated my Heroku version of River4 is that some folks on the support email list have said that it costs more to run River4 on a server outside of Amazon EC2, which is where Heroku runs. They believe that you aren't charged for network traffic within Amazon's data center.

The information they provide is not consistent with my experience. In August I ran River4 exclusively from Heroku and my total bill was $10.94, with a little less than half of that for S3. I was charged $3.77 for 753,787 PUT, COPY, POST, or LIST requests, and $0.50 for 1,261,298 GET and all other requests. Clearly, S3 tracked my I/O traffic with Heroku and charged me for it, even though it was on Amazon's internal network.
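
Those line items match what appear to be S3's request prices at the time, $0.005 per 1,000 PUT/COPY/POST/LIST requests and $0.004 per 10,000 GET requests, and request charges are billed no matter where the requests come from:

    753,787 PUT/COPY/POST/LIST requests / 1,000 × $0.005 = about $3.77
    1,261,298 GET and other requests / 10,000 × $0.004 = about $0.50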

I've been running River4 for almost 66 hours, and so far my S3 bill is $0.69, covering 122,956 PUT, COPY, POST, or LIST requests and 169,752 GET requests. That seems to be in line with the charges I ran up on Heroku.

Right now I have my Heroku copy of River4 turned off, but I could flip the switch to it in the future should I need to do so.

09/05/14; 06:11:07 PM

I had to install additional node modules beyond the ones provided in Chris' instructions (the install command is after this list):

  • MD5

  • opmlparser

  • feedparser
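
Installing all three from the directory containing river4.js should be a single command, assuming the npm package names match the module names above:

    npm install MD5 opmlparser feedparser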

09/03/14; 09:02:04 PM

I've noticed that the items being displayed in my instance of River4 were out of date; actually, they were pretty old. I checked the River4 repository on GitHub and saw that v0.94 includes a fix for this problem, and that I had not been updating my copy of River4 on Heroku.

I couldn't figure out how to update my River4 app on Heroku, so I decided to consolidate my node.js apps onto my Debian server running at CloudAtCost. The server is already running node and the Fargo Publisher.

Upon starting river4.js I saw errors about modules that couldn't be found, starting with MD5. I didn't understand the error message; I was following Chris Dadswell's instructions but couldn't put it together. As part of the process I created new AWS keys, which didn't solve the problem.

Finally, I read the error message more literally, googled "how to install md5 node module," and the first link was to this page with instructions for installing the MD5 module. Subsequent attempts to run river4.js displayed errors for the missing modules opmlparser and feedparser, which I then installed.

Upon getting all of the modules installed, I was able to get river4.js to start up successfully. I then decided to clean out all of the previous River4 data in my S3 bucket and watched it repopulate as expected. My River4 page is slowly filling up with new stories as they are found.

Having all of my node.js apps on one server will make it easier to keep them up to date in the future.

09/03/14; 02:34:59 PM

The breadcrumb navigation for this site has not been working. I've found a directive to try that might fix it. This is only a test.

And that did not work. :disappointed:

09/02/14; 03:09:41 PM

Today I discovered that I could not access any of the web sites I have under frankmcpherson.net. First, I checked whether the content was still accessible on Amazon, and it was, so that pointed me to an issue with fargoPublisher. After some troubleshooting I discovered that my instance of fargoPublisher was not listening for requests on port 80, and I could not get it to start listening on port 80.
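
For anyone chasing a similar problem, a couple of checks narrow it down quickly. Keep in mind that on Linux a process running as a normal user cannot bind to a port below 1024, which is one common reason a Node app ends up not listening on port 80:

    sudo netstat -tlnp | grep :80   # is anything listening on port 80, and which process?
    sudo lsof -i :80                # the same question, answered with lsof
    ps aux | grep node              # is the fargoPublisher node process running at all?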

It took some time, but I think I determined the cause of the problem and how to resolve it. This post is a test to confirm that fargoPublisher is now working and publishing content again.

09/01/14; 01:30:53 PM

By Frank McPherson, Monday, September 1, 2014 at 1:30 PM.