One of the functions I serve in my role as technology consultant is to monitor a vast amount of information and identify what is important to share with my peers and leaders. I try to boil the information down to a handful of topics that I think matter to my audience.
The challenge in curating and sharing this information is doing it with a minimal amount of work while still producing an index that makes the information easy to find. Recently I developed a workflow that utilizes Nuzzel, River4, Pocket, IFTTT, and Blogger. Here is an overview of the workflow, working from the web site back up to the sources of the information.
tech2watch
The first step is deciding where to publish the information on the Internet so that it can be accessed. I knew that I wanted to use If This Then That (IFTTT) to route the information to the web site, which constrained me to five blogging platforms: Blogger, Medium, Tumblr, Weebly, and WordPress.
I decided to create tags for the topics I am curating and organize them under the tech2watch category. I can then direct readers either to a specific topic or to the entire group of items under tech2watch.
I've been using IFTTT to consolidate my social network posts on a WordPress blog, so I thought I would add categories and tags to that blog. However, because I post lots of information to that blog beyond the topics I am curating, I decided not to use it. Instead, I created a new site using Blogger, and in the process registered the tech2watch.info domain name.
Publishing Information on tech2watch
The next step is to create an IFTTT recipe that monitors my account on Pocket and publishes items with specific tags to the tech2watch web site. I chose a template for the site that displays the technical topics I am curating in a list on the right side of the page.
IFTTT has access to both my Pocket and Blogger accounts. The recipe monitors Pocket for any articles that have the tech2watch tag, and when it finds one, it creates a new post on the Blogger site. The article title becomes the title of the post, and each post is created from a template that provides a link to the full article followed by a snippet of the article.
Selecting What To Publish
I funnel all web articles for further reading to Pocket for a couple of reasons. I really like how it strips ads and other graphics from most web pages so that only the content of the article displays, making it easier to read. Pocket is available as an app on Android and iOS, as well as on the web, so I can read articles on whatever device I have at hand. And by using IFTTT, I can quickly post new items to tech2watch from any device simply by tagging an article.
Finding Information To Share
The Internet provides a wealth of information that can be difficult to keep on top of. You could spend hours opening web site after web site looking for information, but nobody really does that any more; the work is best left to software. My information comes to me via Really Simple Syndication (RSS) and Twitter.
RSS was created before Twitter and provides a way to monitor web sites. You use an application called an RSS feed reader (or RSS aggregator) to scan a list of article titles and snippets, clicking the ones you want to read. Google Reader was a popular RSS reader that unfortunately was shut down several years ago; Feedly has since replaced it in popularity.
I use an application called River4 to monitor the web sites I subscribe to via RSS. River4 was written by Dave Winer, who created the RSS specification and continues to be a champion of RSS. The Pocket extension for Chrome provides a Save To Pocket option when I right-click a link, so as I scan the river of articles, I right-click the ones I want to read and select Save To Pocket.
Twitter has replaced RSS for many people, and while most consider Twitter a social network, I think it is really used more like an RSS reader. The difference is that with Twitter you follow people, be they individuals or representatives of company brands or web sites.
Nuzzel is an app designed specifically to feature links to web sites that people share on Twitter. It sorts the articles by how many of the people you follow shared them, putting the most-shared items at the top of the list. You can expand the view to also include articles shared by the people whom those you follow, follow.
The Workflow
As time permits during the day, I will scan through my RSS river and Nuzzel feed for articles that fall within the topics I am following. I use Chrome to access River4 on my desktop, smartphone and tablets, as well as for Nuzzel on my desktop, right-clicking to send the articles to Pocket. On my smartphone and tablet I use an app and the Android share function to send articles to Pocket.
When I have time to read, I load the Pocket app and, for appropriate articles, assign the tech2watch tag and the appropriate topic tag. IFTTT continually monitors my Pocket account, and when it finds an item with the tech2watch tag, it uses the information from the article to create a post on the tech2watch site. Other than slight edits to specify which tags are included in the topic list, I don't do any editing of what is posted to the tech2watch site; posting to it is completely automated by IFTTT.
The process I have created meets my need to curate content simply and quickly, and I think it does a good job showing the power of IFTTT in automating a time-consuming task by giving Pocket the ability to publish to a web site.
You Are Most Likely Misusing Docker. The article suggests Docker containers are large, and while that may be true for many containers, it is possible to make them smaller; at any rate, they are smaller than virtual servers. The problem is that many people use the large Ubuntu image as the base in their Dockerfile, when they could use a slimmed-down base image like Busybox.
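As a minimal sketch of that point (the application name and binary are hypothetical, not from the article), a Dockerfile can start from the small busybox base image instead of ubuntu:

```dockerfile
# Hypothetical Dockerfile: package a statically linked binary on top of
# the small busybox base image instead of the much larger ubuntu image.
FROM busybox:latest
COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]
```

The busybox base is a few megabytes, compared with the hundreds of megabytes a full Ubuntu base image pulls in, which is the size difference the article is complaining about.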
This blog, along with all the others on frankmcpherson.net, is served by fargoPublisher and hosted at Amazon S3. I originally installed and configured fargoPublisher to run on a Debian server hosted at CloudAtCost that had 1 CPU and 512 MB of RAM. While I don't think I've been having issues with the CPU or RAM, for some reason the file system has repeatedly been defaulting to read-only, and that has started to impair the performance of this site. Worse, I have two support tickets open at CloudAtCost that so far have not been responded to.
At the end of last year CloudAtCost was running a sale offering virtual servers at a 60% discount, so I bought a "spare" server with 2 CPUs and 1024 MB of RAM running Ubuntu 14.04. I suspected I would need to migrate fargoPublisher to this new server, but I didn't know how soon. I had planned to use it to play with Docker, but last night the Debian server crashed, and with support still unresponsive, I decided to move over to the new server.
The process is pretty straightforward: install nodejs, git, curl, and npm, then use npm to install aws-sdk, url, request, and forever:
```shell
apt-get install nodejs-legacy git curl npm
npm install aws-sdk url request
npm install forever -g
```
Clone a copy of fargoPublisher from git:
For future reference, to update fargoPublisher, execute the following from within the fargoPublisher directory:
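The update command itself is missing here; assuming a standard git workflow against the clone made above, it would be:

```shell
# Run from inside the fargoPublisher directory created by the clone;
# fetches and merges the latest changes from the default remote.
git pull
</antml>```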
Configure .profile with the environment variable settings for fargoPublisher. (Note: I could have created config.json, but for consistency I decided not to; all the environment variables that publisher.js needs are in .profile.)
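The actual settings are not shown in this post. A hedged sketch of what the .profile entries might look like follows: fpDataPath and fpHostingPath are the two variables mentioned later in this post, the AWS_* variables are the standard credentials the aws-sdk module reads, and all of the values here are placeholders, not my real configuration:

```shell
# Hypothetical example values -- substitute your own S3 bucket paths and keys.
export fpDataPath="myfargobucket/data/"
export fpHostingPath="myfargobucket/hosting/"
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
```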
Before I made any DNS changes, I tested the install pretty methodically using IP addresses. I manually started publisher.js to confirm the environment variables were being properly read, then confirmed communication over the Internet by checking /version and /status.
Next, using a test account I have with Dropbox, I logged in to fargo.io and configured the CMS setting to use the IP address of the new server. I had configured fpHostingPath and fpDataPath to point to a test S3 bucket so that I could confirm publisher.js was writing to S3. After starting fargo.io by logging in with my test Dropbox account, I created a new outline, named it, and manually confirmed publisher.js was writing to the correct directories in the S3 bucket. Finally, I tested access to the test blog I created via publisher.js.
After I was confident the server was working, I went to my domain name provider and changed the IP address for frankmcpherson.net to point to the new server. Once the DNS changes were properly made (watch for typos), I changed the CMS setting back to the correct URL for fargoPublisher, logged out of Fargo, closed the tab, logged back in, and confirmed that everything was working as expected.
To run publisher.js in the background, execute the following within the fargoPublisher directory:
To confirm publisher.js is running, execute forever list. To stop it, execute forever stopall.
Overall the migration went very smoothly. Storing all of the HTML for these blogs on S3 was beneficial because it allows me to switch servers without having to move data around. If for some reason I had lost the data on S3, I have another copy on Dropbox, and in the absolute worst case I could re-render all of the blog content from within Fargo.
Testing
I've been having problems with the Debian server I have been using at CloudAtCost to host this site, so I have migrated it to a new server running Ubuntu. I am now doing some testing.
Once more, with feeling.
Ok, good, that now works after logging off and logging back in. The remaining piece is to get this back to port 80.
Remembered that CloudAtCost only opens port 80 for root, testing now.
Final check, I think I've completed the migration.