Bridgy by Ryan Barrett is another fantastic-looking and new-to-me tool for spinning up your own piece of the indie web. I’m really excited about the possibilities presented by it and by Aaron Parecki’s webmention service below, both of which I found via Jeremy Keith. His Indie web building blocks post is one more great primer on how all this works.

I have so much to do. Aaron Parecki’s webmention service is a super idea: basically webmentions as a service, with code offered for running your own server. It looks a lot more sophisticated than my implementation, and, hey, idea: this kind of mechanism might be a great way for sites (like mine!) over at the Tilde Clubs to implement turnkey webmentions in an otherwise limited server environment.

Maybe that’s something to tinker with this weekend!

Paul Ford had this idea: he opened up a pop-up unix server called Tilde Club and invited friends and strangers to come put stuff on it. It’s pretty cool. I’m ~schussat over there.

My flickr tag neighborhood

I’m a long-time Lightroom user. Since its beta release it has been the place where my photos go. I have a slew of export actions and have hacked together methods of varying sophistication (such as the export script that produces a tiny gallery) to accomplish what I want to do. I dug into the data that Lightroom gives me about my photos, too: See, the Lightroom database is SQLite, and one can pull all kinds of things from it, such as this network map of my keyword relationships, or my then-annual exploration into My Year In Metadata.
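(For anyone curious about the keyword-map trick: the gist is a join across Lightroom’s keyword tables, followed by counting co-occurring pairs to get the edge list for a network graph. Here’s a sketch against a tiny in-memory stand-in for a catalog. The table and column names are what I recall from poking at a .lrcat file, so verify them against your own catalog before trusting this.)

```python
import sqlite3
from collections import Counter
from itertools import combinations

# A tiny in-memory stand-in for a Lightroom catalog. AgLibraryKeyword and
# AgLibraryKeywordImage are the (assumed) real table names; the rest is fake data.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE AgLibraryKeyword (id_local INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE AgLibraryKeywordImage (image INTEGER, tag INTEGER);
""")
db.executemany("INSERT INTO AgLibraryKeyword VALUES (?, ?)",
               [(1, "hiking"), (2, "arizona"), (3, "coffee")])
db.executemany("INSERT INTO AgLibraryKeywordImage VALUES (?, ?)",
               [(100, 1), (100, 2), (101, 3), (102, 1), (102, 2)])

# Pull (photo, keyword) rows, group keywords by photo, then count pairs.
rows = db.execute("""
    SELECT ki.image, k.name
    FROM AgLibraryKeywordImage ki
    JOIN AgLibraryKeyword k ON k.id_local = ki.tag
    ORDER BY ki.image, k.name
""").fetchall()

photos = {}
for image, name in rows:
    photos.setdefault(image, []).append(name)

edges = Counter()
for names in photos.values():
    for pair in combinations(sorted(names), 2):
        edges[pair] += 1

print(edges.most_common())  # [(('arizona', 'hiking'), 2)]
```

Feed those weighted pairs to any graph tool and you have a keyword neighborhood map.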

2009 photo data!

Way back when I was running an Android phone, I had scripts and smart galleries to help sync photos from that phone into my Lightroom library and later populate galleries to sync to my iPad. It was pretty high-tech, you guys.

This is all a way of noting that I’m very much at home working on my photos in Lightroom, so that even when I got myself my first iPhone a couple of years ago and could get photos from the device onto my MacBook Pro, I didn’t have much use for iPhoto. Photostream wasn’t much of a solution for me, because, until I recently got a shiny new Mac, I had a version of iPhoto that predated it; my cloud photos couldn’t magically sync to my laptop unless I invested in a newer iPhoto, and I didn’t really want to do that. So I put up with periodically plugging the phone into the Mac (horrors) or, later, using Dropbox’s photo upload capability, to get pictures out of my phone and into Lightroom.

Point is, there was still friction. But I think now I have a solution, thanks to the good photostream syncing that I now have on the new MacBook Pro with a current version of iPhoto, plus this great and now indispensable tool: PhotoStream2Folder. PhotoStream2Folder basically does what it says on the tin, and does it well: Identifies photos in your photostream (optionally within a specified range) and moves them into a usable location on your file system1.
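(The mechanical part of what PhotoStream2Folder does is easy to sketch: walk the photostream’s cache folders and copy anything new into one destination. Something like this, with throwaway folders standing in for the real locations; this is my illustration, not the tool’s actual code:)

```python
import shutil
import tempfile
from pathlib import Path

def sync_new_photos(source: Path, dest: Path, seen: set) -> list:
    """Copy image files found anywhere under `source` into `dest`,
    skipping names already copied on a previous pass."""
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for photo in sorted(source.rglob("*.jpg")):
        if photo.name not in seen:
            shutil.copy2(photo, dest / photo.name)
            seen.add(photo.name)
            copied.append(photo.name)
    return copied

# Demo with throwaway folders standing in for the photostream cache
# and the Lightroom watched folder.
tmp = Path(tempfile.mkdtemp())
stream, watched = tmp / "stream" / "cache", tmp / "watched"
stream.mkdir(parents=True)
(stream / "IMG_0001.jpg").write_bytes(b"fake photo")
seen = set()
first = sync_new_photos(tmp / "stream", watched, seen)   # ["IMG_0001.jpg"]
second = sync_new_photos(tmp / "stream", watched, seen)  # [] -- already copied
```

Run that on a schedule against a folder Lightroom watches and you have the whole pipeline.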

The Lightroom trick? Lightroom can monitor a directory and automatically import anything that lands in it. So with PhotoStream2Folder I just set up its output folder as the directory that I already have monitored from Lightroom. The next part is pretty close to magic: Within moments of landing in my photostream, new images are automatically in Lightroom. The mental overhead is just gone. I tried this in my kitchen: Took a photo of a cup of coffee (because I do), then wandered over to the MacBook Pro on the counter; the photo was already in my Lightroom library. I went to the Sunday farmers market, shot a bunch of photos with the very impressive iPhone 6 camera, and when I opened up the computer upon arriving home, all the photos were in my library, ready to edit and share via my extensive and baroque variety of Lightroom export/publish presets, with no iPhoto interaction, copying, or re-importing required.

It’s brilliant.

One more thing: If iPhoto is importing my photostream images, too, won’t I end up with a bunch of duplicates? Very good question. Fire up iPhoto preferences > iCloud, and turn off “automatic import.” Then iPhoto will display photos from your photostream, but not import them into your catalog, and they’ll “age out” over time. Meanwhile, all those images will be seamlessly added to Lightroom, where you really want them.

The author of PhotoStream2Folder, Laurent Crivello, asks for a small donation via PayPal if you find the tool useful. I think it’s awesome, and want to stress that it can be used for anything, not just Lightroom; if you simply want to get photos out of your photostream, without relying on iPhoto as your photo management application, this is your go-to. With one quick solution it has taken all of the mental overhead and friction out of managing the photos I shoot on my phone.2

Apr 11, 2015 – Yosemite + Photos update: I am happy to add that things continue to work with OS X 10.10.3 and the Photos update. I have yet to really try out Photos, but it apparently doesn’t come with any under-the-hood changes that interrupt the way PhotoStream2Folder picks up images from your photo stream. Sweet!

  1. Photostream images can be found in a series of numbered folders deep within your Application Support folder; Adam Portilla has a very good writeup of using an Automator action — knowing the file location one could script up just about anything, I suppose — to copy files from this location. I have found I prefer the configurability and just-works nature of PhotoStream2Folder, myself. [return]
  2. I previously had a small invocation here asking that this solution would continue to work with OS X Yosemite; happily, it still works great! [return]

My friend Joel is working on a webmentions method of his own: Email!

I’m trying out alanschussman

The usual suspects have their extensive iOS and iPhone 6 reviews up and feeding page views to the masses. I have particularly appreciated a few:

  • Ars Technica

This week’s Humble Bundle is the first in quite a while where I haven’t already owned most of the included games. Of the set, I only had Papers, Please, so I went for this one. And all weekend long I’ve been playing SteamWorld Dig. It’s fun, has great two-dimensional art, is challenging but never obtuse, with a nice learning curve.

Gamasutra has a cool writeup by the developers that describes the design decisions they made in the dig and jump mechanics. It’s a good read, and a great game.

There’s some research somewhere — or perhaps merely a critical contention — that focusing on taking pictures instead of appreciating the moment inhibits our formation of memory. That is, the argument goes, all our social media documenting makes those moments increasingly fleeting and uncaptured. Well, I spent a good chunk of yesterday slowly making my way through 2007 in photos,1 and was struck again and again by those memories. I took more than 4,000 photos in 2007; when I scroll through those images I am astounded by what I remember.

I shared many of those photos at the time. I was trying out my first Project 365 on flickr, and I posted at least one and sometimes several pictures per day. It was a pretty big year for me: I lived on my own in Seattle for a while during a predoctoral internship; finished my dissertation and graduated; interviewed for some jobs; made a significant career change; bought a house; and captured hundreds more daily snippets of life because I was basically taking pictures all the time. Walking through not only the pictures that I shared, but the photos that I didn’t share, brings back this flood of recollections.

Don’t Forget to Remember This (At John Carey’s blog, which I discovered via Shawn Blanc) is a wonderful essay about why we take pictures and about the pressures that shape what we shoot, and for whom:

The challenges present in photography today are not in the devices we use to capture, it’s not in our approach, skill level, or what we think we need to create good photos; the problem today is in social pressure. Photography has quickly evolved in its short lifespan from revolutionary, to useful, to ubiquitous and full of expectation.

John Carey works through the conflict in contemporary photography between one’s own perspective and the aesthetic driven by likes, shares and faves. So much of what he says resonates deeply, all the more so as I think about — and look at, again — the photos I never shared: My wife, my son, moments that might be snapshots or might be carefully composed but which were shut out of sharing because perhaps they weren’t fancy enough or evocative enough, or maybe just because I had already posted a couple that day.

Some of the pictures of my wife and son (born much later than the year of photos I have been poring over this weekend) are images I would love to share; they’re so beautiful, and pictures of people have so much vibrance and life to them, which I am always so happy to preserve. But they’re also private. The stars and comments of “great capture” might superficially validate me or make me feel like a portrait photographer with a fine eye. But a flickr friend’s heart would not feel what mine does when I find, again, that lucky photo of my wife suddenly laughing, so wonderfully bright and alive.

My current camera of choice is this wonderful Fuji X100S, but previously I shot with a Pentax K100D and a growing collection of prime lenses. It’s easy to be captivated by new, fun, fine gear: gear acquisition syndrome is driven just as heavily by the peer pressure to make photos like others’ photos and in turn the notion that doing so requires having their gear. But as I look back at these older years’ collections of photos I am also struck that some of those photos are really good. Not simply because I like the subject or I found a good moment, but technically good: They’re sharp, colorful, detailed. That’s a good little camera that’s now nearby on the shelf with its FA35mm lens mounted and ready to go; that combination perfectly fit my own photographic vision for years, and I loved going out and using it.2

John Carey again:

My compositions and developing have similar fingerprints in that they tell me a lot about how I felt when I made the photographs. Every click of the shutter for me is a moment worth remembering and it’s the memories that make photography so gratifying for me. I find so much to be thankful for when I look back through the images I have captured through the years.

Back to my own memories: Exploring the photos I made that year, I can’t say that I recall every single moment. Sometimes I was detailed enough to put in a pretty good caption. But in the context of the surrounding images, I get so much back: That was a photowalk around Bellevue; this was a hike on the Arizona trail (and that summer we hiked almost every day!); here’s the celebratory drink the night I decided to go for it; the job interview trip and Half Moon Bay with my grandfather; sitting in the backyard with the dogs and making coffee. Normal days and extraordinary days all lined up next to one another.

And on and on, now with my iPhone and Fuji (regarding which I must confess that I am beginning to feel some desire for the flexibility of point of view offered by an interchangeable lens system; that’s another desire re-kindled by my back catalog and many favorite 50mm images from the old Pentax); and even more with a boy now in preschool and a city and neighborhood that I still frequently walk, camera on my shoulder.

  1. Thanks to putting my entire photo catalog back online with my Synology box. [return]
  2. For what it’s worth, this is empirically borne out by my Lightroom photo stats from that era. For several years I ran some R statistics against my Lightroom library to produce all kinds of summary information about my metadata — like a perfect storm of my interests, photography plus geeky tinkering with code and visualizations! [return]

Amethyst arranges application windows on screen (or on a specific space) into a tiled arrangement based on one of several models. (I am using the “tall” setting.) It takes a bit of getting used to, but I like it now that I have sort of got the hang of it. The key thing to grok is that it arranges all visible windows, so if you have a ton of visible windows, hide everything you don’t need at the moment. Un-hiding a window will cram it into the existing arrangement — by shortening one window to make room for another, for example. So you can pop up a terminal window and have it fit into your view while you use it, and when you hide it, the window(s) that made room for it will expand back to their previous size. It’s controllable by keyboard, so you can float, resize, and cycle through windows without touching the trackpad. Quite cool.

With a MacBook Pro upgrade, mentions hacking, and sync updates, it’s been a sort of Infrastructure Summer around here.1 To cap it off, after getting the new MBP up and running I decided to get serious about my home storage and backup.

I have frequently heard good things about Synology, so I started there. One of my goals was to centralize storage from a rolling series of external USB drives that I have accumulated over time. These drives have occasionally been connected to an Airport Extreme, hosted via an old iMac, or just plugged in from time to time to offload photos, music or video into “offline” storage, where it’s no longer taking up space on my laptop.

While I sort of drool over the fancy multi-bay devices like the DS1513+, my needs just aren’t that heavy. Shawn Blanc’s Brief Review of the Synology DS213j made that particular device look just about right. His writeup is the one I pinned for later reference and leaned heavily on for first principles when getting up and running with my very own DS213j. I also learned a lot from Gabe Weatherhead’s post about moving from a Drobo to a Synology. (I don’t know what I would do with Gabe’s 24-port switch save for gaze at the magnificent 18 unused ports, but I picked up an 8-port switch with my DS and dramatically improved the wiring in my cabinet as well as getting a spare hard-line to my laptop for when I’m at the desk.)


Like Shawn, I plugged two 3TB WD Red drives into the box. On setup, the Synology gives the option of automatically configuring the second drive as a mirror of the first; great, one less thing for me to figure out. So that leaves me with 3TB of on-board space, plus a USB drive plugged in as an additional redundant backup.

Obligatory Old-Days Aside: My first hard drive was a 20 megabyte disk installed in a Compaq DeskPro 386s. I later convinced my dad that we needed to replace it with a 40 meg drive so that I would have room for Wing Commander and my WWIV BBS. Sometimes the very idea of gigabytes of storage, to say nothing of three terabytes of disk space, still sort of blows my mind.

So what am I doing with all this?

  • Time Machine backup: I have a 500GB SSD on the new MacBook, so I set up a 1TB share dedicated to Time Machine. For the first backup I used the cabled connection to my switch, and my experience so far is that routine Time Machine snapshots to the Synology work fine via wifi. When I know I need to move around a lot of bits, I hook up the cable. Synology has good instructions for time machine setup.
  • All of my Lightroom photos and previous iPhoto libraries are now online. More about this below.
  • iTunes media lives here, too. My iTunes library file itself is still on my laptop; some people have the need to share everything across multiple computers, but I don’t need to solve this issue at home.
    • This does not make it accessible to the DS-Audio application, unlike the video application which can pull from multiple file locations.
    • I have not yet explored remotely accessing any of the content on the synology.
    • Update: Nope. iTunes happens to not like having its media location on a disk that’s not always available. As a result, it periodically resets itself to use a local, default media location, no matter how much I tried to ensure that the NAS share was mounted before running iTunes. This led quickly to situations where the library was out of sync with the media store, confused metadata, missing audio files, and piles of duplicates resulting from iTunes trying to re-re-reconsolidate its media. So unless/until iTunes in the future has better support for a network media location, I’m using my local disk, once again, for this. (A friend suggests using a symbolic link to point the local location to a NAS share; this seems worth a try, but I haven’t done it, yet.)
  • My nine-year old Brother network printer is now AirPrint enabled, simply by plugging it in to one of the USB ports on the back of the device.
  • Consolidation of backups from very old machines, like the Windows laptop that preceded my 1995 iMac.
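(The symbolic-link suggestion in the iTunes bullet above is straightforward to sketch. Paths and names here are invented, and I haven’t actually tested this against iTunes itself, so treat it purely as an illustration of the idea:)

```python
import tempfile
from pathlib import Path

def relink_media(local: Path, nas: Path) -> Path:
    """Swap a local media folder for a symlink to the NAS share, so an app
    sees a "local" path that is really network storage. Any existing folder
    is kept around with a .bak suffix."""
    if local.exists() and not local.is_symlink():
        local.rename(local.parent / (local.name + ".bak"))
    if not local.exists():
        local.symlink_to(nas, target_is_directory=True)
    return local.resolve()

# Demo with throwaway folders standing in for the real locations.
tmp = Path(tempfile.mkdtemp())
nas = tmp / "nas_share"
nas.mkdir()
local = tmp / "iTunes Media"
linked = relink_media(local, nas)
```

Whether iTunes tolerates the share being briefly unmounted is exactly the open question from the bullet above.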

Visibility & Accessibility

I greatly underestimated the satisfaction of having seamless access to my previously-archived photos. For years I periodically moved six months to a year of my Lightroom library onto an external hard drive. This freed up precious storage space on two generations of laptops but meant that I couldn’t get to those images without finding and plugging in that drive. Moreover, with each offloading cycle I had to re-identify the folders I had previously moved over and check backup status before I could even start moving the current set. It was a lot of mental overhead and consequently was all too easy to put off.

Now, with the NAS, I simply scroll further back into the years within my Lightroom library, and there are my photos. I am rediscovering photos and memories that I haven’t thought about in years. In photographic terms it’s inspiring: There are some pictures there that I’m proud of, work that I really like. In emotional terms, it’s wonderful: The summer we hiked every day; the trip to Orcas; odds and ends of days.

Under the hood

Shawn and Gabe both discuss the DiskStation software on the Synology, so I won’t go much into it except to say that I really like it. Being basically a UNIX-based NAS, this is a sophisticated piece of hardware and has a lot of capability. Synology overlays an accessible interface on top of the linux guts of the device, making volume creation and management, user administration, and application installation really slick.

Since it’s linux you can do all kinds of stuff, like ssh into the box, copy over an rsync script, and use the box as a server to perform scheduled backups to and/or from remote servers. Zarino Zappia has a series of great posts that detail accessing the Synology via ssh and moving files around through the terminal.
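(A scheduled job on the box mostly boils down to assembling and running one rsync invocation. Here’s a minimal sketch; the host and paths are made up, and `-a`/`-z`/`--delete` are standard rsync flags, not anything Synology-specific:)

```python
def rsync_command(src: str, dest: str, delete: bool = False) -> list:
    """Assemble an rsync invocation for a scheduled backup job.
    -a preserves permissions and timestamps, -z compresses in transit,
    and --delete mirrors removals from the source."""
    cmd = ["rsync", "-az"]
    if delete:
        cmd.append("--delete")
    return cmd + [src, dest]

# A hypothetical pull from a remote server into a Synology volume;
# hand this list to subprocess.run() in a real scheduled script.
cmd = rsync_command("me@example.com:/home/me/photos/", "/volume1/backup/photos/")
```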

Good stuff.

Looking ahead

I would likely look for a device with more USB ports next time. I’ve already swapped the printer with another USB drive a couple of times. It would be much more convenient to have a spare cable that I could easily attach another drive to temporarily.

At some point I’ll explore the features I get by opening up the NAS to external access, like music sharing to my mobile devices, as well as general file sharing. I have some of those remote server jobs to schedule, and I’m interested in trying out the Photo Station app. Meanwhile, there’s not much downside to this solution for me, and at this point it’s pretty much a fully operational battlestation. I got storage, backup, new capabilities from the Tiny Server in the Synology, and even got my cabinet cleaned up as part of my installation. I’m happy.

  1. And even moreso if you count the landscaping in the back yard! [return]

For some time now I have used Mover to:

  • Copy markdown posts from Dropbox to my web server where my static blog engine parses and publishes them
  • Sync the slogger-generated entries and photos from that same web server up to Dropbox, where they are integrated into my Day One journal.
  • Occasional other file transfers

Previous posts about this: using mover & mover and slogger.1

Mover is positioned to provide this service at scale, and while I’m no enterprise power user, the tool was very useful for me. And hacking with its API was itself a fun project. So I was surprised to learn, when my daily sync jobs appeared to be failing, that the Mover API was no longer supported and had been unsupported for a year!

Well, it was good while it lasted. Fortunately, the guys at Mover were nice about it and pointed me to a new tool, Kloudless. Because Kloudless returns similarly structured JSON in response to API requests, it didn’t take too long to modify my existing code to get up and running with Kloudless. So far, it’s working great: Kloudless seems to be faster than Mover, the API is quite nice, and it’s built for what I’m doing — API-based file transfers, rather than offering the API as a sort of side feature.

The biggest obstacle to getting up and running was needing to figure out how to handle the paging of returned results; unlike Mover, Kloudless returns only up to 1,000 items at a time, and my Day One folder has more entries than this. This is entirely sensible, of course, but it took me some time to lock in on a solution. After that it was a matter of cleaning up a little and updating my Editorial workflow to copy a file over and rebuild, and I’m back in action on both my laptop and mobile devices.
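(The paging fix boils down to a loop that keeps requesting pages until the API says there are no more. The real Kloudless response fields differ from this sketch, which just shows the accumulation pattern with a stubbed-out page fetcher:)

```python
def fetch_all(fetch_page):
    """Drain a paged listing. `fetch_page(token)` returns (items, next_token),
    where next_token is None on the last page -- a stand-in for however the
    real API signals its page cursor."""
    items, token = [], None
    while True:
        batch, token = fetch_page(token)
        items.extend(batch)
        if token is None:
            return items

# A fake three-page listing standing in for a folder-contents endpoint
# capped at a fixed number of items per response.
pages = {None: ([1, 2], "a"), "a": ([3, 4], "b"), "b": ([5], None)}
result = fetch_all(lambda token: pages[token])  # [1, 2, 3, 4, 5]
```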

Everything should be shiny once again! Thanks, Kloudless.

  1. Here’s another great use of a webmentions implementation: Low-overhead lists of related posts, simply by linking them from another post and running the webmention hook. [return]

I’ve thought a couple of times about ways to connect this lonely personal site to the broader world. I have plenty of outgoing links, but little way to know if my posts actually connect with people on the other end, save for the occasional referrer link. Once or twice I started drafting a very small commenting system (one of which used the API and would have stored comments in that system), but my energy waned with my enthusiasm for actually doing something like managing comments.

So I’m really intrigued by the notion of POSSE publishing, which I encountered via Jeremy Keith’s Adactio. The core idea, briefly, is about owning the stuff you put online — which resonates with one of my original motivations for putting up this site — and the community around it has developed and assembled a bunch of resources to implement this goal.

What does this mean for interaction? Among other things, what we think of as “comments” begin their life by being published at the writer’s own site, through the implementation of webmentions, where the target site builds its own mechanism for interpreting and syndicating those comments. The publishers going full-on with this method are using it to syndicate to platforms like Twitter or Flickr, too, all with the goal of initiating everything they put “out there” from within their own tools and sites. Needless to say, this is pretty cool, and it’s an opportunity for me to tinker with Pretty Good Hat, too.

So How Does This Work?

There are two core elements to building this: Sending outbound webmentions, and handling incoming webmentions. I’m going to try taking on each of these in two phases: Phase One is most minimal, and Phase Two will try to be a little more sophisticated and integrated. Each step will need to work within the constraints of my little existing blog engine, and it all leans heavily on the writeup and work done by Jeremy.

Constraint one is that I don’t want to get too deep into the guts of the existing build system for my static blog. This means that building a lot of microformats support into my own posts will have to wait. I’ll have to work any microformats support into my own writing through specifiers to the kramdown processor and minimal modifications to the ruby code that actually builds the site.
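(For the curious, kramdown’s inline attribute lists are the “specifiers” in question: a `{: .class}` line attached to a block adds a class without touching the templates. A sketch of how microformat classes might be attached this way; whether these are the right properties for a given element is up to the h-entry spec, so treat this as illustrative:)

```
## A post title
{: .p-name}

The body of the post goes here.
{: .e-content}
```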

Constraint two has to do with the fact that this is basically a static site, and I like it that way. But being a good indieweb citizen requires reciprocity: I don’t want to just be a publisher of my own stuff, and need a way to respond to the generosity of others who may link to me. The “live” appearance of links that may ping my content via a webmention poses some design difficulties.

What I’ve Got So Far

What I have going for me is that this joint is built in a pretty stepwise fashion. The pieces are ruby, but they’re invoked sequentially in a shell script. Without modifying the core write-and-post mechanism, I can add a post-processing step that uses the mention ruby client to send webmentions automatically to each URL in my own posts.1
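(Under the hood, a webmention client does two things per link: discover the target’s webmention endpoint, then POST `source` and `target` to it. My setup uses the ruby client, but the discovery idea sketches the same in any language. Here’s the in-document half; a real client also checks the HTTP Link header first:)

```python
from html.parser import HTMLParser

class EndpointFinder(HTMLParser):
    """Find the first <link> or <a> carrying rel="webmention" in a page --
    the in-document step of webmention endpoint discovery."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if self.endpoint is None and tag in ("link", "a") and "webmention" in rels:
            self.endpoint = attrs.get("href")

def discover_endpoint(html: str):
    finder = EndpointFinder()
    finder.feed(html)
    return finder.endpoint

# A hypothetical target page advertising its endpoint.
page = '<html><head><link rel="webmention" href="https://example.com/endpoint"></head></html>'
endpoint = discover_endpoint(page)  # "https://example.com/endpoint"
```

Once you have the endpoint, sending the mention is a single form-encoded POST of the two URLs.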

So that’s sending webmentions. What about receiving them? Again, I’ll lean on Jeremy’s work and use his minimum viable webmention code. I start by simply receiving and logging the mentions, putting them into a sort of attachment file linked to each of my own posts.

One great benefit of building and joining these two elements of supporting webmentions is that I can build them in parallel and use one to test the other.

On the question of static versus dynamically displaying webmentions: The strongest way to respect my static-site goal would be to receive mentions, store them, periodically check for anything needing processing, and add any new mentions to the static site entry pages. But I decided on dynamic so I wouldn’t need to poll for updates or add more to the core build script. This means a small load when displaying pages, and I could change to a scheduled rebuild if this ever becomes a problem — but this is a very low-traffic site, so that’s unlikely. I also keep a single unified log file for webmentions that I could use to rebuild all mentions if desired, or as I get more sophisticated in how I treat them. For example, right now I reach out to target URLs and retrieve only the p-name of the entry, if available.2
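(The unified log can be as simple as one JSON object per line. The field names here are my own invention and the URLs are placeholders; a list stands in for the log file to keep the sketch self-contained:)

```python
import json

def log_mention(log_lines: list, source: str, target: str, name: str = None):
    """Append one received mention to a unified JSON-lines log.
    A real receiving endpoint would append to a file instead."""
    log_lines.append(json.dumps({"source": source, "target": target, "name": name}))

def mentions_for(log_lines: list, target: str) -> list:
    """Rebuild the mention list for a single post from the unified log --
    this is the replay that makes a full rebuild possible later."""
    return [m for m in map(json.loads, log_lines) if m["target"] == target]

# Two hypothetical mentions against two posts.
log = []
log_mention(log, "https://example.com/a-reply", "https://example.org/post-1", "A reply")
log_mention(log, "https://example.com/other", "https://example.org/post-2")
```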

So where am I now? After several hours of tinkering over a couple of weekends I have a functional webmentions implementation. Mentions should be registerable against any post here at Pretty Good Hat, and will be displayed on the single-entry page for that post. My endpoint is discoverable, but like Jeremy I also include a link (again on the single-entry pages) to a submission form for manually sending a mention. I accomplished this in what I think is a reasonably effective way given the constraints I described above: I had to add a few formatting stubs in my build code to put minimal h-entry syntax into my entries, and it took a fair amount of trial and error to get that display the way I like it.

Presenting the mentions requires a jquery call, so it cheats the static site a bit, but I’m already including jquery for lightview, so I may as well re-use it.

I extend a huge thank you to Jeremy Keith, whose posts got me started on this. (This one in particular.) I’m also grateful to this microformats post by Barnaby Walters, from which I learned a bunch. This is obviously a thing in process, and I welcome and appreciate any comments and feedback.

  1. This client is great. It finds links within an h-entry, so it won’t send mentions for anything in your footer or header, and it looks to discover a webmention endpoint at your target site. [return]
  2. The h-entry microformat allows me to syndicate a lot more than this; I’d be really interested in learning if there’s a community best practice for what to syndicate. My instinct isn’t to replicate the content of a mention, rather to pull just the name and link. [return]

It’s approximately the twentieth anniversary of the Microsoft “home page”, as we called these things at the time,1 and a team has recreated that original site after some digital spelunking. One of the fascinating parts of the story is that there’s no solid record of exactly when the site was launched or what circa-1993/1994 web technologies were used.

There’s a not-at-all secret easter egg tucked into the code:

  <TITLE>Welcome to Microsoft</TITLE>
  <!-- To read more about the re-creation of the 1994 homepage, see readme.html -->
    <H1>Welcome to Microsoft's World Wide Web Server!</H1>
    <H2>Where do you want to go today?</H2>
    <P>If your browser doesn't support images, we have a <A HREF="1994-links.html">text</A> menu as well.</P>

And the linked readme.html describes the effort, including details about the pre-client-side image map, and a cool personal note from one of the developers: “It’s a big story with many turns and it makes you thankful for all the people (and companies) who have sought to make the web great.”

  1. OMG, that means I’ve been making my own tiny web pages for twenty years. [return]

A good desk-clearing is in order, as is evident by the volume of drafts I have begun to accumulate. Here in no particular order is a partial inventory of unfinished thoughts, notes and un-realized blog fodder. Free ideas all around.

There was one about going on vacation with my entire digital world so one can stay up to date with RSS feeds, Twitter and the Facebook while sitting in the reeds next to the river. My road trip media entourage includes an iPad loaded with media and games to help a four-year-old get through a ten-hour road trip, the trusty MacBook Pro (here primarily as a photo download location, since I take lots of photos while on any trip); and the adult iPad1 loaded with music, comics, books ambitiously scheduled to read, and games to play. As it turned out, with all this digitalia at hand, I spent a lot of time biking up a mountain and later sitting in the shade reading an honest-to-god book.2 I know.

A lot of talk in my media sphere lately is about women in tech, specifically gaming but as part of a broad current that has widened over the past few years as women respond to sexism at conferences, in hiring, and in everyday interactions. Being a fallen academic, I naturally want to point some of this conversation to classic work like Rosabeth Moss Kanter’s Men and Women of the Corporation and the large body of writing on gendering in the professions (medical sociology, which I taught for a while, has a lot of this). The people responding to well-written and heartfelt posts about fear, sexism and misogyny with “everybody knows women don’t want to work in tech/choose to have babies/aren’t good at it anyway/asked for it” won’t care, but these bodies of work might help situate the authors’ experiences in a trajectory. While the tech industry didn’t invent this experience, it does seem to have polished it to a pretty fine sheen.

David Carr wrote recently on the overwhelming feast of prestige television:

The growing intellectual currency of television has altered the cultural conversation in fundamental ways. Water cooler chatter is now a high-minded pursuit, not just a way to pass the time at work. The three-camera sitcom with a laugh track has been replaced by television shows that are much more like books — intricate narratives full of text, subtext and clues.

On the sidelines of the children’s soccer game, or at dinner with friends, you can set your watch on how long it takes before everyone finds a show in common. In the short span of five years, table talk has shifted, at least among the people I socialize with, from books and movies to television. The idiot box gained heft and intellectual credibility to the point where you seem dumb if you are not watching it.

About the same time, an episode of Pop Culture Happy Hour focused on nerds and nudity, with discussion of “nerds” becoming more acceptable as “enthusiasts” through sharing the things they (we) love.

There is a lot that is complementary about these two pieces, in the rising cultural currency of pursuits that have been marginalized and/or lowbrow. Pop Culture Happy Hour in particular is in position to bridge these realms (and continues to be one of the podcasts that I really look forward to). The cynical take on this is that it’s all marketing, that the kids who grew up on comics and video games can now buy them for themselves. But the one I prefer is that the kids whose interests were marginal — or marginalizing — are now in positions to make exactly the things that they love, and are able to share those things widely and sometimes (hey, because of that culture industry) even make money at it. Good for them [us]!

Should you find yourself migrating to a new MacBook, as I did recently (happily, not due to disaster or failure), don’t forget to copy over your private SSH keys. What am I doing tonight? Re-generating keypairs so that I can publish this very post through my bloggy-woggy machine. Speaking of which, Frankenstein’s looks like a very cool static blog publishing project: “It is a nameless, horrible and recursive assemblage of bash, sed, cat, echo, date, mkdir etc…, has no option, no for, and uses almost no variable.” Neeeat.
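If you do end up re-keying after a migration, the mechanics are quick. A minimal sketch, assuming OpenSSH; the key path here uses a temp directory as a stand-in for `~/.ssh`, and the host at the end is a placeholder:

```shell
# Generate a fresh ed25519 keypair non-interactively.
# (A temp dir stands in for ~/.ssh so nothing gets clobbered;
# use your real key path in practice.)
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -f "$keydir/id_ed25519" -N ""

# Then install the public key on each host you use, e.g.:
# ssh-copy-id -i "$keydir/id_ed25519.pub" user@example.com
```

The `-N ""` gives an empty passphrase for brevity; you probably want a real one (or an agent) for keys you actually deploy.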

  1. The adult iPad is differentiated from the child iPad, most generally, by the relatively small amount of yogurt crusted into its chamfered edges. [return]
  2. James S. A. Corey’s Caliban’s War, sequel to Leviathan Wakes. Great, fun, galaxy-scale space opera. [return]

Alex Payne writes a letter:

Investors, shrinking in number but growing in wealth and political influence, own the means of digital production. Everyone else is doing shift work and hoping they still have jobs tomorrow.

Today — obligatory finally — Overcast hit the App Store, and all the guys are writing positive reviews.1

I have nothing wild to add; it’s nice: the overall UI refresh is fantastic compared to my previous podcast app of choice, Instacast. For a while I’d been dissatisfied with Instacast’s extremely modal separation between player controls, playlists, and editing subscriptions. On these scores I’m really happy with the feel of using Overcast. A few other things have made it a real pleasure to use, so far:

  • Effortless import from other podcast apps is brilliant and executed so well.
  • Miniplayer at the bottom is great, immediately improving on one of the things that was frustrating about listening in Instacast.
  • Overcast resumes playing perfectly: If it cycles out of memory while I’m doing other things on the phone, opening up the app makes it perform much like Rdio does, with the miniplayer appearing at the bottom after a short delay so it’s easy to resume without having to re-find what I was listening to. Unlike Rdio, it even starts playing automatically in the car when bluetooth is active. I love this.
  • Continuous play mode combined with easy arranging of podcasts in a playlist: Cool.
  • Integration with web account for desktop use: Marco has said recently that he doesn’t like doing web apps anymore, and I suppose this barely qualifies as an actual web app; but he really gets web app infrastructure, and the backend support for Overcast works seamlessly and does what it’s supposed to — provide a web point of access to the things I want to listen to for those times I’m at my desk and don’t want to work through the iPhone. All it’s missing is a way to add a subscription (perhaps this is in v1.1), and meanwhile Huffduffer perfectly fits the bill for ad-hoc episode additions.
  • Sharing: Built-in sharing to Twitter and Instapaper, yes, thank you. Once I get something to one of those services, it’s easy to pick up downstream through IFTTT. And if I want to get to a specific sharing bookmarklet in Safari, there’s no need to deal with a built-in web view for podcast sites — just an Open in Safari option that puts all those links very close at hand. If there were a Pinboard share option here, my social links would be complete, I think.

I’ll have to explore the recommendations feature some more.2 So far, I’m just pleased and happy. I have a couple of long road trips over the next two weeks, and I’m looking forward to putting the app much more fully through its paces.

  1. See MacStories and Snell, for example. [return]
  2. I see my friend Joel is having some trouble sorting it out, so I’m probably already at a big disadvantage. [return]

@YouAreCarrying on Twitter generates a random inventory from a compilation of classic Infocom text games. Here’s mine:

@youarecarrying i

a brittle scroll, a milky orb, a bottle of Bourbon, a beautiful brass bauble, your ticket, a spade, a ticket stub, an envelope.

Sounds pretty good to me.
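The mechanics behind a bot like this are charmingly simple. A hypothetical sketch in Python — the item pool here is illustrative, seeded with my own haul plus a couple of Zork staples, not the bot’s actual corpus:

```python
import random

# Illustrative item pool; the real bot draws from inventories
# compiled out of classic Infocom games.
ITEMS = [
    "a brittle scroll", "a milky orb", "a bottle of Bourbon",
    "a beautiful brass bauble", "your ticket", "a spade",
    "a ticket stub", "an envelope", "a brass lantern", "an elvish sword",
]

def inventory(n=8, rng=random):
    """Draw n distinct items and format them as a reply."""
    picks = rng.sample(ITEMS, min(n, len(ITEMS)))
    return ", ".join(picks) + "."
```

Everything else — listening for the `i` mention, posting the reply — is just plumbing around that one `random.sample` call.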

Nich Maragos, Procedural Life Labyrinth: 17 Views of NetHack:

I played NetHack yesterday, I played it today, I’ll play it tomorrow. There’s a better than even chance I’m playing it as you’re reading this. I do this despite having basically exhausted the challenge of the game. Sure, I still die a lot, but less out of inexperience and more out of impatience. I know what to expect from every monster and what to do to counter it.

So why not move on? If it’s not hard anymore, is there a point? I suspect NetHack has gone past something I do to challenge myself and become another one of my comfort zones. I look over at my unopened copy of Bravely Default, and think, “That seems like a lot of effort.” So I sit in the same chair, in the same room, playing the same game, day in and day out. It’s not a good look.

I’ve been enjoying Steve Lubitz & Co’s Isometric podcast for several weeks now,1 and his vicarious company for a little longer than that. Player Two Start is a sweet and nuanced story of his plan to introduce his daughter to his favorite classic games, one that gradually turns into discovering a new game to enjoy with her, and one that matches her abilities. Along the way Steve makes some nice observations about elements of console gaming that hinder kids whose hands are still small and whose gaming started with touch screens, of all things. These are things I’ve started to think about with my son, too, and I really love what Steve has put together here.

(Related and also recommended: Pop Culture Happy Hour’s making toddlers into nerds episode, which gave me a small truckload of great recommendations while being similarly sensitive to the way kids need to find their own things to geek out on.)

  1. A show recently asked to join the 5by5 network — congratulations! — and on which Steve claims to be merely the token male among a crew of sharp, all-star co-hosts. But such protestations should not be taken seriously, for the entire cast does a great show. [return]

Use Simple Words

Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.

Use Beautiful Words

There’s an amazing thing that happens when you start using the right dictionary. Knowing that it’s there for you, you start looking up more words, including words you already know. And you develop an affection for even those, the plainest most everyday words, because you see them treated with the same respect awarded to the rare ones, the high-sounding ones.

Which is to say you get a feeling about English that Calvin once got with his pet tiger on a day of fresh-fallen snow: “It’s a magical world, Hobbes. Let’s go exploring!”

I want AppleTV to get better in the following way: multiple cross-provider playlists of favorites, things to watch, and other things I have bookmarked. This would eliminate much of the need for a big UI overhaul and improve TV-watching quality of life a ton.

Pipe dream: an open channel service similar to Roku that would let me plug in stuff like Rdio and, dare I say, Amazon. This will never happen, of course.

Also a pipe dream: a more visually-complete iOS app that would let me truly navigate the AppleTV menus from my phone. Use case: queuing up the next episode of Rescue Bots from the kitchen without having to get into line of sight of the TV. Come on, this can’t be impossible, right?

All I really want1 for iOS is a way to select items from clipboard history. Use case: grabbing a username and a password from 1Password and then switching to a target app where I could paste them both via two operations without switching back and forth twice between the apps. Similarly for things like email and phone number from a contact, or multiple snippets from an article I want to quote. The ease and great convenience of working with the Alfred clipboard history has spoiled me.


It wouldn’t be responsible not to speculate, at this point. Here’s my Beats acquisition theory, which more or less came to me while listening to the first twenty minutes of The Prompt’s discussion of the value of curation, on the bike at the gym. I didn’t get to finish the show, so it’s likely that my so-called original idea was discussed thoroughly after I got off the bike, but here goes:

I agree on the increasing value of curation; there’s too much music and noise out there for me to find the music I might like, and I’ve never found algorithmic you-might-like lists very useful. In my case it’s because they can very rarely tell me why I might like something.2 For this reason I’m pretty attracted to at least the concept of the Beats music service, which is promoted as being much more driven by real-person music editors.

That said, making a profitable streaming service probably relies substantially on learning what consumers want to listen to, and with 1) slow uptake of iTunes Radio and 2) streaming supplanting buying, Apple was losing out on a precious resource that Spotify, Rdio, and (to a lesser extent, due to its newness) Beats were eating up at ferocious rates: actual listening data.3 By buying Beats, and assuming that Beats Music does gain a foothold among users, Apple gets insight into consumer preferences that was probably leaking steadily further out of its vision every quarter. And since we’re learning that the intent is to run it as basically a subsidiary, the deal maintains the brand the Beats team has been building, one that may be a little more vibrant to some consumers than Apple’s. They also get a big headphone business and, along with it, an injection of new industrial design and software engineering expertise. Seems like a really strong move to me.

  1. That’s probably not all I really want, but it’s a start, and all I can think of on this sunny afternoon. [return]
  2. Pandora is supposed to be better at this, but I just never clicked with that service. [return]
  3. Worth mentioning here is Last.fm, which I have to imagine has, because of its age, by far the biggest listening-history database on the planet. That it’s not at the center of all of these conversations has got to be some kind of huge misstep by CBS. CBS owns Last.fm; everybody knows this, right? [return]

The release of Editorial 1.1 has restarted me on the path of tinkering and tuning, and so far my impression is something along the lines of man oh man. Now, you’re going “oh great, another navel-gazing tools post,” and, no, you’re not wrong. This is another navel-gazing tools post. I do recall my college creative writing professor noting that poems about writing poems often make the worst poems; fortunately that observation probably doesn’t apply to blog posts about blog tools. We will see.

The short version is that I’m excited about this tool and this is a really powerful update. To elaborate:

  • Sub-workflows replace the previous ability to save and re-use workflows as “presets,” and improve on this idea greatly. A workflow can call another workflow as part of its own process, which means that re-using the things I’ve already built is easy, and I can improve those component workflows without having to trace their changes into the other places where they’re used.
  • The UI workflow builder is cool and I’m sure I will only use a tiny bit of its power. However,
  • Together, sub-workflows and UI builder are fantastic: I can use my existing workflows and piece them together into a parent workflow complete with buttons! In about ten minutes of work I have a popup sheet with buttons for my PGH publishing options and tools, which I can invoke from the keyboard directly. This is really, really slick. I admit I did not quite grok this when I read Ole’s preview posts about it, but the simple explanation really is true: one could use this to make full-on Python-powered applications, complete with GUI, that run inside Editorial. I’m really excited to see what people come up with.
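To make the composition idea concrete, here’s a loose analogy in plain Python — this is not Editorial’s actual workflow API, just an illustration of why callable sub-workflows beat copy-pasted presets:

```python
# A "sub-workflow" is just a reusable step that a parent workflow
# calls. Improve the step once, and every workflow that calls it
# improves with it. All names here are illustrative.

def markdown_to_html(text):
    """Component sub-workflow: a toy Markdown-to-HTML step."""
    return "<p>%s</p>" % text.strip()

def upload(html, dest="preview"):
    """Component sub-workflow: a toy deploy step."""
    return "uploaded %d bytes to %s" % (len(html), dest)

def publish(text, dest):
    """Parent workflow composing the two components."""
    return upload(markdown_to_html(text), dest)

print(publish("Hello, web! ", "production"))
```

With presets, fixing `markdown_to_html` meant re-tracing the change into every workflow that had copied it; with sub-workflows, `publish` just picks up the fix.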

Also great: Ole’s backup/restore workflow, which has made it bang-easy to keep my workflows in sync across devices. And now, with the iPhone version, I can begin work on any device and seamlessly pick up on another; this had been an obstacle between my two iPads, where I only had really tuned-up workflows on one or the other. Now I’m equally capable on either iPad or my phone, no friction. It’s almost spooky how cool it is to see the workflow panel get populated with all the workflows restored from a backup. Previously, the way to do this was to sync each workflow manually via the workflow repository. The brilliance of seamless, easy sync just cannot be overstated; I think the ease with which one can keep iOS platforms in sync is going to be a very big deal for this release. Relatedly, Ole Moritz has also put together a snippet backup workflow that looks likely to ease some of the frustration with the new method for using TextExpander in iOS 7.

My small kit of tools for publishing this entire site consists primarily of: write, upload1, render static HTML with my home-built tiny engine (server side), deploy preview, and deploy “production.” I have had these operations automated within Editorial for quite a while now, and every facet of doing that just got even better. Moreover, the app’s improvements give me more reason to incorporate it into even more of my non-blog writing. In short, a great app is made even better; if you’re already a user you’re going to like it, and if you haven’t tried it but like writing and tools, I think this will be right up your alley.

  1. Really, upload here refers to syncing from Dropbox over to my web host server, where the server side static HTML rendering gets done. [return]
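For the curious, the whole pipeline is the kind of thing a few lines of shell can stand in for. This is a hypothetical sketch only — the directories, the renderer, and the deploy target are all placeholders, not my actual tiny engine:

```shell
set -e
SRC=$(mktemp -d)   # stand-in for the Dropbox-synced source directory
OUT=$(mktemp -d)   # stand-in for the preview deploy target
printf '%s\n' 'Hello, web!' > "$SRC/first.md"

# "Render static HTML": wrap each source file in a bare page.
# (A real engine would run Markdown conversion, templates, etc.)
for f in "$SRC"/*.md; do
    base=$(basename "$f" .md)
    printf '<html><body><pre>%s</pre></body></html>\n' "$(cat "$f")" > "$OUT/$base.html"
done

# "Deploy production" would then just be a copy to the live docroot, e.g.:
# rsync -a "$OUT"/ /var/www/site/
```

The point being that write, render, preview, and deploy are each one small, automatable step — which is exactly why they map so cleanly onto Editorial workflows.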

Adactio: Journal—Selfish publishing ▹

I continue to just really, really like what Jeremy Keith writes about putting stuff out on the web:

I have to admit, I really don’t care that much about the specific technologies being discussed at indie web camps: formats, protocols, bits of code …they are less important than the ideas. And the ideas are less important than the actions. As long as I’m publishing to my website, I’m pretty happy. That said, I’m very grateful that the other indie web folks are there to help me out.

What’s New in Editorial 1.1 ▹

I’ve been working on this for over nine months, and in a lot of ways, it feels more like a 2.0, or at least 1.5. There’s a new look for iOS 7, an iPhone version, tons of refinements everywhere, and several major new features for building even more powerful workflows.

No kidding! What a great update to an indispensable application. Most of my posting here, as well as other writing, I do with Editorial, and having it available on iPhone is huge. The TaskPaper support also looks intriguing. Can’t wait to see the writeups by Viticci and Weatherhead.

Pre-post update: naturally, Federico Viticci already has some gajillion in-depth words written about the update. Bookmarked.