I’ve been looking at using XFN, and the first thing I wanted to do was share (most of) my feed subscriptions. I don’t think XFN can do this very well, so I’m going to ramble meaninglessly about it for a little bit.
XFN describes relationships between people in generic, simple terms. This is good, since getting too specific would just lead to a ridiculously large list that nobody would actually use. A manageable set of carefully thought-out terms is the right way to go. However, there are a couple of things I’d like to be able to do that XFN doesn’t cover, and I’m wondering if it should.
Feed subscriptions. What’s the right word to describe those links? I haven’t met or even conversed with most of the people whose feeds I read every day, so none of the XFN values seem to fit. I like “fan” to describe this relationship, and it covers others too; I could use it to link to a band or author or book or TV show just as well as a person. But then, I’m not really a “fan” of many of the aggregation or news sites I follow. Is “follower” better? “reader”? Would we need both terms?
Indicating that a URL is a source for whatever I’m talking about is useful. Obviously, “source” makes sense for this one. This is just like those “via” links on news posts that indicate sources.
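To make that concrete, here’s roughly what these links might look like — with the caveat that “fan” and “source” are hypothetical values I’m toying with, not part of the XFN profile:

```html
<!-- "colleague" and "met" are real XFN values -->
<a href="http://example.com/jane" rel="colleague met">Jane</a>

<!-- "fan" and "source" are hypothetical, not in the XFN profile -->
<a href="http://example.com/some-writer" rel="fan">a feed I read</a>
<a href="http://example.com/some-article" rel="source">where I found this</a>
```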
So are these sorts of relationships in scope for XFN? If XFN is about relationships between people, should it be capable of indicating relationships to things? Should XFN cover these things for convenience, or should there be another system for them? There are some ideas on the XFN wiki, but I don’t know what’s actually going on. Like I said, meaningless rambling.
I listen to a lot of audio during the week, much of it from The Conversations Network. Here are a few shows I found especially interesting this week.
- Put all your data in one place. New correlations and ways to analyse the data will emerge.
- With separate data spaces, you will miss important, valuable relations.
- Save queries (again, in the same space) and get notified of changes and updates.
- Stored queries can connect to other queries.
- Stream data into the system and learn and fix errors (repair history) along the way.
- Sequence neutrality. End state is the same regardless of the order the data arrives.
- “… when you dream, you’re doing deep recontextualization … to remedy some things you actually have to go offline for.”
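That sequence-neutrality point is easy to sketch: if incoming data is merged with an operation that’s commutative and idempotent (plain set union here, standing in for whatever the real system does), the end state can’t depend on arrival order:

```python
# Sequence neutrality: merge incoming batches with set union, which is
# commutative and idempotent, so the end state ignores arrival order.
# (The people and facts here are made up for the sketch.)
facts_a = [("alice", "email", "alice@example.com"),
           ("alice", "city", "Ottawa")]
facts_b = [("alice", "city", "Ottawa"),
           ("bob", "email", "bob@example.com")]

def merge(*batches):
    state = set()
    for batch in batches:
        state |= set(batch)
    return state

# Same end state no matter which batch arrives first.
assert merge(facts_a, facts_b) == merge(facts_b, facts_a)
```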
Mandating coherence and structure doesn’t work.
- “Valid” means different things to different people and groups.
- Imposing structure in separate realms can and does work, but designing for interoperability between realms creates all kinds of new value.
- Just get all your data together and let structure emerge/evolve/develop.
- Don’t throw structure away, defer it until you really know what you need.
- RDF and graphs. I’ll admit right here that I don’t really understand these (yet), but I may have to look into them more.
- Use peer pressure and self-interest to build interesting, valuable, open data sets. It’s the only way to make this stuff happen.
- “There’s no such thing as quality of metadata.”
- “… there’s that perception that coherence is quality.”
- “Data first, instead of structure first.”
- “Just start off, and write down whatever you want, and then you can incrementally add structure to it, and make value out of the structure as you build.”
- “There’s a lot of structure in email, if you want, it just depends on what kind of structure you want to look at.”
- “A lot of the semantic web research was based on the big hypothesis that was ‘If whatever, then we could do this.’”
- “… when they show up, we can party.”
- “It’s clear now that it’s all about data and loose pieces connected together than it is about uber ontologies …”
- Keep things really simple to start. Just get basics working so we can build on them.
- The potential of OpenID endpoints for service discovery. I hadn’t thought about this, but it’s really obvious once you hear it.
- These technologies are actually great for big companies. They can make use of them without having to be the initiator and deal with the suspicion and other associated problems.
- OpenID takes a big load off of developers by handling their whole authentication system for them.
- Interesting balance between design-by-committee and just-build-some-stuff with DataPortability and DiSo.
- Keep things loosely joined. Don’t depend on one tech for a task, make it all swappable.
- Being the repository for “master profiles” is valuable.
- Focus on people who already get it (to whatever degree), and build momentum. You can’t convince other people without that base.
- Building a personal reputation based on trust of opinions and knowledge is extremely valuable.
- Monetizing too early can be dangerously limiting. Tough balance.
- “Simple usually wins in the long term.”
- “Help people discover new things.”
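The “data first” idea sketches nicely too: capture records as free-form dictionaries, then layer structure on later as views over the same data (the fields and entries here are made-up examples):

```python
# Data first: capture whatever comes along as free-form records.
records = [
    {"text": "lunch with Sam", "when": "2008-01-10"},
    {"text": "ship the release", "tag": "work"},
    {"text": "call the bank"},
]

# Structure later: once a pattern matters, express it as a view over
# the same data -- deferring structure didn't throw anything away.
tagged = [r for r in records if "tag" in r]
dated = [r for r in records if "when" in r]
```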
There’s definitely a theme there. Some other stuff:
- Jon Udell interviews Fernanda Viegas & Martin Wattenberg about Many Eyes, an interesting social data visualization project.
- Whedonesque reported that Joss Whedon would be on the GeeksOn podcast. And he was. I checked out a couple of other episodes as well, and it seems like a really interesting, well done podcast. I’ll keep listening.
- Chris Messina on DiSo is a nice introduction to the DiSo project.
It’s been a full week since I wrote something here. Getting into the habit of writing and posting regularly is interesting. I’m a really private person trying to start a really public habit (sure, nobody’s reading now, but it’s still out there…). I’m absolutely convinced that it’s worth my time and effort to keep this site active, but I’m having trouble keeping myself in motion. So here I am writing about how I don’t write as much as I want. Better than nothing, I suppose.
The more I read, the more I see that the reason all the lifestreaming stuff out there mostly escaped me until now is that I’ve arrived at it via a different route. See, I want a whole new way to use computers and work with ideas and thoughts (not that I think big or anything…), and lifestreaming is really just a small side effect of that ridiculously large goal.
I’ve had a vague sort of plan evolving in my head for this thing for years now. I started out focusing on the user interface, working out the capabilities that I wanted. But there’s not much point to a UI without anything in it, so I turned to thinking about the data I’d need to store (um, everything), and how to store it. CouchDB has had my attention for a while as the most likely storage system for this, at least at first. I’m currently reading On Intelligence, so Jeff Hawkins’ hierarchical temporal memory seems like it could be even better, if it, you know, existed and I really understood its potential capabilities.
Anyway, my point is that I’ve been wanting a system to aggregate and store all my stuff, and that’s really what lifecaching is about. It has other names too; one that I like is lifebits. I’m starting my own project to mess with this (which will be open source once I have something useful), and I’ll write about it plenty, I’m sure. One goal will be to get this site running on top of it soonish.
I know this is all pretty hand-wavy stuff, and I’m smart enough to know that I’m probably not that smart. I’m only just beginning to dig into some of the difficult details of this system, and believe me, I know it’s all going to be way harder to do than I think it is (and I already think it’s hard). Gotta try it anyway. These grand plans of mine have just been idle speculation and a few big, messy mindmaps so far, and it’s time to start taking a few baby steps toward the goal. Any progress is better than none.
One lifestreaming article that I actually found surprising was about lifecaching. I agree completely with that article, so why was it surprising? Because I hadn’t considered caching to be optional. Sucking in all that data and saving it in a system that I control is a fundamental requirement. It’s the first step, not an add-on feature.
(Hm. Does lifestream work as a verb?)
I’ve spent a lot of time this week thinking about exactly what I want to do with this site. I gradually built a mental image that combined all my public “stuff”. It would suck in feeds of my activity from all over the net, whether I used twitter, or tumblr, or facebook, or del.icio.us, or whatever else happened to be useful. All that stuff would get combined and displayed in interesting, useful ways (to me at least, and hopefully to others too) right here.
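A rough sketch of the merging step, using two tiny inline stand-in feeds (a real version would fetch the feeds over HTTP and parse proper RFC 822 dates; the titles and dates here are invented):

```python
import xml.etree.ElementTree as ET

# Two tiny stand-in feeds; a real version would fetch these over HTTP
# and parse real RFC 822 pubDates instead of these ISO-style ones.
feed_a = """<rss><channel>
<item><title>bookmarked: CouchDB docs</title><pubDate>2008-01-12</pubDate></item>
</channel></rss>"""

feed_b = """<rss><channel>
<item><title>tweet: reading On Intelligence</title><pubDate>2008-01-14</pubDate></item>
<item><title>tweet: hello world</title><pubDate>2008-01-11</pubDate></item>
</channel></rss>"""

def items(feed_xml):
    # Yield (date, title) pairs for every item in a feed.
    for item in ET.fromstring(feed_xml).iter("item"):
        yield item.findtext("pubDate"), item.findtext("title")

# One combined stream, newest first (ISO-style dates sort lexicographically).
stream = sorted(list(items(feed_a)) + list(items(feed_b)), reverse=True)
```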
Just as I was figuring all this out and getting really interested in the whole thing, this article about lifestreaming came along (with a link to the very interesting Lifestream Blog). So all my great ideas already have a nice name and everything. Figures. In hindsight, I can remember lots of news over the years about lifestreaming and life recording, but the term itself doesn’t ring a bell. The way I had it pictured in my head, though, it was a world of too-much-information. Not something I was interested in.
Now that I’ve come up with it myself, though, it sounds fantastic. At least, when I do it my way… So this site will be my playground for whatever crazy lifestreaming ideas I have or I find on the interwebs. Should be interesting. Lifestreaming actually meshes really nicely with a project I’ve been playing with in my head for years, so there’s lots more of this crap to look forward to.
I’m loving the second beta of the upcoming Firefox 3. Once some essential plugins get updated, I think it’ll even entice me back from Epiphany, which I really like (much better than Firefox 2). There’s no point listing the things I like about FF3, since a million other people have listed them already.
One thing about it drove me crazy: the popup window limit. I hit it all the time because of the way I use Google Reader. As I read feeds, I hit the ‘v’ shortcut key to open an item in a background tab (very handy). Maybe I do this way more than other people do, because I haven’t seen any other mentions of this “problem”. Anyway, here’s a fix. Open a new tab or window and enter about:config in the address bar. Enter popup_maximum in the filter box. Change that number (I went from 20 to 60). Yay.
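For what it’s worth, the pref that filter matches is dom.popup_maximum, and the same change can be made permanent with a user.js file in the profile directory:

```js
// user.js in the Firefox profile directory
user_pref("dom.popup_maximum", 60);
```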
So after complaining to myself about myself spending too much time on games lately, I took the afternoon to set up a couple of things on my server.
First, and simplest, I set up apt-cacher to save time and bandwidth. Nice and easy, especially with the tip in the last couple of comments on that article about setting apt (on all the clients) to use a proxy, which saves changing every line in sources.list. Just so I don’t lose it, here it is: create a file in /etc/apt/apt.conf.d called whatever you like (I used 90apt-proxy-local) with this line in it:
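Assuming apt-cacher is listening on its default port, 3142, and with a placeholder hostname:

```
Acquire::http::Proxy "http://apt-cacher-host:3142";
```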
Obviously, use the actual hostname of the machine running apt-cacher there. And just as obviously, I cut and pasted that from the article linked above.
Second, I started setting up Puppet. Since I’m using Xen on my new hardware, I’m likely to have more virtual machines kicking around than I want to manage. Automation is the obvious answer, and Puppet looks pretty good so far. Having a nice version-controlled description of all the machines I’m running, which then gets enforced, is very appealing. Here’s what I’m hoping to get out of Puppet:
- Documentation. I think the puppet manifests (being declarative) will serve nicely as documentation of my setup.
- Enforcement. The tool takes my documentation and makes it real.
- Backup. Keeping the complete documentation for my whole setup in a versioning tool gives me an extra layer of safety beyond normal backups.
- Ease. I want to write down, just once, how I want my machines set up. Then I want a tool that will make sure it happens, and stays happening.
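As a taste, a minimal manifest might look like this (the package and service are just examples, not my actual setup):

```puppet
# Declare the desired state; Puppet makes it so, and keeps it so.
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
```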
We’ll see how much of that I get with Puppet.