A Decade Later, Are We In Another Tech Bubble?

Lots of people are buzzing lately that we're in another "dotcom" bubble, roughly ten years after the last one. In mid-November, noted New York venture capitalist Fred Wilson described some "storm clouds" ahead for the tech investing space, pointing to what he sees as unsustainable "talent" and "valuation" bubbles. This was around the time of the TechCrunch story about the engineer Google paid $3.5 million to stick around. Not long after that, Jason Calacanis of Mahalo fame wrote a brilliant edition of his email newsletter in which he outlined four tech bubbles he sees right now: an angel bubble (similar to Wilson's valuation bubble), a talent bubble, an incubator bubble (new firms cropping up to copy the successes of Y Combinator and TechStars), and a stock market bubble.

And the frothy news just keeps on coming: Groupon this week allegedly turned down a $6 billion acquisition offer from Google (yes, that number has nine zeros and three commas in it [1]). Oh, and also, the SecondMarket valuation of Facebook is about $41 billion. That makes it #3 in the web space after Amazon and Google.

And, finally, there was this hilarious and depressing tweet going around yesterday from @ramparte:

But for me the proof was in two recent encounters with people decidedly not in the tech industry: my accountant and my banker. Each of them, upon learning what I do for a living, started talking to me about their tech business ideas. One was intriguing, one was, shall we say, vague, but everywhere I turn these days I feel like someone's trying to pitch me on their idea for a social network, a mobile application, or whatever. And who am I? I'm a nobody. Can you imagine how many pitches people like Fred Wilson and Jason Calacanis get? It must be absurd. And in any case, what most of these folks don't realize is that the idea is about 5% of a successful business. The remaining 95% is laser focus and nimble execution.

I feel lucky to be in technology right now--the economy is so crappy for almost everyone else. And that's got to be one of the driving factors of this bubble right now. It's one of the only healthy industries out there, and it's attracting people who are disenchanted with whatever sick industry they happen to be in. Other driving factors of course are the recent explosive growth in mobile computing, the maturation of the web development space (frameworks like Ruby on Rails and Django that make web app development almost frictionless), and the rise of APIs and web services that allow vastly different sites to integrate their offerings.

It's as if all the fishermen in the world have descended on one supremely awesome spot. A lot of people will catch a fish or two, some will catch enough that they'll never have to fish again, but most won't catch a thing.


[1] If anyone ever offers me $6 billion for anything, please remind me not to turn them down.


The Next Phase is Not Web 3.0

O'Reilly Media's Web 2.0 Summit, which took place over the last few days in San Francisco, got me thinking: why is the web still only in version 2.0? Tim O'Reilly himself coined the phrase Web 2.0 back in 2004 for his first conference of the same name. It was defined by an evolution in front-end technologies like AJAX and bubble letters, back-end technologies like web services and RSS feeds, and business models like crowdsourcing and software as a service.

So given that we're six years into Web 2.0, when will we get to Web 3.0? The answer is never. No one will ever start calling it Web 3.0. For one thing, it's not catchy. Web 2.0 has a certain ring to it that Web 3.0 doesn't. Also, I think it will be difficult for people to come to a consensus on when technologies have evolved enough to move to a new version number. Web 2.0 was coined by a single person; Web 3.0 would have to emerge more organically. We're much more likely to describe future "versions" of the web in descriptive phrases rather than numbers.

Tim Berners-Lee has always been against this nomenclature anyway. His alternative to "Web 2.0" was the "Read/Write Web," because of the way in which users became empowered to contribute en masse to the data on the internet. And in 2006, when asked what Web 3.0 would be, he said that a component of it would be "The Semantic Web," or "a web of data that can be processed directly and indirectly by machines." In other words, a web in which the machines can glean meaning from the data, in addition to simply manipulating it.

But I would argue that we are already at the next evolution of the web, and yet it's not about semantics. It's about context. This new phase of the web has largely been catalyzed by two breakthroughs: advances in the power and reach of mobile computing, as well as what Mark Zuckerberg calls "the social graph." Both of these lend not meaning but context to data, and that is a very powerful thing.

Mobile devices can contextualize data around locations, photos, video, and audio (among other things). And of course the social graph connects data to people. The "Internet of Things," as it continues to grow, will increasingly connect data to objects (shall we call it the "object graph?"). Although context is a step in the direction of semantics, we are still a ways away from getting machines to the point where they can interpret meaning from this data.

Indeed the "web" isn't even about machines anymore. What was once a network of machines connected by wires is now a network of people, places and things connected by context. There is a new network growing atop the old.

Perhaps the semantic web will come in version 4.0 (although we still won't call it that). But I think the best characterization of the most recent evolution of the web is the "Contextual Web" (I am not the first to call it such). Twitter, Facebook, Foursquare, the iPhone, Android, and many other prominent technologies can fall under this term, and I think it best describes the current proliferation of mobile and social technology that is spawning so many new and interesting businesses.


The Difference Design Makes

With the recent release of Windows Phone 7, Microsoft has finally figured out what Apple has known for many years: design sells. The interface is austere in a way few Microsoft products are. In some ways it's almost too sparse--users navigate from screen to screen by means of two-dimensional "tiles" rather than 3D buttons. Ultimately, though, underdone beats overwrought. Granted, "design" is a huge umbrella term, covering everything from ergonomics to user interaction to typography to color palette, but all those things contribute greatly to people's emotional response to a product. Good design makes a product trustworthy. It indicates the level of care that went into creating the product. It has the user's best interests at heart.

The key differentiator in software used to be features. We thought that more features and more customizability meant happier customers. We were wrong--more features meant customers who were more confused and frustrated. Turns out, in an age of abundance, clarity is a scarce resource. Good design is the conduit of clarity.

Compare the Windows Phone 7 home screen above with the way Windows Mobile used to look:

Mom, have fun figuring out what exactly a "Comm Manager" or "SIM Manager" is.

Mint.com was able to take on a huge company like Intuit (and eventually get acquired by them for $170 million) by competing solely on design and user experience. I never got any direct mail from Mint like I do from Intuit. I never saw Mint.com on the shelf at Staples like I did Quicken. Mint has probably 1/10th the number of features that Quicken has. And yet, in the end, their beautiful design and simple interface added up to $170 million in value.

Mint.com Leaves a Bad Taste in My Mouth

Mint.com is a great website in a lot of ways. It's great to be able to track all your financial data in one place, it has a really nice user interface, and it's free. But when a company has access to so much of your sensitive data, it is an understatement to say that they need to be really careful with that data. Today Mint did something to lose my trust forever, something that led me to cancel my account immediately. Early this morning I received six blank emails from stage-mini@mint.com. Being in the business, I immediately recognized that this was likely coming from Mint's staging (test) server. I went to their support forums, searched for this issue, and found this thread. I was the eighth person to comment and now there are over 200 comments and counting. The main frustration seems to be with the fact that Mint tried to reassure users that no customer data is stored on the test system from which these emails originated. Which raises the question: why, then, did it store our email addresses?

The websites I work on store far less sensitive user data than banking and credit card information, and yet we never EVER store real user email addresses (or mailing addresses or passwords) in our test environments. The fact that Mint screwed this up reveals a major lack of competence in the area of security. And security needs to be their top priority, or at the very least a core competency. If they aren't getting this right, what else aren't they getting right? Consequently, I cancelled my Mint account just about as fast as I could.

The lesson here is not so much that companies shouldn't store real user data on their test systems, but that if they do, they need to clearly communicate that to customers. If Mint had said, we store no customer data in our test systems other than email addresses, I may have questioned why they needed our emails on the test environment, but I still might have trusted them. When they said they stored NO customer data on stage, and yet somehow that environment had my email address, well, then all trust is lost.

Why Should I Care?

The information overload problem is bad and getting worse. Nicholas Carr, in his sentimental but thought-provoking book The Shallows: What the Internet Is Doing to Our Brains, argues that our chemical addiction to new information is eroding our ability to concentrate on lengthy tasks. But even if this is true, it is only one part of the information overload equation. There's another side effect that I haven't seen much written about. Information overload is destroying our sense of context. Old media made an attempt at contextualizing information. Lengthy articles in the New York Times Magazine, for example, wouldn't just give me facts, but would also tell me why I should care about those facts. It would give me some background and connect these facts to other things I probably already cared about.

On the other hand, the vast majority of new media, by which I mean things like blogs and Twitter and even the 24-hour news channels, keep things short--that's what people want, right?--and rarely build a contextual framework around the information they present. That's not to say that blogs and Twitter aren't useful for certain things. Twitter is an amazing way to keep up on the zeitgeist, a use case I missed when I first signed up for Twitter and dismissed it as useless. Still, most of the time when I engage with new media I find myself saying, "So what?" I may know what's going on, but it's increasingly difficult to see the bigger picture. I feel like I'm almost always "in the weeds."

But not all hope is lost. If the Internet has proven one thing it's that it's an amazingly flexible platform on which to solve information problems of all sorts. I'd actually love to see someone build a solution to this problem, one that pulled my RSS and Twitter feeds, analyzed the content to determine what topics were being discussed, and searched the web for lengthier / meatier pieces on those subjects. I don't think this would be that hard to do. The question is--would I then have time to actually read all this additional information?
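For the topic-analysis piece of that idea, the simplest possible version is just keyword frequency over the short items. Here's a toy sketch in Python--all names are hypothetical, and a real version would use proper NLP and actually fetch the feeds rather than start from strings:

```python
import re
from collections import Counter

# A tiny stopword list; a real implementation would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "for", "on", "that", "this", "with", "as", "are", "be", "new"}

def top_topics(posts, n=3):
    """Return the n most frequent non-stopword terms across short posts."""
    words = []
    for post in posts:
        words += [w for w in re.findall(r"[a-z']+", post.lower())
                  if w not in STOPWORDS and len(w) > 2]
    return [word for word, _ in Counter(words).most_common(n)]

# Pretend these came out of my RSS and Twitter feeds.
feed = [
    "Another angel round for a mobile startup",
    "Mobile check-ins are the new social currency",
    "Why mobile payments will be huge",
]
print(top_topics(feed))  # "mobile" should rank first in this toy feed
```

The extracted terms would then seed searches against a longform index or news archive for the meatier pieces; that step is left out here.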


What You Should Be Doing Instead of Checking In

I use Foursquare a lot. You could say I'm part of the passionate but niche group that checks in at least a couple of times per week (more like a couple of times per day). The odd part of it is that I can't really tell you why I do it. Is it for the badges and mayorships? Not really--for all the talk of "game mechanics," these things are mostly pretty lame. Is it for the specials I can get from local retailers? No, there aren't enough of those available yet. Is it because of the serendipitous encounters I can have with friends? No. Having two young children precludes that quite a bit.

So if I'm not doing it for any particular reason, maybe I should spend some of my check-in time doing something productive.

Enter CloudMade's Mapzen POI Collector. This iPhone app exists for one purpose, and one purpose only: to add and update points of interest to the open-source geo database OpenStreetMap--the Wikipedia of geography.

I realized today that instead of always checking in everywhere I go, I could earn a lot more karma points (if not badges and mayorships) by entering and editing points of interest everywhere I go. (By the way, if you want to search points of interest, don't use this app; use something like the Open Maps app.)

Why the karma? The data I contribute using the Mapzen app is open and licensed under the Creative Commons BY-SA license, so it can be freely and easily used in myriad applications that are competing with closed platforms such as Foursquare and Yelp.

So from now on I'm going to try to do my part to make the world a better place--instead of checking in on Foursquare, I'm going to spend that time making OpenStreetMap so good it'll give Google, Foursquare and Facebook all a run for their money.


Why Did Google Wave Die?

Email is broken. In many ways. So are instant messaging and document collaboration. Google Wave was supposed to fix a number of these problems by making threaded and multi-user conversations easier to manage, and by introducing realtime chatting and collaboration into the mix. But Wave's failure is also a fantastic illustration of a great idea and brilliant technical implementation totally overpowered by some absolutely awful product design. Google's famously spartan approach to search was the fuel for their explosive growth in the early 2000s. While sites like MSN and Yahoo were getting more complex and portal-like, Google offered an absurdly simple alternative: enter your query and click the search button.

Somehow over the years Google has lost this simplicity in many of its products, with Google Wave as the paradigmatic example. Wave was an engineering marvel, and I'm quite certain its mix of synchronous and asynchronous functionality will be used to good effect in a number of other products, but the user interface was just dreadful. It made no sense and I couldn't really ever figure out how to use it--and I work in software for a living. Imagine my mom using it.

Ultimately, I think Google Wave suffered from three fatal product design flaws:

  1. Complicated user interface - it's kind of like an instant message client, except that you have to click something every time you want to add a new message. It's kind of like email, but if I archive a thread and someone else adds a new message to it, the thread appears back in my inbox. It's kind of like document collaboration, but doesn't have all the features of Google Docs, let alone MS Word.
  2. No integration with email / docs / chat - Wave promised to solve the problems inherent in email, instant messaging and document collaboration, but if Google wanted it to supersede these things (did they even want to?) they should've integrated it into GMail, GChat or Google Docs. I don't need yet another place to check messages, what I need is a better way to manage my existing communications. I often had to remind people over email or IM to check Google Wave for a message I sent them.
  3. Meatball Sundae - I've never read Seth Godin's book Meatball Sundae but I love the metaphor. A meatball sundae is "the unfortunate result of mixing two good ideas." Google Wave was a deep-fried meatball sundae. Was it email, instant messaging, document collaboration? It was all three, and yet it was none. The best products solve one problem brilliantly well. Google Wave tackled three problems and solved none of them.

Forget the Oil Spill

The internet has been abuzz the last couple of days about some admittedly clever viral videos from Old Spice (yes, that Old Spice). It got me thinking that there's been a lot less buzz lately about the BP oil spill, which is coming up on its 90-day birthday and shows no signs of slowing down. And now the presidential commission appointed to investigate the spill is recommending that the moratorium on deepwater drilling be lifted.

Are we all so distracted and ADD-addled that we've already forgotten the magnitude of this disaster? And while we're at it, what about improving the financial system--did we ever see that one through to its conclusion? Are there still wars going on in Iraq and Afghanistan?

So I wanted to see exactly how much we've lost interest in the oil spill vs. how distracted we are by funny deodorant videos. I think this chart of Twitter trends from Trendistic.com says it all:

And here is a sad chart of the decline in Twitter mentions of the oil spill in the last 30 days:


Notifications, Unread Items and Information Overload

Last week I wrote about the strategies Quora.com employs to engage its users and keep them coming back to the site. A big component of their strategy is the idea of notifications--the email and on-screen alerts the application uses to let you know that your attention is needed. Their notifications are tactful and largely welcome. Unfortunately, however, like many other tools in the software architect's tool chest, notifications can quickly cause insane levels of information overload when they're used without careful thought.

Take for instance the Facebook iPhone app. Every time I open it and navigate to the main menu screen, I have some notifications waiting for me (usually people commenting on one of my wall posts or something similar). I'm alerted to this fact by a little bar on the bottom of the screen highlighted in a different color. This much I'm okay with.

However, if I then choose to close the app at this point without explicitly viewing the notifications, the app icon now has a little red number superimposed on it, telling me how many notifications I didn't check. If you're anal like me, this is torture. I now have to go back into the app and view the notifications in order to get rid of that annoying little red number.

"Unread" counts in email and news readers like Google Reader are another good example. Again, because of my mild OCD, I never let my inbox contain any unread messages. I even click on messages I know to be spam just so that they don't keep notifying me of their unread status. Same goes for Google Reader. If I'm too busy to read everything and I have to skip some articles, I still have to mark them as read so I don't have to see that notification anymore. I've often thought that these applications should archive (or mark as read) any unread messages automatically after a certain amount of time goes by. If I haven't read an email in a few days, I'm probably not ever going to read it.
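To make that concrete, auto-archiving could be as simple as a cutoff date applied to unread items. A hypothetical sketch, assuming each message records when it arrived:

```python
from datetime import datetime, timedelta

def auto_archive(messages, now, max_age_days=7):
    """Mark messages that have sat unread past the cutoff as read and archived."""
    cutoff = now - timedelta(days=max_age_days)
    for msg in messages:
        if not msg["read"] and msg["received"] < cutoff:
            msg["read"] = True
            msg["archived"] = True
    return messages

# Two unread messages: one stale, one fresh.
now = datetime(2010, 8, 1)
inbox = [
    {"id": 1, "received": datetime(2010, 7, 20), "read": False, "archived": False},
    {"id": 2, "received": datetime(2010, 7, 31), "read": False, "archived": False},
]
auto_archive(inbox, now)  # message 1 gets archived; message 2 is left alone
```

A real client would want this behavior to be opt-in, for exactly the reasons discussed here: notifications (and their disappearance) should match the user's expectations.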

All of this information desperately begging for our attention leads to apathy at best and resentment at worst. It's like the boy who cried wolf. Eventually we're just going to tune it out.

I think the trick here is to think like the user before implementing things like this. Do I really want to receive more than one or two emails per day from a given application? Should notifications be persistent, or should they fade away over time? Should they be mandatory, requiring the user to take a certain action so that they go away? Or should they merely be indicative of an action that is optional? Should the notifications be opt-in or opt-out?

These are crucial decisions to make when creating software, decisions that could lead either to delight or disgust.


Are Derivatives Really That Complicated?

I should preface this post by saying I know next to nothing about finance. What I do know I've learned over the past two years or so by listening to NPR's Planet Money podcast and reading books like the one I'm currently engrossed in, Michael Lewis' The Big Short. Last year my friend Steve wrote about the "complexity" of derivatives, arguing very convincingly that the media were being lazy when they described the securities that nearly brought capitalism to its knees as hopelessly complex; these things weren't really that complicated after all.

In The Big Short, however, Lewis describes how many of the traders who were trying to short mortgage-backed securities described them as being “complex,” even though they had spent a lot of time researching them. When these traders first encountered these securities, they had no idea what they were looking at, but they took the time and had the smarts to research them and figure them out. However, even after that effort, they still called them “complex,” not because the concepts were difficult to understand, but because no matter how much they researched them, they could never really determine the quality of the underlying raw materials (mortgages) in any given security.

The Wall St. firms that created and sold the CDOs deliberately made them opaque so that it was difficult to determine the actual risk contained within. Part of the reason the firms loved selling CDOs was because they were able to take low-rated mortgage bonds they couldn’t otherwise sell and package them into higher-rated derivatives that they could sell. On “first generation” mortgage bonds, it might have been clear what the risks of investing in them would be, since the firms did publish stats such as average FICO score or % of no-doc loans in the bonds. But the firms sliced and packaged these bonds into CDOs many times over, such that it became very difficult for investors to get a real sense of the quality of the underlying raw materials in them. So then everyone just trusted the rating agencies.

It’s like trying to determine the quality of mass-produced ground beef. You may know that one particular cattle farmer’s practices are sustainable and humane, but once you grind up his meat and combine it with the meats from thousands of different farms and feed lots, and then portion that ground meat into little hamburger-sized patties, it becomes almost impossible to determine the quality of any individual hamburger. So then everyone just trusts the USDA ratings and goes on eating.

Read The “complexity” of derivatives « Steve Reads.

Quora Does What Every Website Wants To Do: Engage Users

I've been reading about Quora.com for some time now, but a few weeks ago I finally got an invite to participate in their closed beta. For those who don't know, Quora is a Q&A site with some social networking functionality built in to make it like Facebook or Twitter, but with much richer content. You can post and answer questions, vote responses up and down and comment on them, and follow a range of different topics, questions and people.

But the one thing Quora does exceedingly well is engage its users. I find myself wanting to visit the site every day. There are very few sites I do in fact visit every day, so when a new one comes up on my radar, it's worth thinking about a little more deeply. How does Quora keep me coming back?

First, they give me things to do when I get to the site. The first page I see when I log in is my "feed," essentially a list of questions and recent answers from the people and topics I'm following. The first thing I always do then is scan my feed and see if any interesting questions or answers have come up recently. If so, I click on them, read and vote on the responses, and consider whether I want to answer the question.

Another activity they ask of me is to classify unanswered questions. If someone enters a question without any topics, it shows up on my home page as an "Unorganized Question." If I click on it I can then easily add topics to the question, which benefits the community as a whole without being too bothersome for me to do.

Lastly, Quora has perfected the art of email notifications. Whereas Facebook sends me an email for every dumb little thing that needs my attention, Quora, as far as I can tell, only sends me emails in two specific circumstances:

  • Someone posts an answer to a question I am following (you can follow any question you see on the site by clicking the "Follow" link, unless you asked the question, in which case you follow it by default)
  • Someone sends you a private message

This means that the email load coming from Quora is low enough to keep it unobtrusive, but the emails themselves are of high enough value that I welcome them and will likely click on the links in them to come back to the site.
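Quora's restraint amounts to a whitelist of exactly two events. Here's a hypothetical sketch of that policy as a predicate--my guess at the rule set, not Quora's actual code:

```python
def should_email(event, user):
    """Email only for the two high-value events; everything else stays on-site."""
    if event["type"] == "new_answer":
        # You follow a question explicitly, or implicitly by having asked it.
        return event["question_id"] in user["followed_questions"]
    if event["type"] == "private_message":
        return event["recipient"] == user["id"]
    return False

user = {"id": 42, "followed_questions": {7, 9}}
print(should_email({"type": "new_answer", "question_id": 7}, user))  # True
print(should_email({"type": "upvote", "question_id": 7}, user))      # False
```

The design insight is the default-deny last line: new event types don't generate email unless someone deliberately adds them to the whitelist.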

User engagement is the "holy grail" of making websites profitable, and Quora has found it. It's all about giving the users activities to accomplish when they come to the site, as well as encouraging them to come back via infrequent but high value email notifications. If you'd like an invite so you can check this out for yourself, let me know by tweeting me @jamieforrest.


iTunes could not backup the iPhone "iPhone" because the backup session failed.

I got this error while syncing my iPhone 3G with iTunes 9.2 today:

iTunes could not backup the iPhone "iPhone" because the backup session failed.

Re-seating the USB cable and restarting iTunes did not help. A Google search revealed various fixes having to do with firewall settings or 3rd party application conflicts, but none applied to my situation.

Ultimately I solved the issue by running a manual backup of the iPhone by right-clicking (or control-clicking) the iPhone in the Devices section of iTunes and choosing "Back Up." Once that completed I was able to sync without getting the error.

UPDATE: According to the comments on this post, it may help if you first delete the old backup by choosing Preferences –> Devices –> Delete Backup. You should copy your old backup somewhere else before you do this. Your backup is stored in the Users/[username]/Library/Application Support/MobileSync/Backup folder on Mac, or C:\Documents and Settings\[username]\Application Data\Apple Computer\MobileSync\Backup on Windows.
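To make that copy step concrete, here's one way to do it in Python rather than dragging folders around in Finder or Explorer (the destination path below is just an example):

```python
import shutil
from pathlib import Path

def copy_backup(backup_dir, dest_dir):
    """Copy the MobileSync backup folder aside before deleting it in iTunes."""
    src = Path(backup_dir).expanduser()
    dest = Path(dest_dir).expanduser()
    if not src.exists():
        raise FileNotFoundError(f"No backup folder at {src}")
    shutil.copytree(src, dest)  # dest must not already exist
    return dest

# On a Mac, for example:
# copy_backup("~/Library/Application Support/MobileSync/Backup",
#             "~/Desktop/MobileSync-Backup-copy")
```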

UPDATE 2: According to the comments, Windows users may need to change their computer's time zone to fix this issue.

UPDATE 3: According to the comments, Windows users may also need to kill the AppleMobileBackup.exe process and restart iTunes.

UPDATE 4: According to the comments, Windows users may also need to run iTunes as an administrator: navigate to C:\Program Files\iTunes\, right-click iTunes.exe, and choose "Run as administrator" (reported on Vista with iOS 4.0.1 and iTunes 9.2.15).

When Twitter Goes Down, Babies Die

Twitter's uptime is generally over 99%. Every now and then they dip below that (less so now than in the past), and whenever they do, the internets freak out. You'd think the lives of children were at stake. Feeling the pressure of a million tech bloggers waving their fail whale flags wide and high, Twitter published a mea culpa yesterday that not only recognizes the "gravity" of the situation, but also promises some more fail for the foreseeable future:

Should Twitter have been ready? Record traffic and unprecedented spikes in activity are never simple to manage. However, we were well aware of the likely impact of the World Cup. What we didn't anticipate was some of the complexities that have been inherent in fixing and optimizing our systems before and during the event.

What's next? Over the next two weeks, we may perform relatively short planned maintenance on the site. During this time, the service will likely be taken down. We will not perform this work during World Cup games, and we will provide advance notification.

How magnanimous of them to schedule their downtime around the World Cup games! Could the world have survived without a few hours of vuvuzela tweets?

Okay, I'll give you that Twitter was somewhat important last year during Iran's Green Revolution, when, at the request of the U.S. Government, they actually delayed some planned maintenance in order to keep the site up.

But seriously, no maintenance windows during the World Cup? Are we all so addicted to the dopamine squirt from reading 140 character messages that we can't possibly enjoy some soccer matches without it?

If Twitter is so crucial to the world's infrastructure, then it needs to be an open format supported and maintained by a federation of the world's governments. If not, we don't have much right to complain when the service is down for a few hours here and there.

Steve Jobs as Presenter

Yesterday I followed the iPhone 4 announcement live on Twit.tv, which was rebroadcasting a bootleg audio feed from the WWDC keynote. I was amazed by how much passion and enthusiasm a frail Steve Jobs could convey even through this distorted audio. Though he's been accused of peppering his speeches with superfluous accolades like "incredible," and "awesome," there's really no one else out there who, through his presentation style, can make you care about things you didn't know you cared about. Before yesterday, I didn't know I cared so much about screen resolution, for instance, or video chat or 3-axis motion control. Now I care about them so much I want them in my next phone.

If Jobs weren't also one of the best product people in the world, this skill would be enough to make him a very successful man.


Twitter Channels Steve Jobs

Yesterday, Twitter announced that it would no longer be permitting third party ads in the timeline. It struck me how similar this felt to when Apple recently changed their developer agreement, prohibiting apps that were cross-compiled using third party tools. Let's compare. First, the juicy part of Twitter's announcement:

As our primary concern is the long-term health and value of the network, we have and will continue to forgo near-term revenue opportunities in the service of carefully metering the impact of Promoted Tweets on the user experience. It is critical that the core experience of real-time introductions and information is protected for the user and with an eye toward long-term success for all advertisers, users and the Twitter ecosystem. For this reason, aside from Promoted Tweets, we will not allow any third party to inject paid tweets into a timeline on any service that leverages the Twitter API. We are updating our Terms of Service to articulate clearly what we mean by this statement, and we encourage you to read the updated API Terms of Service to be released shortly.

Now, Steve Jobs' "Thoughts on Flash:"

Our motivation is simple – we want to provide the most advanced and innovative platform to our developers, and we want them to stand directly on the shoulders of this platform and create the best apps the world has ever seen. We want to continually enhance the platform so developers can create even more amazing, powerful, fun and useful applications. Everyone wins – we sell more devices because we have the best apps, developers reach a wider and wider audience and customer base, and users are continually delighted by the best and broadest selection of apps on any platform.

Without Jobs' outspoken stance on Flash, I'm not so sure Twitter would've had the gumption to make this kind of a decision, one that could potentially alienate such a large swath of their developer base. But I respect them for doing it. It's a gamble, but one I think they'll win.

I'm starting to see a pattern in which companies are coming down strongly in favor of user experience, even if it pisses off third-party developers. User experience should always be the primary concern, and developers should agree. I can see how some developers might read this as another "Fuck You" from Twitter, especially because announcements like this usually, and conveniently, favor the platform provider over the little guys in the ecosystem, but I think it's a move in the right direction. And Twitter can certainly afford to make these kinds of wagers when it has so much inertia in its user base.

via Twitter Blog: The Twitter Platform.

My (Very Brief) Facebook Hiatus

With all the privacy missteps that Facebook has taken of late, I decided to deactivate my account just to see what it would be like. I took this step knowing full well that Facebook lets you reactivate your account as if you never left, simply by logging in again. (Is this a feature or another indication that Facebook is doing whatever it can to hold onto your data?) Within a few hours of my deactivating, an old friend from high school emailed me that he'd just uploaded lots of pictures from our teenage years and was sad that he couldn't tag me in them. Another friend then saw these pictures, also noticed I'd gone missing, and proceeded to start a public Facebook group called "Jamie quit Facebook??? WTF, that sucks!!! BOO!"

Over the course of the evening, 10 people joined the group and left various comments like, "Quitters never win," "waah, i want pwivacy!," and "It's not like he didn't give us all plenty of warning and reasons." I enjoyed watching this and relishing the irony that I could view this completely public page even though my account wasn't active.

Several days later when I reactivated my account, I was glad to see that (a) all of the information in my profile had been wiped clean; (b) my friends list was still intact; and (c) all of my privacy settings were unchanged.

Some people would consider (a) an inconvenience, but I welcomed it, because in reinstating my Facebook account I came back with a new attitude. Instead of seeing Facebook as a protected space where I can share semi-private information with a self-selected group of friends, I now see it more like Twitter (and more like the Internet as a whole): a completely public space where you need to be careful about what you do and say, and actively monitor and manage what others do and say about you.

Ultimately I reactivated because I need to. I work in technology and I have to keep abreast of what's going on in that space. Facebook also drives a good amount of traffic to this blog, which I was sad to see disappear. Right here, right now, Facebook is just too powerful a force to opt out of.

What Makes Two Pieces of Content Similar?

Thinking more about how to dampen the web's echo chamber, I wrote a Python script that uses OpenCalais to come up with a "similarity score" for two web pages you feed it. The output I'm expecting is a basic "yes" or "no" that answers the question, "Are these two articles about the same thing?" OpenCalais analyzes text and determines a list of topics the text is about, along with a relevance score for each. My script takes this report for the two pages, finds all the topics that overlap, sums and weights the relevance scores, and comes up with an overall similarity index.
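The scoring step can be sketched roughly like this. To be clear, this is an illustration and not the actual script: the OpenCalais calls and response parsing are omitted, and I'm assuming the responses have already been reduced to topic-to-relevance dicts (the topic names and the threshold here are made up):

```python
def similarity_score(topics_a, topics_b):
    """Combine the topic reports for two pages.

    For every topic the pages share, add the product of the two
    relevance scores, so topics strongly relevant to both pages
    dominate the sum.
    """
    shared = set(topics_a) & set(topics_b)
    return sum(topics_a[t] * topics_b[t] for t in shared)

# Hypothetical topic -> relevance output for two articles
page1 = {"Apple": 0.9, "iPad": 0.8, "Hewlett-Packard": 0.2}
page2 = {"Apple": 0.7, "iPad": 0.9, "3G": 0.5}

score = similarity_score(page1, page2)  # 0.9*0.7 + 0.8*0.9 ≈ 1.35
print("Similar" if score > 1.0 else "Not similar")  # prints "Similar"
```

Weighting by the product of relevances is just one reasonable choice; summing the minimums or averaging would also work, and the yes/no threshold has to be tuned by eye against real article pairs.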

This works fairly well, but produces some false positives for articles that are similar but not about exactly the same thing. For instance, this article about HP buying Palm comes up as moderately similar to this Mashable review of the iPad 3G, probably because the HP article includes a snippet about the iPad below the fold. To be fair, the script does report that they're not closely related, and in my mind they are distantly related, but the iPad snippet on the HP page isn't even a review of the iPad 3G the way the Mashable article is.

So what refinements can I include that might be able to avoid false positives like this? It occurred to me that the headlines of closely similar articles are also closely similar. One idea I had is to run the headlines through OpenCalais as well, and sum and weight those along with the contextual analysis of the articles themselves. This might help to avoid the case above, but if one article's headline is, hypothetically, "Apple iPad 3G Jailbreak Released" and the other's is "Apple Releases iPad 3G," OpenCalais might not be able to distinguish that these are in fact not closely similar. I could also skip OpenCalais and just do a count of overlapping words on the two headlines, but that probably won't work in this case either.

But what if I apply that method to the whole of the article? Count the number of overlapping words in the two articles, with the exception of generic words like "this," "that," "is," "was," etc., and then use this analysis to increase or decrease the similarity score from above. This may work.
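A minimal sketch of that whole-article overlap, with a toy stop list standing in for a real one; the resulting 0-to-1 score could then be used to nudge the topic-based similarity up or down:

```python
import re

# A tiny illustrative stop list; a real one would be much longer
STOPWORDS = {"this", "that", "is", "was", "the", "a", "an", "of",
             "and", "to", "in", "it", "for", "on"}

def content_words(text):
    """Lowercase the text, split on non-alphanumerics, drop stopwords."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {w for w in words if w not in STOPWORDS}

def word_overlap(text_a, text_b):
    """Jaccard overlap of the two articles' content words (0 to 1)."""
    a, b = content_words(text_a), content_words(text_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

a = "HP is buying Palm for 1.2 billion dollars"
b = "Mashable reviews the new Apple iPad 3G"
print(word_overlap(a, b))  # prints 0.0 -- no content words in common
```

Using Jaccard overlap (shared words over total distinct words) rather than a raw count keeps the score comparable across articles of very different lengths.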

But all of this really raises the question: how do our minds do this, and so easily too? It's an incredibly amazing, and completely ordinary, skill.

Why Your Twitter Profile is LESS Public Than Your Facebook Profile

Here is a pared-down version of a recent conversation on the Facebook wall of a friend of mine:

Friend: "Anyone want to try out Orkut with me, as an alternative to the awfulness that Facebook is becoming?"

Me: "I want to leave Facebook, but I don't know if Orkut is the answer. Is Friendster still around? Or maybe Twitter + Foursquare is enough."

Friend: "One thing I do find funny about this whole Facebook dustup is that people are like, 'Facebook is making all my stuff public without my permission! That is so lame! I am going to go use Twitter, which is 100% public.'"

Me: "Right, because at least Twitter doesn't give me a million different privacy checkboxes and confusing howtos to deal with. Also Twitter DMs are not public."

Thinking about it more, the whole Facebook privacy debacle actually makes my Twitter profile seem less public than my Facebook profile. Or, to clarify, it is much easier to figure out what is public and what is not on Twitter.

I know that every single status update I make on Twitter is public, so I know not to put anything there that I don't want to be public. Or I can choose a setting in my account that locks all of my Tweets if I want, in which case only those I approve can follow me. As I said above, Twitter DMs are completely private. No room for grey area.

The problem with Facebook isn't that some things are public and some are not. The problem is that, as far as I'm concerned, it is impossible for me to figure out what is and isn't public. It's also impossible for me to figure out whether the changes I've made to my privacy settings actually do what I'm expecting them to do.

That makes Facebook more dangerously public than Twitter. And that is increasingly making me want to deactivate my Facebook account. I just need to figure out how to properly alert friends on Facebook so they can contact me in other ways if necessary.

Top 10 Most Annoying Brand Names

One of my big pet peeves is annoying brand names. But the worst kind are the portmanteau-like names that sound like they were created by an algorithm that mashes up two fanciful words to create some new pseudo-fanciful word. Here's my top (or bottom?) ten most annoying brand names of this ilk:

10. Verizon - mashup of horizon and...vermin?

9. Truvia - no thanks, I'll take sugar

8. Affinia - at least it's not Infinia

7. Intrinsity - the quality or character of being intrinsic?

6. Infiniti - I'm sensing a pattern here

5. Advair - Advanced Air!

4. Altria - wow, this cigarette company is so altruistic now!

3. Accenture - Andersen Consulting paid $100 million for someone to think up this name

2. Abilify - there's already a word for this: 'enable'

And the number 1 most annoying brand name is....

1. Xfinity - an energy drink? a porn site? NO! eXtreme infinity!!