Is Swift Ready for Prime Time?

To long-time Objective-C developers – especially those with an interest in modern programming languages – Swift is a very welcome and exciting step forward. At the same time, it can be frustrating at times due to the (current) state of the developer tools.

This is from Duolingo's thorough and fair assessment of the pros and cons of building a production app in Swift. I myself have been burned too many times by Swift in recent months (having to rewrite classes in Objective-C when things didn't go as planned) to consider it production-ready. But I'm glad to see that Duolingo is having success with it.

James Mickens on the sorry state of web technologies

People think that Web browsers are elegant computation platforms, and Web pages are light, fluffy things that you can edit in Notepad as you trade ironic comments with your friends in the coffee shop. Nothing could be further from the truth. A modern Web page is a catastrophe. It’s like a scene from one of those apocalyptic medieval paintings that depicts what would happen if Galactus arrived: people are tumbling into fiery crevasses and lamenting various lamentable things and hanging from playground equipment that would not pass OSHA safety checks. This kind of stuff is exactly what you’ll see if you look at the HTML, CSS, and JavaScript in a modern Web page. Of course, no human can truly “look” at this content, because a Web page is now like V’Ger from the first “Star Trek” movie, a piece of technology that we once understood but can no longer fathom, a thrashing leviathan of code and markup written by people so untrustworthy that they’re not even third parties, they’re fifth parties who weren’t even INVITED to the party, but who showed up anyways because the hippies got it right and free love or whatever.

Via Steve Laniel, this entertaining harangue against modern web development is a must-read for all web developers.


Where Does Bitcoin Get Its Value?

Update: Based on some helpful comments on /r/bitcoin, I've edited my original post to clarify that Bitcoin derives its value only in part from the costs required to produce it. Other things contribute to its value too, but without that production cost it would be worthless: the cost is not sufficient, but it is necessary.

Some fellow engineers I work with have been mining and trading Bitcoin since well before the mainstream hype of the last few months, and in talking to them I've become increasingly interested in it as well. It is one of the more elegant technological ideas to come along in a long time, and its greater economic, sociological, and political implications are also fascinating to me.

But when I first heard about it, I was hesitant to treat it seriously based on one fundamental doubt: how could a bunch of numbers spit out by a computer have intrinsic value in the same way that gold can? I understood how Bitcoin could have extrinsic value, based on things like trust and hype, but if that were all it were based on, why would Bitcoin be worth more than any other arbitrary currency one could create out of thin air?

And then I spent some time learning exactly how Bitcoin mining works, and discovered that there is, in fact, intrinsic value to the currency. (Funny how ignorance can lead you to dismiss things like that!) In order for Bitcoins to be created, a computer must solve a difficult math problem by guessing numbers by brute force. This requires a running computer (these days, a powerful machine built specifically for this type of problem), which in turn requires electricity, which was most likely generated from fossil fuels or nuclear power. So, in a way, you could say that the value of Bitcoins is at least partially derived from the fuel used to create the energy needed to power the computers that mine them.
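
Here's a toy sketch of that brute-force loop (an illustration only, not the real protocol: actual Bitcoin mining double-SHA-256 hashes an 80-byte block header against a 256-bit target, but the guessing game is the same idea):

```javascript
// Toy proof-of-work: find a nonce such that SHA-256(data + nonce)
// starts with `difficulty` hex zeros. Each extra zero multiplies the
// expected work by 16 -- there is no shortcut besides guessing.
const crypto = require('crypto');

function mine(blockData, difficulty) {
  const prefix = '0'.repeat(difficulty);
  let nonce = 0;
  for (;;) {
    const hash = crypto.createHash('sha256')
      .update(blockData + nonce)
      .digest('hex');
    if (hash.startsWith(prefix)) return { nonce, hash };
    nonce++; // nothing smarter to do than guess again
  }
}

console.log(mine('alice pays bob 5 BTC', 5));
```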

But, you ask, what happens as computers get more powerful and more energy efficient? Shouldn't Bitcoins get easier and easier to mine, dropping the amount of energy required to mine them and thereby decreasing their intrinsic value? It turns out that part of the ingenious and elegant design of Bitcoin prevents exactly this. The difficulty of the math problem the mining machines have to solve changes dynamically over time. The system as a whole aims to keep the rate at which these problems are solved at roughly one every 10 minutes. If the computers start solving the problems faster, the difficulty across the system is increased; if they start solving them more slowly, the difficulty is decreased.
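
The feedback loop is easy to sketch (in reality Bitcoin retargets every 2016 blocks, roughly every two weeks at the 10-minute pace, rather than continuously; the numbers here are made up):

```javascript
// Difficulty retargeting: scale difficulty by how far the observed
// block time has drifted from the 10-minute target.
const TARGET_SECONDS_PER_BLOCK = 10 * 60;

function retarget(difficulty, observedSecondsPerBlock) {
  return difficulty * (TARGET_SECONDS_PER_BLOCK / observedSecondsPerBlock);
}

console.log(retarget(1000, 300));  // blocks solved in 5 min  -> 2000 (harder)
console.log(retarget(1000, 1200)); // blocks solved in 20 min -> 500 (easier)
```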

To be sure, there are factors other than trust and hype that contribute to Bitcoin's value. It shares many characteristics with gold: durability, divisibility, combinability, homogeneity, and scarcity. All of these factor together, along with the sociological stuff, to give Bitcoin its total value. But if it were possible to mine Bitcoins without expending resources, I believe their value would fall to zero. (There is another safeguard against this built into the technology: the total number of Bitcoins is capped at 21 million, so even if down the road it were theoretically possible to mine Bitcoins for free, only 21 million total could ever be harvested. That hard cap also contributes to Bitcoin's scarcity, and therefore to its value.)

And so, it is this enforced degree of difficulty, baked into every single Bitcoin that will ever be mined, that ensures there will always be some level of effort required, and therefore some baseline value in the coins. Without that difficulty, computers could simply pluck Bitcoins out of thin air, and despite all the currency's other valuable characteristics, it would in all likelihood be worth nothing.


Everything Sucks

Massive Electronic Surveillance

ECHELON is a code word for an automated global interception system operated by the intelligence agencies of the United States, the United Kingdom, Canada, Australia, and New Zealand, and led by the National Security Agency (NSA). I've seen estimates that ECHELON intercepts as many as 3 billion communications every day, including phone calls, e-mail messages, Internet downloads, satellite transmissions, and so on. The system gathers all of these transmissions indiscriminately, then sorts and distills the information through artificial intelligence programs. (Bruce Schneier, Secrets and Lies, 2nd ed., 2004)

Why mobile web apps are slow

Drew Crawford, in a long but well-researched essay on mobile app performance:

Think about iPhone 4S web development as [an]...environment that runs at 1/50th the speed of its desktop counterpart.  Per the benchmarks, you incur a 10x performance penalty for being ARM, and another 5x performance penalty for being JavaScript. Now weigh the pros and cons of working in a non-JavaScript environment that is merely 10x slower than the desktop.

Famous Last Tweets

About a year ago, my friend and colleague Michael McWatters tweeted, "Oh no, if I die at this moment, my last tweet will have been about Andrew Breitbart…must think of something else. Beauty, science, altruism!" I replied, "@mmcwatters That would be an interesting site to make: the last tweets of famous people." In the weeks and months that ensued, we made good on the idea and built the site, which Michael brilliantly named "The Tweet Hereafter." As our lives become increasingly transparent on sites like Twitter and Facebook, we leave marks on the Internet that can't be erased once we die.

In March 2012, conservative blowhard Andrew Breitbart famously sent an apologetic tweet less than an hour before he died of a heart attack. And now, a little less than a year later, beloved Olympian Oscar Pistorius has been arrested on suspicion of murdering his girlfriend Reeva Steenkamp, who just yesterday tweeted excitedly about her plans for Valentine's Day.

We've been collecting tweets like this for over a year and have finally decided to publicize the site. The site is certainly morbid, sometimes interesting, quite often meaningless. But we hope it makes you think a little bit.

We Need to Break More Rules

A recent episode of the Planet Money podcast profiled Thomas Peterffy, one of the first people to experiment with high-frequency trading and succeed at it. The episode told the story of how he was doing algorithmic trading before any of the stock exchanges supported electronic trading, and before NASDAQ even existed. So how did he do it? That's the fascinating part. He made his money building a system that could assign a fair market price to stock options. He then compared these values to what the options were actually trading for, and arbitraged the difference. Back in the late 1970s when he first started, he would print out the numbers and bring them to the trading floor in a huge binder. When the stock exchange banned him from bringing the binder, he stuffed the papers into every pocket his suit had.

Then Peterffy got himself a system called Quotron, a computerized service that delivered stock prices to brokers (it was a replacement for the widely-used ticker tape system). If he'd used the system the way it was intended, he would've read the quotes as they came in on the Quotron, manually input them into his algorithm, run the numbers, and cashed in. But that wouldn't have been that much better than just using ticker tape, and the fact that he had a computerized system meant the data was in there somewhere, in digital form. If he could figure out how to retrieve it he could pipe it into his system and save a crucial, time-consuming step.

Nowadays if we wanted to do something similar, we might look into whether the Quotron had an API, and if it did we'd query that for the information. If it didn't have an API, well, we might look for another system that did.

But Quotron had no such ability. So he did what any hacker worth his salt would do. He broke out his oscilloscope, cut the wires on the Quotron, reverse-engineered the data signal, and patched it into his system. And you think screen-scraping is hard?

When NASDAQ, the first all-electronic stock exchange, came online, he faced a similar limitation. Brokers could trade directly on the exchange via computer. This was no doubt a huge breakthrough, but there was still no way for his system to make the trades automatically. So, again, he busted out his oscilloscope and patched his way into NASDAQ.

Eventually the folks at NASDAQ caught wind of this, visited him at his office, and reminded him that the exchange's terms of use dictated that trades be made via keyboard input, not by splicing into the data feed. They gave him a week to comply. So what did Peterffy do? He built a robot to type the trades out on the keyboard. Of course he did. When the NASDAQ official returned a week later, all he could do was stand agape, in awe of what Peterffy had done.

We developers could learn from Peterffy. The ease of software engineering has made most of us too complacent. When Twitter's API terms change, we complain about it for a few days, and then change our business models to suit the new rules. But the real innovation, the really interesting stuff, the way we'll make $5.4 billion like Peterffy did, is by bending the rules and building systems that give us a leg up on the competition, or, better yet, improve people's lives.

To be sure, there are lots of hackers on the fringes of legality doing very interesting things, but the rest of us are somehow content to toe the line. We shouldn't do anything that's illegal, but we should get close. Innovation comes from spurning the status quo, not complying with it. It's time for people who know how to build things to bend the rules a little and see what comes out the other side.

(The podcast was based on Peterffy's story as told in the book Automate This: How Algorithms Came to Rule Our World.)

Google Maps Bookmarklet Lets You Map Any Address on a Page

How often have you been on a site where you see an address but no map, and maybe not even a link to a map? I find this very annoying, so I created a little bookmarklet that solves the problem. To use it, just highlight an address on a page and click the bookmarklet. You'll be taken directly to Google Maps for that address. Easy enough. Here's the bookmarklet. To install it, just drag it up to your bookmarks bar!

Map It
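
A minimal version of the idea looks something like this (spread across multiple lines for readability; a real bookmarklet gets collapsed onto a single javascript: line when saved as a bookmark):

```javascript
javascript:(function () {
  // Grab whatever text the user has highlighted on the page.
  var address = window.getSelection().toString().trim();
  if (address) {
    // Hand it to Google Maps as a query, in a new tab/window.
    window.open('https://maps.google.com/maps?q=' + encodeURIComponent(address));
  } else {
    alert('Highlight an address first!');
  }
})();
```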

Rediscovered Images from 9/11

On September 11, 2001, my wife and I were woken at about 9:00 by her mother, who told us to turn on CNN. We were newlyweds, living in a studio apartment on the Lower East Side. As soon as we saw the burning towers on TV, we left our apartment and headed down to the street. Looking southwest from Grand and Henry, we had a direct view of the World Trade Center. We stood for a while and watched in shared horror as the towers burned, and then fell. Apparently, I had my camera with me and was taking pictures--a fact that the enormity of the events had erased from my memory.

But I recently found the pictures I took that day, buried in a box in my house, and seeing them again took my breath away. Here they are, for posterity. (Click on any image to get the full-sized scan.)


Just Say No to Feature Creep: Xcode Edition

One of the hardest things for any software designer to do is to decide not to implement a feature. Many software projects have been delayed or even derailed by feature creep, or the tendency to widen the scope of a project during development. But in many cases, features that seem like "must-haves" during development can be deferred to later phases of development, or cut completely. Perhaps the paradigmatic example of this is the original iPhone OS's lack of cut, copy and paste. How could Apple have omitted such vital features? It didn't seem to hurt sales of the iPhone though.

Today I ran into another example, also from Apple. In Xcode, you can switch from a header file to its corresponding implementation file (and back) using the keyboard shortcut Command-Control-Arrow (any arrow). This is a really nice way of navigating back and forth while you're creating new instance variables and methods for your classes. When you navigate this way, however, the project browser at the left doesn't update its highlight to indicate that you're viewing a different file. Is this a bug? Probably not. More likely it's the designers of Xcode deciding to rein in feature creep so that they can actually ship the product.


It's so damn tempting to make sure every little bug is fixed and every little corner case is accounted for before you release your software. But, as they say, perfect is the enemy of the good. It's crucial to know when something is good enough so you can ship it as soon as possible. As for cut, copy, and paste, Apple finally introduced the feature in the third version of the iPhone's operating system. By then they had already sold millions of phones to customers who had decided they could live without that crucial feature.

No, Graphic Designers Aren't Ruining The Web

I woke up today to this provocative article in The Guardian about how graphic designers are ruining the web. John Naughton's main argument seems to be that graphic design adds unnecessary bulk to websites, wasting bandwidth. Naughton is absolutely right that page sizes have increased over the last two decades of the web's existence. He is also right that this is a problem. However, he describes the problem as a "waste of bandwidth." Last I checked, bandwidth is an effectively unlimited resource (unless maybe you extrapolate bandwidth to barrels of oil). The bigger problem is that more elements on a page (and bigger individual elements) slow down page load times and frustrate the user. If Naughton is saying that people who make websites should work to reduce the number and size of the elements on their pages, I completely agree.

But it does not then follow that websites need to be ugly (he holds up Peter Norvig's homepage as an example of an underdesigned site that is compelling for its content, if not its look and feel). Highly-designed websites need not be bulky. The BBC News homepage may send 165 resources on every request, but that doesn't mean all designed sites do: a designed site can be lean and mean, requiring roughly 50% fewer requests than BBC News while still offering, I would argue, a more user-friendly way to access information than Norvig's site.

And we could improve things even more than that. We can combine and minify JavaScript and CSS files. We can reduce the number and sizes of images on each page. Many requests on big sites like these are to third-party tracking pixels and JavaScript files. How about we agree to pay for the services and content we use on the web so we don't have to deal with all this bullshit marketing crap? Graphic design is not the cause of all this bulk; growing user bandwidth and marketers are more to blame.
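
To make the first of those fixes concrete, even a trivial build step collapses several script requests into one (the file names here are invented; a real pipeline would also minify the result with a tool like UglifyJS):

```javascript
// Concatenate all page scripts into a single bundle so the browser
// makes one HTTP request instead of several.
const fs = require('fs');

const sources = ['jquery.plugins.js', 'analytics.js', 'app.js'];
const bundle = sources
  .map(f => fs.readFileSync(f, 'utf8'))
  .join(';\n'); // defensive semicolon so files can't run into each other

fs.writeFileSync('bundle.js', bundle);
```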

I'll agree that some underdesigned sites are excellent precisely because they are underdesigned. But if Apple has taught us anything over the past decade, it is that things can be designed without being complicated and bulky. And that is the direction I'd like to see the web go in. That way we get to have our cake and eat it too.


The 5 Worst Practices of the Mobile Web

My friend Michael McWatters tweeted his frustration today that there is no way to change your Twitter password on their mobile site. I've butted up against this issue in the past, and the fact that you can't even switch between the mobile and full versions of the site is immensely annoying (in fact, there isn't even a footer on the mobile site!). With smartphone penetration growing ever higher, it's increasingly important for companies not just to build mobile sites, but to build them well. Mobile sites can no longer play second fiddle to their desktop brethren. Over the past few months I've become increasingly sensitive to, and bugged by, the degree to which so many mobile sites are badly implemented. With that in mind, here are my 5 "worst" practices of the mobile web.

  1. Don't give users the choice of using the full site - not letting users choose to use the full site on their mobile device is presumptuous at best, and crippling at worst. Just because the screen is small doesn't mean you don't want to be able to access all of a site's features in a pinch. On the iPhone anyway, browsing a full website is often very tolerable and should at least be an option for users. This is related to #2, which is...
  2. Don't cripple your mobile site - while it may be true that on a dumb phone you likely do not need or want to access all of a site's features on the go, on a smartphone you often do. A mobile site no longer needs to be a list of the 10 most visited pages on a site. Let's start building mobile sites that allow access to some more advanced features like changing your password.
  3. Show an interstitial ad for your mobile app - have you ever clicked on a link on your phone only to be brought to an interstitial ad for a site's mobile app instead of the article you were trying to read? And how many of those times have you actually gone to download the app, instead of just closing out the ad and digging for the article you wanted?
  4. Don't redirect from your mobile domain to the full site on a desktop browser - many sites with mobile domains will redirect you there using browser detection. But many of those do not do the reverse (i.e., visiting the mobile site in a desktop browser doesn't redirect back to the full version). Being forced to view a mobile site in a desktop browser is torture.
  5. Redirect to your mobile domain, but not the specific page - all this redirecting has its place, but it's so easy to get wrong. On many occasions I have clicked on a link on my phone, gotten redirected to a mobile domain, and instead of landing on the article I was trying to read, been dumped on the homepage of the mobile site. So frustrating! (A sketch of getting these redirects right follows this list.)
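
Worst practices #1, #4, and #5 all come down to the same redirect logic: detect the device, preserve the path, and give the user a way out. Here's a minimal sketch as Express-style middleware (the host names, user-agent regex, and ?full=1 escape hatch are placeholders of my own invention, not anyone's production code):

```javascript
const express = require('express');
const app = express();

const MOBILE_UA = /iPhone|iPod|BlackBerry|Android.+Mobile/i;

app.use((req, res, next) => {
  const isMobile = MOBILE_UA.test(req.get('User-Agent') || '');
  const onMobileHost = req.hostname === 'm.example.com';

  // #5: keep req.originalUrl so /articles/42 lands on the same article,
  // not the mobile homepage. #1: honor ?full=1 as an opt-out.
  if (isMobile && !onMobileHost && req.query.full !== '1') {
    return res.redirect(302, 'https://m.example.com' + req.originalUrl);
  }
  // #4: a desktop browser on the mobile host goes back to the full site.
  if (!isMobile && onMobileHost) {
    return res.redirect(302, 'https://www.example.com' + req.originalUrl);
  }
  next();
});

app.listen(3000);
```

Note the 302s: these redirects depend on who is asking, so a cacheable 301 would be wrong here.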

The mobile web is certainly in its infancy, but that's no excuse for giving users such broken experiences. It's 2011, and it's imperative that mobile sites be just as beautiful, simple, and elegant as the devices used to navigate them. If you have to choose between offering a mobile site that suffers from any of the worst practices listed above and having no mobile site at all, choose the latter.


This Piece of Technical Writing Has Been Written By Me

In my role as a business analyst at a software development shop I see a lot of technical writing, much of it terrible. For some reason, people whose job it is to be precise and logical often fail to do so when the language of expression is English, rather than Java. While the problems in technical writing are varied, the offense I most often see is overuse of the passive voice. For those who don't remember their junior high school grammar, passive voice is a grammatical construct in which the object of a sentence is repositioned as its subject. "Tom throws the ball" is active voice, while "The ball is thrown by Tom" is passive. The use of passive voice in itself is not grammatically incorrect, but it often weakens the clarity of the writing by obscuring who or what is doing the action in the sentence.

Technical writing is a veritable breeding ground for passive voice proliferation, in many cases because the actors in technical writing are not tangible. The actors are software code, or systems, or networks. My phone today popped up an alert that said, "The server cannot be reached." Who exactly is the one not reaching the server? Is it the phone? Is it the app I was running? Is it me?

But just as a writer would avoid passive voice in "normal" English prose, so too should a technical writer avoid it in his work. Phrasing technical ideas in the passive voice dampens the agency of the thing doing the action, making it seem unfamiliar and disembodied. Technology does things. To render technology in the passive voice is to distort its power to create change.

This is especially evident when technical writing refers to error conditions, as in the case of the alert above. It's almost as if the authors of the software were deflecting blame away from themselves with the message, "The server cannot be reached." They could just as easily have said, "It's not our fault that you can't access this page. Talk to the dudes who run the server." (People in IT love to blame the other guy, but that's a story for a different post.)

It's never that difficult to clean up language like this in one's technical writing, but it often requires ascribing some degree of agency to the technology. Instead of "The server cannot be reached," one could write, "The application failed to reach the server," or, "The application failed to connect to the server." If English had a better indefinite subject pronoun, we could even write something like, "One cannot reach the server at this time."

There are any number of solutions to the problem of passive voice in technical writing. The main thing is to be aware of the easy pitfall, and to think about technology more as an agent of change than as some hidden force behind the things we observe.

TV Zero

My family and I haven't watched "TV" in weeks. Granted, we don't have cable (we use rabbit ears and a digital-to-analog converter box), but that's not really the reason we haven't been watching. The real reason is that Netflix instant streaming has changed our lives. With the sheer volume of quality content that Netflix has (as well as other online video sites like Hulu), we are now at the point where we don't really need to watch actual television. We are getting close to a point I like to call "TV Zero."

By "TV Zero," I don't mean turning off all your screens and moving to Montana. I simply mean disconnecting from television as we know it (scheduled programs grouped into broadcast networks). I truly believe that, no matter how much the cable companies and networks drag their feet over the next few years, it's just a matter of time before all programming formerly available on cable or over-the-air broadcast will be available on the internet. The experience is so much better.

For one thing, video over the internet is truly demand-based. I can watch any episode I want, at any time I want. For another, finding content is far easier, and has far more potential, than under the current cable TV model. Netflix can recommend shows I may never have heard of, based on what it already knows about my consumption habits. The array of content available is also more vast--services like Netflix can offer content providers' back catalogs at much lower incremental cost than, say, a cable company. In fact, if you think about it, it's kind of shocking that after 15 years of the "commercial internet" we're still only in the early stages of this.

And then there's all the recent buzz about Apple making a "smart TV." If the rumors are true (and I believe they are, for the good reasons outlined here), our culture's movement toward "TV Zero" could accelerate tremendously. The potential for disruption and innovation in this space is huge, and in my opinion the change is inevitable, and there's no company in a better position to lead it than Apple. But if Apple won't do it, someone else will. (Amazon? Google?)

One thing is certain, though: the cable companies will not go down without a lot of kicking and screaming. Unless someone in their ranks realizes the inevitability of this change, and figures out a way to profit madly from it.


Minimizing Agony, Maximizing Pageviews

On Wednesday at the Launch Conference, travel search engine Hipmunk presented a new mobile version of their web app. But that's not what I want to talk about. I want to talk about Hipmunk's general approach to solving the problem of airfare search, and how it might be applied to other problems. The genius of Hipmunk is in their "agony" algorithm, grounded in the key insight that when people search for airfares, price and departure time are rarely the only considerations. What people really want to know is how agonizing the trip will be, measured as a combination of price, duration, and number of layovers. So Hipmunk sorts your search results by this "agony" score (least agonizing first, of course). Simple. Brilliant. There's so much agony in the world; what else could this model be applied to?
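
As a sketch, the model is just a weighted sum (the weights and numbers here are my own illustration, not Hipmunk's actual algorithm):

```javascript
// "Agony" as a weighted combination of price, duration, and layovers.
// Tuning the weights is where the real craft would be.
const flights = [
  { id: 'A', price: 320, hours: 6.5, layovers: 1 },
  { id: 'B', price: 450, hours: 4.0, layovers: 0 },
  { id: 'C', price: 280, hours: 11.0, layovers: 2 },
];

const WEIGHTS = { price: 1, perHour: 40, perLayover: 60 };

const agony = f =>
  f.price * WEIGHTS.price +
  f.hours * WEIGHTS.perHour +
  f.layovers * WEIGHTS.perLayover;

// Least agonizing trips first.
flights.sort((a, b) => agony(a) - agony(b));
console.log(flights.map(f => `${f.id}: ${agony(f)}`));
// -> [ 'B: 610', 'A: 640', 'C: 840' ]
```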

The first thing that jumps to mind is turn-by-turn directions. Most navigation apps provide routes that optimize for distance or time, and in some cases by real-time traffic patterns. But there are a lot of other factors that can contribute to one's agony while driving. For instance, given the choice, I'd much rather drive a scenic route than an interstate, but likely only if the scenic route isn't orders of magnitude more time-consuming. Or maybe I'd like to drive a route with better food options than Shoney's and Roy Rogers. Transit directions could also benefit from applying this model. I'd much rather take a trip that involved a transfer if the two subways were less crowded than the one, provided the trip duration wasn't significantly longer.

Another great application of the "agony" model would be a site that helps you decide whether to buy something online or at a nearby store. The algorithm could factor in a combination of the item cost, shipping cost, shipping duration, and return policy of the online option, and compare it to the item cost and travel distance for a local store that carries the item, as well as the real-time availability of that item in the store's inventory (a problem some companies are already working on).

Sorting by "agony" factor is a powerful idea, and one that is quickly letting Hipmunk soar to the top of the travel search business. What other problems could you apply this model to?


Mapping Transit Delays With Ushahidi

MTA Delays Crowdmap

I woke up at 5 this morning to the news that the two subway lines in my neighborhood were still not running, more than 36 hours after the "Boxing Day Blizzard" had begun in New York City. The question then was, well, what is running?

The awful MTA site wasn't much help, especially with regard to the buses. It offered cryptic and non-specific messages like:

Due to ongoing snow related conditions, all MTA bus express services are running with system wide delays. There is no limited stop bus service in all boroughs.

There was no indication of how any one specific bus route was faring.

My neighborhood's Yahoo group was abuzz with conversations about which trains and buses were and weren't running, but the information was unorganized and freeform, because it was happening over email.

And while the MTA probably had a good internal grasp of which lines were having issues, they were not doing a good job of disseminating that information. Ideally, their website would have provided real-time, geo-located incident reports so that any commuter could look at a map and quickly determine the best route to wherever they had to go. Even better would have been, as my friend Michael McWatters suggested, a trip planner that could route you away from the suspended lines and onto the freely-moving ones. But, at the very least, even a little more detail would've been nice.

So this morning I thought to myself: this is exactly what crowdsourcing is good at. And I remembered hearing about a mapping tool called Ushahidi, which was put to good use during the crisis that followed the Haitian earthquake back in January. Indeed, crisis mapping is a very powerful idea, and Ushahidi is leading the charge with their open source application, which is free to download and deploy, as well as with Crowdmap, their hosted version of Ushahidi that is also, surprisingly, free.

In the span of about an hour, I put up a site using Crowdmap. I entered all the subway service changes from the MTA site and told a few people about it. It got a little bit of Twitter buzz, but only one person other than me submitted a report. I think I was a little late to the game (I should've set it up on Sunday), but, it turns out, the tool also has a few shortcomings specific to this particular use case.

First, the tool was built for incidents whose geography is best described as a point (a latitude/longitude coordinate pair). But transit delays are best described as lines: when service is disrupted, there is a starting coordinate and an ending coordinate for that event. Ushahidi had no good way of representing this, so I ended up just putting in two points for every incident. It was kind of a hack, and it was also misleading--the issue itself spanned an entire stretch of subway or bus line, not just the endpoints. I imagine that anyone who submitted a report would have run into the same issue, and I also suspect there are some incidents that are best described by a polygon rather than a point or a line.
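
In GeoJSON terms (Ushahidi doesn't use GeoJSON natively; this just illustrates the mismatch, and the coordinates are made up), what I wanted to record versus what I could record looks like this:

```javascript
// What a transit delay really is: a span along the route.
const delayAsLine = {
  type: 'Feature',
  properties: { route: 'F train', status: 'suspended' },
  geometry: {
    type: 'LineString', // start to end of the disruption
    coordinates: [[-73.99, 40.72], [-73.83, 40.71]],
  },
};

// The workaround the tool forced: two disconnected point incidents.
const delayAsPoints = delayAsLine.geometry.coordinates.map(coords => ({
  type: 'Feature',
  properties: { route: 'F train', status: 'suspended (endpoint only)' },
  geometry: { type: 'Point', coordinates: coords },
}));
```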

Second, just as incidents are pinned to points in geographic space rather than lines, they are also pinned to points in time rather than durations. Transit delays have a finite duration (even if that duration isn't known up front). I would love it if you could set incidents to expire after a certain amount of time (24 hours, maybe?) rather than requiring an admin to go back into the system and edit or delete the incident. That overhead can be prohibitive.

Another issue is that it's somewhat difficult to submit reports. You have to visit the website and submit a form. Actually that's not really true--Ushahidi supports reporting via Twitter hashtags, SMS, or mobile apps (though there isn't an iOS app yet). These are decent options, but you don't really get the good geo-location data this way. It's probably only a matter of time before the mobile options for Ushahidi reporting get really good, but for now it's a bit clunky.

Despite these issues, though, it's really interesting to see how far this kind of technology has come. It's also interesting to think about how many different mature platforms Ushahidi is built on: Linux, Apache, MySQL, PHP, the Google Maps API, the Twitter API, SMS, email, RSS, and probably many others. It's pretty staggering when you think about it, and all I really had to do to set it up was press a "submit" button on a web page.

Even though the Haiti earthquake was Ushahidi's big moment in the spotlight, I don't think we've heard the last of them. They are building an amazing tool, and I'm excited to see how it evolves and continues to help communities deal with local crises and civic emergencies.


A Decade Later, Are We In Another Tech Bubble?

Lots of people are buzzing lately that we're in another "dotcom" bubble, roughly ten years after the last one. In mid-November, noted New York venture capitalist Fred Wilson described some "storm clouds" ahead for the tech investing space, including what he sees as unsustainable "talent" and "valuation" bubbles. This was around the time of the TechCrunch story about the engineer Google paid $3.5 million to stick around. Not long after that, Jason Calacanis of Mahalo fame wrote a brilliant edition of his email newsletter in which he outlined four tech bubbles he sees right now: an angel bubble (similar to Wilson's valuation bubble), a talent bubble, an incubator bubble (new firms cropping up to copy the successes of Y Combinator and TechStars), and a stock market bubble.

And the frothy news just keeps coming: Groupon this week allegedly turned down a $6 billion acquisition offer from Google (yes, that number has nine zeros and three commas in it [1]). Oh, and also: Facebook's valuation on SecondMarket is about $41 billion, making it #3 in the web space after Amazon and Google.

And, finally, there was a hilarious and depressing tweet going around yesterday from @ramparte.

But for me the proof was in two recent encounters with people decidedly not in the tech industry: my accountant and my banker. Each of them, upon learning what I do for a living, started talking to me about their tech business ideas. One was intriguing, one was, shall we say, vague, but everywhere I turn these days I feel like someone's trying to pitch me on their idea for a social network, a mobile application, or whatever. And who am I? I'm a nobody. Can you imagine how many pitches people like Fred Wilson and Jason Calacanis get? It must be absurd. And in any case, what most of these folks don't realize is that the idea is about 5% of a successful business. The remaining 95% is laser focus and nimble execution.

I feel lucky to be in technology right now--the economy is so crappy for almost everyone else. And that's got to be one of the driving factors of this bubble right now. It's one of the only healthy industries out there, and it's attracting people who are disenchanted with whatever sick industry they happen to be in. Other driving factors of course are the recent explosive growth in mobile computing, the maturation of the web development space (frameworks like Ruby on Rails and Django that make web app development almost frictionless), and the rise of APIs and web services that allow vastly different sites to integrate their offerings.

It's as if all the fishermen in the world have descended on one supremely awesome spot. A lot of people will catch a fish or two, some will catch enough that they'll never have to fish again, but most won't catch a thing.

[1] If anyone ever offers me $6 billion for anything, please remind me not to turn them down.
