Bookmarks for March 9th through March 12th

These are my links for March 9th through March 12th:

Bookmarks for February 20th through February 21st

These are my links for February 20th through February 21st:

Bookmarks for February 15th through February 16th

These are my links for February 15th through February 16th:

Amazon aStore – custom storefronts for Amazon affiliates

Amidst the speculation about the Amazon Unbox video download service, Amazon has quietly launched aStores, a service providing custom online storefronts for Amazon affiliates. (You may not be able to view the link unless you’re an Amazon affiliate.)

aStore by Amazon is a new Associates product that gives you the power to create a professional online store, in minutes and without the need for programming skills, that can be embedded within or linked to from your website.

Here’s a link to their demo store.

You get to pick up to nine “featured items” for the home page of the store, choose product categories, and add reviews and editorial content. The shopping cart and fulfillment are handled by Amazon, with standard referral fees going back to the affiliate. There’s a browser-based interface for building a store on the Amazon Affiliates site, and the resulting store can be hosted by Amazon or on your own site.

This sort of functionality has been available for a while to those willing and able to customize their sites using Amazon’s web services API, but the aStores program will make custom stores broadly accessible to the entire Amazon affiliate base (just in time for the holiday shopping season). I suspect we’ll see an explosion of niche shopping sites in short order; it looks pretty easy to set one up.

The investor sentiment cycle, tech and web 2.0

Silicon Valley is built on optimism and entrepreneurship, but lately, most tech companies can do no good in the eyes of public market investors, who are presently in a mood to sell on no news, bad news, or even good news.

At the same time, private market sentiment toward investing in Web 2.0, online services, and consumer media and publishing appears to be positive.

Barry Ritholtz posted this investor sentiment graph, on which I’ve marked roughly where I think we are for “new web” startups versus public tech companies.

A lot of the problem with public tech company share prices is due to uncertainty about future prospects – a slowing economy, growing competition, increasing costs, and a general cloud of unknowable liability from options accounting issues. The actual businesses are often doing OK or even great, but investor sentiment has shifted, bringing down the share price. See BRCM, RACK, or NVDA for a few examples of what happens when you report a decent quarter without boosting forward guidance. Fewer people are willing to pay a 30 multiple for growth that may never come, or that may never have existed in the first place.
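The arithmetic behind that repricing is simple; a toy sketch of how a multiple contraction alone moves a share price, with all figures invented:

```python
# Hypothetical numbers: a sentiment-driven multiple contraction moves
# the share price even when earnings are unchanged.
def share_price(eps, multiple):
    """Price implied by earnings per share and a P/E multiple."""
    return eps * multiple

eps = 2.00                         # earnings per share, flat quarter to quarter
optimistic = share_price(eps, 30)  # investors paying 30x for expected growth
cautious = share_price(eps, 20)    # same earnings, repriced after sentiment shifts

print(optimistic, cautious)  # 60.0 40.0, a one-third drop with no change in the business
```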

In contrast, there is still a lot of investor love for Web 2.0 startups and other “new” online services. Part of this reflects supply and demand — there are a lot of investable funds around, and it’s hard for a fund to invest a lot of money in small chunks.

There’s still a lot of excitement about the future of Web 2.0 et al., but it’s been feeling overdone for the past few months (“Digg is worth $60M”), without necessarily being over. On the other hand, I get the impression that people here in Silicon Valley are still somewhere between denial (“It will go back up”) and fear (“What if it doesn’t?”) with respect to the medium-term prospects for tech stock share prices.

This investor sentiment graph and other graphs of economic cycles can also be found on the forecast page at Now and Futures.

Anyone think we’ve already hit a peak for Web 2.0 investments or a bottom for tech stocks? (I don’t.)

Future of Web Apps workshop

I had been trying to arrange my schedule to get to the Future of Web Apps workshop this week in London, but sadly things didn’t work out. Actually, I didn’t even manage to get to last night’s SearchSIG to see edgeio’s first public demo here in the Bay Area, so perhaps it’s not surprising I couldn’t get a trip to the UK sorted out.

The good news is, there’s a conference wiki with lots of presentation notes, including discussions of how Flickr evolved, thoughts on approaches to building discoverable URLs for data, the merits of Ruby on Rails, and a detailed discussion of the implementation approach and specific costs for the DropSend service.

Deconstructing search at Alexa

Wow! Although the basic idea is straightforward, crawling and indexing for a general purpose search engine requires huge resources. Web crawlers are effectively downloading copies of the entire internet over and over, turning them over to indexing applications which scan the contents for structure and meaning.

The sheer scale of the task is a substantial barrier to entry for anyone wanting to develop a new indexing or retrieval application. Some projects have narrowed the problem domain, which can reduce the problem scope to a manageable level, but this announcement from Alexa looks like it may offer an exciting alternative for building new search applications.

John Battelle writes:

Alexa, an Amazon-owned search company started by Bruce Gilliat and Brewster Kahle (and the spider that fuels the Internet Archive), is going to offer its index up to anyone who wants it (details are not up yet, but soon). Alexa has about 5 billion documents in its index – about 100 terabytes of data.

Anyone can also use Alexa’s servers and processing power to mine its index to discover things – perhaps, to outsource the crawl needed to create a vertical search engine, for example. Or maybe to build new kinds of search engines entirely, or …well, whatever creative folks can dream up. And then, anyone can run that new service on Alexa’s (er…Amazon’s) platform, should they wish.

The service will be priced on a usage basis: $1 per CPU hour, $1 per GB stored or uploaded, $1 per 50GB data processed.
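Those quoted prices make back-of-the-envelope budgeting easy; a quick sketch with made-up workload numbers:

```python
# Cost estimate for the Alexa platform pricing quoted above:
# $1 per CPU hour, $1 per GB stored or uploaded, $1 per 50 GB processed.
# The workload figures below are invented for illustration.
def alexa_cost(cpu_hours, gb_stored, gb_processed):
    return cpu_hours * 1.0 + gb_stored * 1.0 + gb_processed / 50.0

# e.g. a small vertical-search experiment: 100 CPU hours, 20 GB of
# results stored, one pass over 5 TB (5000 GB) of the crawl
print(alexa_cost(100, 20, 5000))  # 220.0 dollars
```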

There’s no announcement posted on the Alexa or Amazon sites yet, it’s apparently due out overnight. (Updated 12-13-2005 00:25 – the site is up now)

Not every search and retrieval application is necessarily going to fit the way Alexa has built its crawler and indexing infrastructure, or any other search engine platform, for that matter. But opening up access to more of the platform should make it possible for a lot of new ideas to be tried out quickly, without having to build yet another crawler for each project. Up to this point, many search ideas couldn’t be evaluated without working at one of the major search engines. I suspect most development teams would still prefer access to Google’s crawl and index data, but I’m certainly looking forward to seeing what’s available at Alexa when they get their documentation online in the morning.

More from Om Malik, TechCrunch, ReadWrite Web

How (and where) to download your bookmarks

Last Friday’s announcement that Yahoo is buying del.icio.us has probably got more than a few people thinking about the future of the service and whether they want to keep using it. In any case, as with all of the interesting and useful web services out there, it’s good to take time now and then to back up your personal data, in case something goes sideways and the service becomes unavailable or unusable for whatever reason.

I’m personally planning on continuing to use del.icio.us, although there are a number of interesting tagged bookmarking alternatives out there, including running your own.

The first step is to get your personal bookmark data, which can be retrieved through the del.icio.us API. The API will return an XML file containing all of your saved bookmarks, which can be saved to your local system and used as a backup or to import your bookmarks into another web application elsewhere.
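As a rough sketch of reusing that saved file locally: the flat `<posts><post .../></posts>` XML layout below is my recollection of the del.icio.us export format, so verify it against a real export before relying on it.

```python
# Build a tag -> URLs index from a saved bookmark export.
# The sample XML is a stand-in for the assumed export format.
import xml.etree.ElementTree as ET

sample = """<posts user="example">
  <post href="http://example.com/" description="Example" tag="demo test"/>
  <post href="http://example.org/" description="Another" tag="demo"/>
</posts>"""

def bookmarks_by_tag(xml_text):
    """Return a dict mapping each tag to the list of bookmarked URLs."""
    index = {}
    for post in ET.fromstring(xml_text).findall("post"):
        for tag in post.get("tag", "").split():
            index.setdefault(tag, []).append(post.get("href"))
    return index

print(bookmarks_by_tag(sample)["demo"])  # ['http://example.com/', 'http://example.org/']
```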

The next step is to decide what you want to do with the data. Some alternative tagged bookmarking solutions include:

The following services are based on open source projects, so you can (or in some cases have to) run your own bookmarking system.

Yahoo already runs My Web 2.0, which presumably will begin to merge with del.icio.us at some point. It has a lot of interesting features, but hasn’t offered enough to get me to switch over up to this point. I’ve been wanting private bookmarks and tags on del.icio.us for a while, although I think I’ll be moving those off my desktop onto a roll-your-own server solution.

Any more suggestions? Reply in the comments and I’ll pull them up to the main post.

Here’s an extensive list of free bookmark managers (via David Beisel).

Yahoo goes after more tagging assets, buys del.icio.us

Yahoo continues down the path of more tagging and more collaborative content. Having already purchased Flickr, this morning they’re acquiring del.icio.us (terms undisclosed):

From Joshua Schachter at the del.icio.us blog:

We’re proud to announce that del.icio.us has joined the Yahoo! family. Together we’ll continue to improve how people discover, remember and share on the Internet, with a big emphasis on the power of community. We’re excited to be working with the Yahoo! Search team – they definitely get social systems and their potential to change the web. (We’re also excited to be joining our fraternal twin Flickr!)

From Jeremy Zawodny at Yahoo Search Blog:

And just like we’ve done with Flickr, we plan to give del.icio.us the resources, support, and room it needs to continue growing the service and community. Finally, don’t be surprised if you see My Web and del.icio.us borrow a few ideas from each other in the future.

From Lisa McMillan, an enthusiastic user of all three services (comment on the del.icio.us blog):

Yahoo that’s delicious! I live here. I live in flickr. I live at yahoo. This is insane. You deserve this success dude. Just please g-d don’t let me lose my bookmarks :-D I’m practically my own search engine. LOL

Tagged bookmarking sites such as del.icio.us can provide a rich source of input data for developing contextual and topical search. The early adopters who have used del.icio.us up to this point are unlikely to bookmark spam or very uninteresting pages, and the aggregate set of bookmarks and tags is likely to expose clustering of links and related tags, which can be used to refine search results by improving estimates of user intent. Individuals are becoming their own search engines in a very personal, narrow way, which could be coupled to general purpose search engines such as Yahoo or Google.
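The clustering signal described above can be illustrated with a toy example: simply counting which tags co-occur on the same bookmarks already surfaces related topics. The data here is invented, and real systems would do far more than raw counts.

```python
# Count tag co-occurrence across a set of bookmarks; frequently
# co-occurring pairs hint at related topics.
from collections import Counter
from itertools import combinations

bookmarks = [  # (url, tags) pairs, invented for the example
    ("http://a.example/", {"python", "web", "tutorial"}),
    ("http://b.example/", {"python", "web"}),
    ("http://c.example/", {"python", "testing"}),
]

pairs = Counter()
for _url, tags in bookmarks:
    for t1, t2 in combinations(sorted(tags), 2):
        pairs[(t1, t2)] += 1

print(pairs.most_common(1))  # [(('python', 'web'), 2)]
```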

I think Google needs to identify resources it can use to incorporate more user feedback into search results. Looking over users’ shoulders via AdSense is interesting but inadequate on its own, because there are a lot of sites that will never be AdSense publishers. Explicit input capturing the user’s intent, whether through tagging, voting, posting, or publishing, is a strong indication of relevance and interest by that user. I think the basic Google philosophy of letting the algorithm do everything is much more scalable, but it looks like it’s time to capture more human input into the algorithms.

In a recent post, I pointed out some work at Yahoo on computing conditional search ranking based on user intent. The range of topics on del.icio.us tends to be predictably biased, but for the areas it covers well, I’d be looking for opportunities to improve search results based on what humans thought was interesting. As far as I know, Google doesn’t have any assets in this space. Maybe Blogger or Orkut, but those are very noisy inputs.

This seems like a great move by Yahoo on multiple fronts, and I am very interested to see how this plays out.

See also:

Update 12-12-2005 12:30 PST: No hard numbers, but something like $10-15MM with earnouts looks plausible. More posts, analysis, and reader comments: Om Malik, John Battelle, Paul Kedrosky.

Local Tag Cosmos

I’ve added a local tag cosmos, which shows a tag cloud for posts on this site. Unfortunately, I’m also using tags and bookmarks scattered across del.icio.us, Flickr, Technorati, and other services, which aren’t integrated into the cloud, but this provides a different view of what’s been posted here since I started tagging things.

I’m still evolving my personal use of tags. You can see that I’ve started tagging some posts with “web2.0“, although I’ve been reluctant to turn it into a site category. I don’t like the label, but I recognize that it’s the most popular tag for a lot of “new” stuff at the moment. So exposing the tag makes it more findable.

I’ve been debating reducing the number of post categories in favor of using frequently occurring tags for site navigation, so that recurring topics automatically make themselves more visible. It can be difficult to find things here, partly because I’m posting about a lot of different topics and partly because the categories don’t always organize the posts very well.

Tagging on this site is currently implemented using Jerome’s Keywords plugin for WordPress to apply tags to posts and for generating the tag cloud.
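For what it’s worth, the usual tag-cloud sizing calculation is simple to sketch (this is the generic approach, not necessarily what Jerome’s Keywords does internally): scale each tag’s font size between a minimum and maximum according to its frequency.

```python
# Map tag frequencies to font sizes between min_px and max_px.
def cloud_sizes(tag_counts, min_px=10, max_px=32):
    lo, hi = min(tag_counts.values()), max(tag_counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts match
    return {tag: min_px + (count - lo) * (max_px - min_px) // span
            for tag, count in tag_counts.items()}

print(cloud_sizes({"web2.0": 12, "search": 6, "maps": 1}))
# {'web2.0': 32, 'search': 20, 'maps': 10}
```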

Amazing customized Yahoo maps with Flash

Just when I’d started getting a little bored with the Google-based pincushion maps du jour, I came across something surprising built on the new Yahoo Maps API, from Justin’s Rich Media Blog:

With the power of Flash 8, you can customize the Yahoo! Maps on your site to actually blend with the surrounding design of the site or application. Forget about rectangular maps and the default colors of the map tiles. Use ActionScript, or the IDE, to add runtime filters to the map tiles themselves.

The radar “scan” is animated to rotate around, while the pirate map telescope also serves as the zoom level slider.

I’ve seen so many Google Maps applications in the past few months that the sheer novelty and utility value of new ways to access data and maps has started to wear off. These demos made me stop to take a look simply because they look so much better than what we’ve gotten used to lately, and are likely to precipitate a wave of interesting new ideas.

I’m ambivalent about requiring Flash as a client technology. It’s really neat, and is deployed on a lot of (but not all) browsers. It’s also somewhat opaque, and chews up a lot of system resources. I can usually tell when I’ve landed on a web page with Flash content somewhere, because the fan in my T42 starts spinning up after a few seconds instead of running dead silent.

But in the meantime, this made my day.

(via PhotoMatt)

Amazon Mechanical Turk – Putting Humans in the Loop

I came across a cryptic link to Amazon Mechanical Turk, asking “Isn’t that how the Matrix came to be?”

Amazon Mechanical Turk provides a web services API for computers to integrate “artificial, artificial intelligence” directly into their processing by making requests of humans. Developers use the Amazon Mechanical Turk web services API to submit tasks to the Amazon Mechanical Turk web site, approve completed tasks, and incorporate the answers into their software applications. To the application, the transaction looks very much like any remote procedure call: the application sends the request, and the service returns the results. In reality, a network of humans fuels this artificial, artificial intelligence by coming to the web site, searching for and completing tasks, and receiving payment for their work.

All software developers need to do is write normal code. The pseudocode below illustrates how simple this can be.

 read(photo);
 photoContainsHuman = callMechanicalTurk(photo);
 if (photoContainsHuman == TRUE) {
     acceptPhoto(photo);
 } else {
     rejectPhoto(photo);
 }
Given the source of the link, I was a little skeptical at first read, but it appears to be a legitimate beta project that just launched yesterday at Amazon. At least, the documentation links point back into Amazon Web Services, and at least one person seems to know someone there.

This is an interesting idea that should find some useful applications. Spammers have supposedly been doing something like this to defeat the image-based Turing tests used to screen comment posting systems, offering access to porn in exchange for solving the puzzles, and there are other anecdotes of using low cost offshore labor for similar tasks. Having a simpler web service interface for finding a human key operator somewhere will probably allow smaller and more experimental applications to emerge.

Update 11-04-2005 08:09 PST – Slashdot, TechDirt, Google Blogoscoped on Mechanical Turk, pointer to BoingBoing on porn puzzles and spam,

Follow the Money – Microsoft Windows Live, Google, and Web 2.0

Some thoughts following the Microsoft splash this week:

The big PR launch for Windows Live last Tuesday announced a set of web services initiatives. It probably drives a lot of Microsoft people crazy to have the technology and business resources that they do, and to have so little mindshare in the “web 2.0″ conversations that are going on. I haven’t read through or digested all the traffic in my feed reader, but it looks like a lot of people are unimpressed by the Microsoft pitch. Been there, done that. Which is true, as far as I can see. The more interesting question is whether this starts to change the flow of money and opportunities around developing for and with Microsoft products and technologies.

If I do a quick round of free association, I get something like this:


  • corporate desktop
  • security update
  • vista delayed
  • who’s departed this week

Microsoft is a huge, wildly profitable company. It initially got there by being “good enough” to make a new class of applications and solution developers successful in addressing and building new markets using personal computers, doing things that previously required a minicomputer and an IT staff. Startup companies and individual developers that worked with Microsoft products made a lot of money, doing things they couldn’t do before. All you needed was a PC and some relatively inexpensive development tools, and you could be off selling applications and utilities, or full business solutions built on packages like dBase or FoxPro.

Microsoft made a lot of money, but the software and solutions developers and other business partners and resellers also made a lot of money, and the customers got a new or cheaper capability than what they had before. Along the way, a huge and previously non-existent consumer market for IT equipment and services also emerged. Meanwhile, the market for expensive, low end minicomputers and applications disappeared (Wang, Data General, DEC Rainbow, HP 98xx) or moved on to engineering workstations (Sun, SGI, HP, DEC/MIPS) where they could still make money.

The current crop of lightweight web services and “web 2.0” sites feels a little like the early days of PC software. In addition to recognizable software companies, individual developers would build yet another text editor or game and upload it to USENET or a BBS somewhere, finding an audience of tens or hundreds of people, occasionally breaking out into mass awareness. Bits and pieces are still around, like ZIP compression, but most of it has disappeared or been absorbed and consolidated into other software somewhere. I have a CD snapshot of the old SIMTEL archive from years ago that’s full of freeware and shareware applications that each had a modest following somewhere or another. Very few people made any money that way. In the days before the internet, distribution of software was expensive, and payment meant writing and mailing a check, directly from the end user to the developer.

Google has become a huge, wildly profitable company so far by building a better search engine to draw in a large base of users, and then using that platform to do a better job of matching relevant advertising to the content it indexes. Now, a small application can quickly find an audience by generating buzz on the blogging circuit, or through search engines, and receive two important kinds of feedback:

  • Usage data – what are the users doing and how is the application behaving
  • Economic data (money) – which advertising sponsors and affiliates provide the best return

Google’s AdSense and other affiliate sales programs are effectively providing a form of micropayments, creating incentives and funding for new content and applications, with no investment in direct sales or payment processing by the developers, and no commitment from the individual end user.

It’s simply a lot easier for a small consumer-targeted startup to come up with a near-term path to profitability based on maximizing the number of possible clients (i.e., cross-platform, browser-based), being able to scale out easily by adding more boxes (without hassling with tracking and paying for additional licenses), and having a short path to revenue (i.e., AdSense, affiliate sales). A developer who might have coded a shareware app in the ’80s can now build a comparable web site or service, find an audience, and actually make a little (or a lot of) money. Google makes a lot of money from paid search ($675MM from AdSense partner sites in 3Q05), but now some of that money is flowing to teams building interesting web applications and content.

In contrast, in the corporate environment (where it’s effectively all Microsoft desktops now), things are different. Most organizations won’t let individuals or departments randomly throw new applications onto the network to see what happens. This is a space that usually requires deep domain expertise, and/or C-level friends, in order to get close enough to the problems to do something about them. But the desktops all have browsers, and the IT managers don’t want to pay for any more Windows or Oracle licenses than they are forced to, so there’s some economic pressure to move away from Windows. But there’s also huge infrastructure pain if your company is built on Exchange. There’s less impetus here for new features; the issue is to keep it secure, keep it running, and make it cost less. Network management, security, and application management are all doing OK in the enterprise, along with line-of-business systems, but these are really solutions and consulting businesses in the end. The fastest way to get “web 2.0” into these environments is for Microsoft to build these capabilities into their products, preferably in as boring but useful a way as possible. It’s not a friendly place for trying out a whizzy new idea, and generally a hard place for a lightweight software project to crack.

On another front, Microsoft also has most of the consumer desktop market, but by default rather than by corporate policy. Mass market consumers are likely to use whatever came with their computer, which is usually Windows. They’re also much more likely to actually click on the advertisements. Jeremy Zawodny posted some data from his site showing that most of his search traffic comes from Google, but the highest conversion rates come from MSN and AOL. MSN users also turn out to be the most valuable on an individual basis, in terms of the effective CPM of those referrals on his site.
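Effective CPM, revenue per thousand impressions, is the measure behind that comparison; a minimal sketch with invented figures (not Jeremy’s actual data):

```python
# Effective CPM: revenue earned per thousand impressions or referrals.
def ecpm(revenue, impressions):
    return revenue / impressions * 1000

# A smaller referrer can be worth more per visitor:
print(ecpm(5.00, 10_000))  # about 0.50: large, low-converting traffic source
print(ecpm(3.00, 2_000))   # about 1.50: smaller source, triple the eCPM
```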

So let’s see:

  • Many new application developers are following the shortest path to money, presently leading away from Microsoft and toward open source platforms, with revenue generation by integrating Google and other advertising and affiliate services
  • Microsoft has access to corporate desktops, as well as mainstream consumer desktops, where it’s been increasingly difficult for independent software developers to make any money selling applications
  • Microsoft is launching a lot of new me-too services in terms of technical capability, but which will have some uptake by default in the corporate and mass market
  • Microsoft’s corporate users and MSN users are likely to be later adopters, but may be more likely to be paying customers for the services offered by advertisers.
  • Microsoft could attract more new web service development if there were some technical or economic incentives to do so; at present it costs more to build a new service on Microsoft products, and there’s little alignment of financial incentives between Microsoft, prospective web application developers, and their common customers and partners.

Mike Arrington at TechCrunch has a great set of play-by-play notes from the presentation and a followup summary. He thinks the desktop gadgets and VOIP integration are exciting.

what really got me today was the Gadget extensibility and the full VOIP IM integration.

In the past, Microsoft grew and made a lot of money by helping a lot of other people make money. Today, the developers are following the money and heading elsewhere, mostly to Google. This could quickly change if Microsoft comes up with a way to steer some of their valuable customers and associated indirect revenue toward new web application developers. They are the incumbent, with huge market share and distribution reach. I don’t think they’ll ever have the “cool” factor of today’s web2.0 startups, and I don’t think they’ll regain the levels of market share they have had in the past with Windows, Office, and Internet Explorer. But they could be getting back in the game, and if they come up with a plan to make some real money for 3rd party web developers we’ll know they’re serious.

Whizzy update to Yahoo Maps

Yahoo rolled out a major update to Yahoo Maps this evening, bringing it back on par with Google Maps, with a full set of web APIs for building mapping applications.

From the Yahoo Maps API overview:

Building Block Components

Several Yahoo! APIs help you create powerful and useful Yahoo! Maps mashups. Use these together with the Yahoo! Maps APIs to enhance the user experience.

  • Geocoding API – Pass in location data by address and receive geocoded (encoded with latitude-longitude) responses.
  • Map Image API – Stitch map images together to build your own maps for usage in custom applications, including mobile and offline use.
  • Traffic APIs – Build applications that take dynamic traffic report data to help you plan optimal routes and keep on top of your commute using either our REST API or Dynamic RSS Feed.
  • Local Search APIs – Query against the Yahoo! Local service, which now returns longitude-latitude with every search result for easy plotting on a map. Also new is the inclusion of ratings from Yahoo! users for each establishment to give added context.
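As a hedged sketch of consuming a REST geocoding result like the ones described above: the XML here is a simplified stand-in for the general shape of such responses, not the exact Yahoo! schema.

```python
# Parse latitude/longitude out of a geocoding-style XML response.
# The response layout is an assumed, simplified example.
import xml.etree.ElementTree as ET

response = """<ResultSet>
  <Result precision="address">
    <Latitude>37.416</Latitude>
    <Longitude>-122.025</Longitude>
  </Result>
</ResultSet>"""

def first_point(xml_text):
    """Return (lat, lon) for the first geocoding result."""
    result = ET.fromstring(xml_text).find("Result")
    return (float(result.findtext("Latitude")),
            float(result.findtext("Longitude")))

print(first_point(response))  # (37.416, -122.025)
```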

They also spell out their free service restrictions:

Rate Limit

The Simple API that displays your map data on the Yahoo! Maps site has no rate limit, though it is limited to non-commercial use. The Yahoo! Maps Embeddable APIs (the Flash and AJAX APIs) are limited to 50,000 queries per IP per day and to non-commercial use. See the specific terms attached to each API for that API’s rate limit. See information on rate limiting.

This restriction is more interesting:

Sensor-Based Location Limit

You may use location data derived from GPS or other location sensing devices in connection with the Yahoo! Maps APIs, provided that such location data is not based on real-time (i.e., less than 6 hours) GPS or any other real-time location sensing device, the GPS or location sensing device that derives the location data cannot automatically (i.e. without human intervention) provide the end user’s location, and any such location data must be uploaded by an end-user (and not you) to the Yahoo! Maps APIs.

So uploading a track log after running or hiking is OK, but doing a live GPS ping from your notebook, PDA, or cell phone to show where you are isn’t? I think this is intended to exclude traffic and fleet tracking applications, but it seems to include geocoded blog maps by accident. I don’t think they’d actually mind that.

There are several sample applications to look at. The events map seems nicely done, pulling up locations, images, and events for venues within the search window.

To display appropriate images for events, local event output was sent into the Term Extraction API, then the term vector was given to the Image Search API. The results are often incredibly accurate.

I’ve been meaning to take a look at the Term Extraction service; it looks like it might be a handy tool for building some quick-and-dirty personal meme engines or other filters for wrangling down my ever-growing list of feeds.
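The pipeline in the quote is easy to mock up. Both steps below are naive stand-ins for the real Term Extraction and Image Search services; the stopword list and frequency scoring are my own simplifications.

```python
# Toy pipeline: extract salient terms from event text, then turn them
# into an image-search query string.
import re
from collections import Counter

STOPWORDS = {"the", "a", "at", "in", "of", "and", "on", "for"}

def extract_terms(text, n=3):
    """Naive term extraction: top-n non-stopword words by frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]

def image_query(terms):
    """Join terms into a query string (would go to an image search API)."""
    return " ".join(terms)

event = "Jazz festival at the Fillmore: jazz quartets and jazz vocalists"
print(image_query(extract_terms(event)))
```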

Announcement at Yahoo Search Blog

More from TechCrunch, Jeremy Zawodny, Chad Dickerson

Web Two Point Oh

Andrew Wooldridge has built a web application which will instantly generate a web2.0 buzzword-compliant startup name and concept.

Web Two Point Oh!
Create your own Web 2.0 Company

Below you will find a pre-created VC friendly Web 2.0 company just for you!

Hit reload to create another potential million dollar idea

Some of the candidates I got were:

  • Rieeent – rss-based dating via ajax
  • Riink – rss-based blogs via Ruby on Rails
  • zVonowy – community apps via microformats
  • Tripkoent – greasemonkey extension for photos via bittorrent
  • Tripya – social news on the desktop
  • Yahonomodoo – web-based search engine via api mashups
  • Tripelihub – social apps via microformats

Just to be safe, he adds an editorial footnote:

Note: this is just a little programmatic satire. Any semblance to an actual company is purely accidental and not intentional! It’s supposed to be funny :)

Before too long, someone may start to automatically generate examples of these on Ning or something along those lines…

See also: The Cambrian Age 2.0, The Home Pages of this New Era

The Cambrian Age 2.0

Haven’t been keeping up on feedreading due to this cold for the past few days. I’m still kind of out of it, but I finally went to see the doctor and the prescription meds seem to be knocking it back a couple of notches.

I see that Riya (née Ojos) is about to start their public alpha trials. Mike Arrington got an early look today. Looking forward to trying it out.

During the past few days of semi-coherent downtime, I was having something like the following thought, which Russell Beattie articulated in a post this evening:

All these startups in my feeds lately are killing me! There are tons of them, but none seem to be doing anything particularly special. I mean, it’s nice that there’s a sort of rebirth of small startups, but there’s absolutely no sort of wow factor that I’ve seen. And no, this isn’t an anti-Web 2.0 style backlash: I really believe in the idea of the web as a platform. Amazon and eBay’s web services are perfect examples of platforms which have created huge value for both companies, as well as the developers using their APIs. That’s not the problem. It’s all these Flickr-wannabes, flip-it-quick companies that are bugging me.

He goes on with a quick taxonomy of web2.0 startups:

  • Scrape Engines
  • Mashed Ups
  • Web Trapps
  • Social Anything
  • Phile Sharing
  • User Generated Dis-Content
  • RSS Holes
  • Firefoxing

It just seems that no one is trying to change the world any more. No one is aiming to create “insanely great” products or do the impossible. Why not? Why are so many people grasping at the low-hanging fruit, when there’s so much more goodness for everyone if they just stretched a little higher?

A lot of people are wrestling with the question of where all this “web2.0” stuff is going to take us. Many of the past barriers to entry have dropped, with free software, nearly free hardware, inexpensive and widely connected networks, and lower cost labor than ever before. As a predictable consequence, a lot more ideas are getting tried out. Unlike before, we now have tools and infrastructure that allow what would have been just paper concepts or slideware to be turned out as functioning web sites on the internet.

We presently have a wonderful but frenzied universe of new-this-mashed-with-that-meets-flickr. Trying to think about it during this cold has been making me dizzy, but I had visualized something like an old science movie illustrating the Cambrian Explosion, in which life suddenly went from fairly simple cell-based organisms to a diverse assortment of fascinating, squiggly, twitching, wiggling, multi-colored creatures, bumping into each other, gobbling each other up, and most of which subsequently disappeared.

Obviously, not everything worked out. But:

Aside from a few enigmatic forms that may or may not represent animals, all modern animal phyla except bryozoa appear to have representatives in the Cambrian, and most except sponges seem to have originated just after or just before the start of the period. Many extinct phyla and odd animals that have unclear relationships to other animals also appear in the Cambrian. The apparent “sudden” appearance of very diverse faunas over a period of no more than a few tens of millions of years is referred to as the “Cambrian Explosion”.

OK, so we’ll have a few winners left standing after the next crash.

The economics for many of the web 2.0 startups are driven by companies like Google, Yahoo, and Amazon. The existence, success, and low entry requirements of pay-per-click advertising and affiliate sales have shaped their implicit revenue plans, while the cheap-or-free access to and straightforward implementation of the web service APIs have shaped their technology and investment (or spending) plans. They also provide the possibility that “if you build it, we will buy you”.

So, any number of things could bring an abrupt end to our web2.0 “Cambrian Age”. Here are some random possibilities:

  • Paid search gets screwed up by click fraud, spamblogs, or other abuse, thus removing money from the system
  • XML patent guys make some headway, thus making lawyers central to the system
  • Avian flu crosses over to humans and puts a dent in the globally mobile elements of society

For a quick view of the landscape for web2.0 startups, go check out Paul Kedrosky’s presentation slides from the Vancouver Enterprise Conference, titled “Get Your Head Out of the F*cking Tag Cloud”:

  • The cost of customer acquisitions is falling
  • The cost of infrastructure is falling
  • The cost of people is falling
  • The cost of company creation is falling
  • The cost of venture capital is falling

  • Web 2.0 is the democratization of technology entrepreneurship.

I like the taxonomy and evolution slide near the end. Wish I could remember what movie the “explosion of life” clip was from. Probably one of those old Bell System science movies. (I saw Hemo the Magnificent the other day.)

Now I’m off for another round of meds…

See also:
The Home Pages of this New Era
At least it’s not Avian Flu (yet)

The Home Pages of this New Era

Pithy comments on Charlie O’Donnell’s post “I’m off eHubwatch!” and a followup:

“I think the web-based features that are appearing all over the place will be the home pages of this new era — many will be abandoned by their developers and left to die a slow death once the developers realize that they don’t have many long-term users. And others will be cultivated and slowly grow into businesses. In that respect, I think Ning is the new GeoCities.” – Scott Moody

“…that sounds right on. And it looks like Squidoo will be the new About. This whole web 2.0 thing is getting pretty retro….” – Pete Cashmore

There has always been a place for speculative ideas and proposals. The difference is that now, many of the ideas can be tried out with relatively little time and money; specifically, those that relate to consumer-ish web services. These can achieve the appearance of depth and capabilities that they may not actually have yet, or ever, though…

An analogy:
As a young research guy, I obtained one of the first prototype HP LaserJets in the early ’80s, when everyone else was using dot-matrix printers or handwritten transparencies for presentations. For some time after I started using it, people would occasionally complain about my speculative and draft memos and presentations when they were circulated for discussion, because they looked too “official”, even though all I had done was write up some notes and format them for printing. People were getting the impression that a draft was a done deal, because before that, memos and presentations didn’t get cleaned up until they were pretty far along. In most cases, they really were intended as starting points for discussion.

I think of many of the new web services as being something like those draft notes, but wrapped up in code rather than print. Seeing a storyboard works better than reading a requirements document for many people, and working with a live system works better than walking through a storyboard. The underlying code at many of these sites often just represents an idea in development, rather than a plan for a product or even a feature. But it works far better than a written paper would to communicate an idea, and flinging it out onto the internet can help find an audience for the idea, if not the business.

Teen Web2.0 Panel – Give me IM, not Skype!

Among the many interesting bits from last week’s Web2.0 conference in San Francisco, Kareem Mayan’s notes from Safa Rashtchy’s panel discussion among five Bay Area teenagers are fascinating.

Paraphrased snippets:

  • iTunes is a great way to find what’s available to download on BitTorrent
  • Why would I want a CD player?
  • Spend most free time on MySpace, Facebook and AIM
  • Don’t use eBay because you’ll get ripped off
  • Froogle Rocks!
  • Tivo is too expensive
  • What’s Skype?

Q: What more do you want out of instant messenger?
Sean: “Just that: instant messenger.”
Q: Would you like to see video on IM?
Sean: Ummm, no, I’m trying to talk to my friends…! (applause)

I’ve noted before the vast behavioral gap between the over-35 and under-35 crowds: the younger crowd lives on MySpace and other online connected services, where their lives are simultaneously shared with much of the public and largely opaque to the over-35 crowd.

Jeff Clavier notes that in the teen panel discussion they never mention Flickr, Skype, Yahoo, or blogs (although MySpace has a strong blog component).

Five teenagers from the Bay Area who get onto a panel session are probably not completely typical of the demographic, but it’s a fascinating data point.

Update: more notes from Jeffrey McManus and Gene Becker, via Jeremy Zawodny

Search Attenuation and Rollyo

“Search attenuation” is a new term to me, but seems like a good description of the process of filtering feeds and search results to a manageable size. As more content becomes available in RSS, I tend to subscribe to anything that looks interesting, but am looking for improved methods for searching and filtering content within that set.

Catching up a little on the feed aggregator, I see an article at O’Reilly about Rollyo, a new “Roll Your Own Search Engine” site from Dave Pell of Davenetics.

ROLLYO is the latest mind warp from Dave Pell. Rollyo affords anyone the ability to roll their own Yahoo!-powered search engine, attenuating results to a set of up to 25 sites. And while the searchrolls (as they’re called) you create are around a particular topic (e.g. Food and Dining), they are also attached to a real person (e.g. Food and Dining is by Jason Kottke). The result is a topic-specific search created and maintained by a trusted source.

Rollyo’s basic premise is one I’ve been preaching of late: attenuation is the next aggregation …

Recently, I’ve been looking at this from a related angle: how to infer topical relevance among people or sources you trust, based on links, tagging, search, and named-entity discovery. People are already linking, tagging, and searching, so some data is available as a byproduct of work they’re already doing. On the other hand, if enough people you trust take the additional step of explicitly declaring the sources they think are relevant, that would help a lot.
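The mechanics of attenuation are simple enough to sketch. Here’s a minimal, hypothetical version in Python, assuming nothing about Rollyo’s actual implementation beyond the core idea of filtering results against a hand-picked “searchroll” of up to 25 trusted sites (the domains and URLs below are illustrative):

```python
from urllib.parse import urlparse

# Hypothetical "searchroll": a hand-picked whitelist of trusted sites,
# capped at 25 in the spirit of Rollyo's limit.
SEARCHROLL = {"kottke.org", "oreilly.com", "davenetics.com"}
MAX_SITES = 25

def in_searchroll(url, roll=SEARCHROLL):
    """True if the URL's host is one of the trusted domains or a subdomain."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in roll)

def attenuate(results, roll=SEARCHROLL):
    """Filter a list of result URLs down to the trusted set."""
    assert len(roll) <= MAX_SITES
    return [r for r in results if in_searchroll(r, roll)]

# Example: raw search hits, attenuated to the searchroll.
hits = [
    "http://www.kottke.org/05/10/rollyo",
    "http://spam.example.net/buy-now",
    "http://radar.oreilly.com/archives/rollyo.html",
]
print(attenuate(hits))
```

The same whitelist test would apply just as well to feed items as to search hits, which is what makes the “attenuation is the next aggregation” framing appealing: the trusted-source list is the reusable asset, not the particular search engine behind it.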

See also Memeorandum, Findory, Personal Bee.

More on this from TechCrunch