Looking at the MySpace deal

Intermix Media, which is mostly MySpace.com (plus assorted spyware), was purchased by News Corp for $580MM yesterday.

A look at how the investors (VantagePoint and Redpoint) did by Bill Burnham:

For VCs, this sale is significant because it represents the first real payday in the social networking space, a space that to date has seen lots of VC hype but very little return. Just how big a payday was it for VCs? Thanks to the fact that Intermix was a public company it’s possible to take very educated guesses at how the VCs made out. There were two main VCs involved in MySpace/Intermix. VantagePoint had been involved with the parent company for some time, while Redpoint recently invested in MySpace itself.

Paul Kedrosky notes:

Oh my, but the MySpace.com acquisition for more than half-a-billion dollars is going to cause a VC-driven content train wreck. We already had startups falling out of trees making MySpace comparisons, now they’re going to be thick on the ground, with the “MySpace of X” and the “MySpace of Y”, and the “MySpace crossed with Google”, etc. etc. I shudder to think how many VCs will fund MySpace-alikes through a thought process like the following…

MySpace is an interesting phenomenon for many reasons. It’s wildly popular among the teen-to-mid-30s bracket, and is also largely invisible to people outside that group (i.e. 35+). It’s not quite a dating service, although this is clearly one of the core attractions of the site, along with music, gossip, classifieds, and blogging. It fills a lifestyle niche in a way that echoes the boom days of America Online, but rather than chat rooms, message boards, and a first glimpse of the internet, MySpace is drawing in many new users for their first experience with blogging, social software, and Web 2.0. It also draws some of the same criticism, of being insecure, hacked together, and technically lacking. Despite this, they’ve grown to a massive level of traffic and corresponding ad revenue in a short period of time. I don’t generally get how to make money on social web sites, but ad revenue on 7.5 billion page views (more than Google) I understand. All this while being virtually unknown to at least half the people I talk to.

Tim Oren on VCs and MSFT

There’s been a bit of traffic on whether VCs are “loyal” to Microsoft .NET, and more broadly on Longhorn et al. Tim Oren’s response is right on:

Rich rightly observes the lack of VC loyalty to any particular technology. You all know what we are loyal to, right? That’s right, long term capital gains.

Longhorn is tactically and strategically compromised. Tactically because it is grossly late, and keeps shedding features. Any venture that relied on it has already died on the road somewhere. Any business or product plan based on it has serious cred problems. Longhorn is strategically compromised because it is still fundamentally a play on the desktop.

Strategic leverage as negative indicator… Do you think the average VC would be happier today if they had made a bet five years ago on Longhorn dependent applications, .NET dependent web services, or a few XBOX titles? It’s the market where MSFT was unable to use its strategic leverage where it’s the most competitive. That ought to scare you.

The storage learning curve beat out even Moore’s Law, so we just keep everything now.

What have you done to help us as users? Staying too close to the desktop has let entrants like Google move right onto the pain point of the market without opposition.

Go read Tim’s post. Also, go check out his post on The Art of the Fast Take.

Battery Ventures — Not

Although a lot of US investors have been setting up shop in China, this is probably not what Battery Ventures had in mind: (via Silicon Beat)

There appears to be a fraudulent site or company in China masquerading as Battery Ventures, in what looks to be a first for a well-known venture capital firm.

But check out the Chinese clone site, at http://www.usa-big.com. Pretty identical to the real Battery site. About the only thing that’s really different is the logo. The clone drops the “V” and just carries the big “B”. Calling themselves the “American Battery Investment Group.” Suppose that makes it just fine then.

Apparently, Battery’s evil twin offers to invest in companies that pay a “processing fee”…

Reasons I still read newspapers

Despite getting most of my news through the internet these days, I still get daily paper editions of the San Jose Mercury and the Wall Street Journal, plus Barrons on the weekend. At a get-together this past weekend, one of my neighbors who works at the Mercury took an informal poll to see who was reading newspapers versus online news sources. As might be expected in Palo Alto, a lot of people have mostly moved to online news aggregators. A few thoughts:

Some reasons I still get a print subscription:

  • Habit: I like to read the paper with my breakfast and coffee, and don’t like having the notebook on the kitchen table while I’m eating.
  • Faster scanning: part of the lasting value that news organizations add is assembling items that are interesting and/or relevant (“here, look at this”). I can make a quick scan of current news in the Mercury and WSJ faster than going through selected Bloglines subscriptions or Google News.
  • Editorial and opinion pages. No shortage of commentary and opinion online, but syndicated writers usually don’t turn up online right away if at all, and in the paper they’re conveniently assembled onto a couple of pages.
  • Overview of local issues in the Bay Area. It’s hard (not impossible) to generate a quick view of local news and feature articles; services like topix.net can generate local feeds, but they’re not great.
  • Longer analysis and context for recent and upcoming news and events.
  • Calendar of local events and activities. This is good for when you don’t know what you want to do. Once you are looking for something specific, online is much better (e.g. movie times, concert tickets, etc).
  • I like looking through the full-page Fry’s Electronics ads in the Mercury. The weekly real estate section in the WSJ is often entertaining as well.
  • Comics are easier and faster to read in the paper. However, there are a lot more choices online.
  • They’re portable and don’t require batteries for a quick look while travelling.
  • We never need to buy rubber bands.

I rarely if ever look at these sections:

  • Financial quotes. If I want to know right now, I look online. If I’m researching, I want more info, which is also online. I still look through the weekly and quarterly summaries in Barrons, though.
  • Sports section. I subscribe to RSS feeds on the Red Sox and anything else I’m following.
  • Classified ads. Have pretty much moved to Craigslist, eBay, and other online services.

The future role of news organizations:

  • Citizen-journalists, and just bloggers on the ground, are churning out vast quantities of raw content, with a wide range in quality and veracity. Along with the traditional role of putting reporters on the ground, taking notes, and asking questions, news organizations could help filter and highlight “user-contributed” news items along with commercial and advocacy-oriented news feeds, placing them in context with “professional media” news items. For breaking large-scale news, such as the London bombings last week, they can scan the raw data and build a composite picture of what’s going on. They can also clarify what’s unknown, what hasn’t been asked, to help influence the actions of people on the scene.
  • I find that as I’ve been introducing people to news aggregators, I usually set them up with a “starter” set so it makes some sense, sort of like building a custom mini-newspaper of feeds I think they might find interesting. It would probably make sense for newspapers to start publishing collections of feeds in OPML or something similar, along with the RSS feeds that they’re starting to provide (a rough sketch of such a collection follows below). This would make it easier for people to “subscribe” to the newspaper, and get an overall view of what the newspaper’s editors think is interesting, which is probably a better starting point for the average person than what they get now (usually nothing).
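To make the OPML idea concrete, here is a minimal sketch in Python (the paper name and feed URLs are placeholders, not real endpoints) that writes the kind of "starter" feed collection an aggregator could import in one step:

```python
# Minimal sketch: build an OPML "reading list" of a newspaper's RSS feeds.
# The feed titles and URLs below are hypothetical placeholders.
from xml.etree import ElementTree as ET

feeds = [
    ("Front Page", "http://example-paper.com/rss/frontpage.xml"),
    ("Local News", "http://example-paper.com/rss/local.xml"),
    ("Opinion",    "http://example-paper.com/rss/opinion.xml"),
    ("Business",   "http://example-paper.com/rss/business.xml"),
]

opml = ET.Element("opml", version="1.1")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Example Paper: editors' starter feeds"
body = ET.SubElement(opml, "body")
for title, url in feeds:
    ET.SubElement(body, "outline", type="rss", text=title, title=title, xmlUrl=url)

ET.ElementTree(opml).write("starter-feeds.opml", encoding="utf-8", xml_declaration=True)
```

An aggregator that understands OPML reading lists could then subscribe to the whole editor-chosen set at once, instead of hunting down feeds one at a time.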

A VC anti-portfolio

Paul Kedrosky points out a fun bit of candor from the folks at Bessemer Venture Partners:

Most venture firms have mental lists of deals that they shoulda/coulda done, but didn’t, and yet only a vanishingly small number of such firms will ever list the deals that got away. After all, it makes you look intermittently dumb and human and risk-averse, and gunslinging venture sorts could never concede any of those things to be true.

Trust the iconoclasts at Bessemer, however, to maintain at least a partial public list — they call it their “anti-portfolio” — of the deals that got away:

Among the gems on the reject list: Google, eBay, FedEx, Intel, Intuit, Lotus, and Compaq.

To keep things in perspective, Bessemer has a track record of nearly 100 years of successful investments, starting out in 1911, with funds raised from the sale of Carnegie Steel. I’d be pretty pleased to be around long enough to have such a spectacular reject list…

Pros and Cons of Stealth Mode Startups

Point and counterpoint around Mark Fletcher’s (CEO of Bloglines) post last week, “Stealth Startups Suck”.

Here’s a sample of Mark’s post:

Why go fast? Many reasons:

  • First mover advantage is important.
  • There is no such thing as a unique idea. I guarantee that someone else has already thought of your wonderful web service, and is probably way ahead of you. Get over yourself.
  • It forces you to focus on the key functionality of the site.
  • Being perfect at launch is an impossible (and unnecessary and even probably detrimental) goal, so don’t bother trying to achieve it. Ship early, ship often.
  • The sooner you get something out there, the sooner you’ll start getting feedback from users.

  • Some people think that they need to stay in stealth mode as long as possible to protect their exciting new idea. I hate to break the news to you, but unless you’re Einstein or Galileo, your idea probably isn’t new. I have this theory. The success of a web service is inversely proportional to the secrecy that surrounded its development. There are exceptions of course. But I also think this can be applied to other things. Segway, anyone?

    Paul Kedrosky (Ventures West and UCSD) has written a good counterpoint, “Stealth Mode Startups Don’t Suck”.

    But you have to keep the role of stealth in context. It is a rational response to a marketplace with too much risk capital, low barriers to entry, and many entrepreneurial teams looking for ideas. Saying that many people will come to variants of the same idea at the same time is not the same thing as saying you should ring a bell and invite everyone and their favorite VCs to come and feast on your nascent startup.

    More from Mark Fletcher here, also see Russell Beattie’s Yeah, They Are Nice People

    Anyway, it’s not like 24 Hour Laundry needed any more buzz. But the discussion about the value of collaborative development and of marketing and validating with early users, versus handing over precooked plans to a competing team, illustrates some tradeoffs that are especially pronounced for new web businesses.

    Stealth mode can be a lame excuse for not shipping to real customers, but it can also keep your worked-out user interaction model from being handed over as the engineering spec for a team of offshore coders who couldn’t have put together the design on their own. On the marketing and alliances side, true stealth mode is less useful, since you pretty much have to tell prospective partners and customers what you’re about if you want to do business with them.

    update 2005-06-20 14:34 comments from Jeff Clavier, plus the Slashdot crowd weighs in.

    update 2005-10-04 17:12 PDT 24 Hour Laundry is Ning

    Small steps versus theorizing, Reboot7

    Lots of interesting posts and presentations coming from last week’s Reboot7 conference in Copenhagen. The attendees are predominantly involved with new internet applications such as blogging, tagging, peer-to-peer, voice over IP, social software, and collaborative development, all of which are new, fluid, evolving, and somewhat incompatible with existing business and social models. Progress in new and evolving fields can sometimes get bogged down in “Vision” or “Strategy”, so I’m happy to see this observation about the need and value of small steps from Johnnie Moore:

    A theme that seemed to run through Reboot7 was the advocacy of taking small steps over theorising. David Heinemeier Hansson, who built the web application framework Ruby on Rails, stressed the advantage of getting something basic up and running fast. In a presentation on The Skype Brand, Malthe Sigurdsson talked about getting out frequent, small revisions.

    Along similar lines, Scoble writes:

    I’m stuck with some images coming out of the Reboot conference last week: the power of being small.

    Lots of people were talking about the shipping power of small teams. Mostly due to Jason Fried’s talk.

    He’s turning out to be one influential developer. Why? Cause he, and two other coworkers, are churning out new features at a torrid pace. Here’s an example of his thinking about development teams: don’t write a functional spec. Whoa. I love his idea for what to do instead: write a one-page story.

    In an emerging, largely undefined area, taking small, concrete steps (albeit sometimes at a rapid pace) in a general direction can often uncover more “ground truth” more quickly, with fewer resources, than a fully investigated, heavily staffed program. Unfortunately, it’s often easier to explain a more comprehensive program, even though the size and overhead of the activity may place a fundamental handicap on it, making it less likely to succeed. There’s also a tendency to want to systematize everything at the outset, to try for the “grand unified theory of everything”, which can become crippling (the early days of XML and CORBA come to mind). In a new or emerging market, the “Great” can easily become the enemy of the “Good”, or “Useful”. Bear in mind, if it really is new, there’s a good chance it’s not going to be right on the first few tries, so best spend your resources wisely rather than making a wild bet that you’ve found the One True Answer.

    Within various corporate R&D and business planning settings, I’ve repeatedly seen that small, motivated teams (1-10 people) can often make substantial headway in new business areas by finding equally motivated customers and solving their needs quickly, frequently without official support (or oversight) from their management. These efforts are often crippled when they do gain “official” status, thus adding the need to be externally explainable in the team’s decision making process, and sometimes also gaining a requirement for a roadmap for world domination. If they survive this stage, most of these small, fast teams are crushed by the subsequent addition of dozens or hundreds of new people and the associated management overhead, organizational empire building, and huge burn rate, all added in an effort to staff up and implement the premature plan for world domination. The team is no longer fast and burns through huge resources committed to an inflexible and obsolete plan in an emerging market space. Oops.

    See also: Seth Godin’s Small is the New Big

    Caveat: Established markets really do need scale and structure. Sometimes Big is the New Big, too.

    Update 2005-06-16 10:16: Great Enough! (more from Seth Godin)

    If you don’t ship, it’s not really worth doing. More important, we’ve only got a finite amount of time and resources to invest in anything (thanks, Chris Morris). The real issue is this: when do we stop working on something (because it’s good enough) and work on some other element of the offering.

    Proxim Wireless assets sold to Moseley Associates

    Proxim has been struggling financially for a while, and today announced the sale of all assets to Moseley Associates.

    Proxim is the current home of the former Lucent / Agere / Orinoco 802.11 product line, which was ubiquitous a few years ago as wireless LANs became popular and before “WiFi” was a marketing buzzword for a notebook computer feature. They also own the former Western Multiplex Tsunami point-to-point wireless product line, after merging with that company a few years ago.

    I’ve always liked their gear, but the WLAN market is totally commoditized now (Linksys, D-Link, and assorted white label manufacturers), the enterprise market seems to be moving toward solutions such as Aruba and Trango, and the longer haul point-to-point market hasn’t really taken off, partially due to all the noise about WiMax (which has yet to become a deployable solution).

    Here’s what Proxim had to say to their customers about Moseley on their web site:

    Moseley, the parent company of Microwave Data Systems (MDS), Axxcelera Broadband Wireless, CarrierComm, and Moseley Broadcast, provides industry-leading wireless solutions for both point-to-point and point-to-multipoint applications for the industrial (SCADA), broadcast, broadband enterprise and carrier marketplaces. Combining our product lines will enable us to offer a differentiated portfolio of products covering spectrum from 900 MHz to 38 GHz, bringing us much closer to number one position in the market with both licensed and unlicensed broadband, Wi-Fi, and WiMAX technology for an extremely broad spectrum range.

    Well, that plus not totally going out of business. Hope they find a niche with some traction.

    update 2005-07-20 15:47 Hmm. Terabeam is ending up with Proxim instead.

    The Missing Mobile Device: A GPS Camera Phone

    Continuing on the topic of converged GPS/camera/phone devices, here’s a post from Wade Roush (writer for Technology Review) calling for the cellular operators to open up location information for 3rd party applications, and detailing some of the business and cultural reasons why this is taking a while.

    There are a lot of interesting technical hacks being cobbled together to provide location-aware and geotagged services, but the wireless phone carriers already have a lot of the infrastructure for this, and a near-total absence of applications.

    Coins of the Realm – graphic essay on comics and micropayments by Scott McCloud

    I enjoyed reading this, which is one of a series called “I Can’t Stop Thinking,” written and drawn by Scott McCloud on the future of comics, micropayments, and business models for publishing creative works online in general.

    These essays are relatively “old” now, written in 2001, but the presentation is fresh and many of the issues are still open.

    Also worth reading is a followup post by Sean Barrett.

    (Via Seth Godin)

    World DSL lines reach 107m in early 2005, China #1

    [Chart: Top 10 DSL countries by number of lines, Q1 2005. Source: Point Topic]

    From PointTopic, via Om Malik:

    This graph shows the top 10 countries by number of DSL lines deployed. China now has more than 19 million DSL lines in service, adding more than 2 million lines in the last quarter alone. Also worth noting: line growth in South Korea, Hong Kong, and Taiwan is negligible due to already-high penetration rates. Worldwide, 10.1 million lines were added in 1Q05.

    Sic transit gloria mundi – Clearing the rubble of the Internet Boom from my garage

    Tomorrow is trash day. Out on the curb in the paper recycling bin are a fine assortment of Red Herring, Industry Standard, Wired, and other print artifacts circa 1996-1997. The headlines bring back some memories…the Browser Wars, the rise of Java, Kim Polese on the cover of Red Herring, endless retrofits onto Windows 95, and the days when telecom and server hosting were booming businesses…

    I’d forgotten just how thick the magazines used to be, back when their ad revenues were growing without bound. They weigh a ton, all on nice clay coated glossy paper — the trash guys probably won’t be happy about it in the morning. Unfortunately, this hardly makes a dent in the pile of technorubble still out in the garage.

    China is run by engineers (really)

    This month’s IEEE Spectrum just turned up, and features an in-depth look at the state of technology in China. Excellent reading, whether you have a current interest in China or not.

    Among the interesting sidebars, a note that all nine members of China’s Politburo Standing Committee have an engineering background:

    • Hu Jintao – Tsinghua University, water conservancy engineering
    • Huang Ju – Tsinghua University, electrical engineering
    • Jia Qinglin – Hebei Engineering College, department of electrical power
    • Li Changchun – Harbin Institute of Technology, department of electrical machinery
    • Luo Gan – Freiberg University of Mining and Technology
    • Wen Jiabao – Beijing Institute of Geology, department of geology and minerals
    • Wu Bangguo – Tsinghua University, radio engineering
    • Wu Guanzheng – Tsinghua University, power department
    • Zeng Qinghong – Beijing Institute of Technology, automatic control department

    I had a suspicion that the US Cabinet is mostly lawyers, so I thought I would take a quick look:

    • Mike Johanns, Dept of Agriculture – Creighton University – lawyer
    • Carlos Gutierrez, Dept of Commerce – Monterrey Institute of Technology – Business
    • Donald Rumsfeld, Dept of Defense – Princeton, A.B.
    • Margaret Spellings, Dept of Education – Univ of Houston, political science, journalism
    • Samuel W. Bodman, Dept of Energy – Cornell BS ChemEng, MIT ScD
    • Michael Leavitt, Dept of HHS – Southern Utah University, economics, business
    • Michael Chertoff, Dept of Homeland Security – Harvard – lawyer
    • Alphonso Jackson, Dept of HUD – Washington University – lawyer
    • Gale A. Norton, Dept of Interior – University of Denver – lawyer
    • Alberto Gonzales, Dept of Justice – Harvard – lawyer
    • Elaine Chao, Dept of Labor – Harvard – MBA
    • Condoleezza Rice, Dept of State – Univ of Denver – PhD, international studies
    • Norman Y. Mineta, Dept of Transportation – UC Berkeley – MBA
    • John W. Snow, Dept of Treasury – Univ of Virginia – PhD, economics, George Washington University law degree
    • Jim Nicholson, Dept of VA – West Point
    • Dick Cheney, Vice President – Univ of Wyoming, BA, MA
    • Joshua Bolten, Office of Management and Budget – Stanford – lawyer
    • Stephen L. Johnson, EPA – George Washington University – MS – Pathology
    • Andrew H. Card, Jr, Chief of Staff – Univ of South Carolina – BS Engineering
    • Rob Portman, US Trade Representative – lawyer
    • John P. Walters, Office of Drug Control Policy – University of Toledo, MA

    The US political system is vastly different from China’s, so the US cabinet isn’t exactly equivalent to the Politburo Standing Committee, and everyone working at this level of government has significant political skills, but it’s interesting to observe the emphasis given to lawyering in the US cabinet.

    GMerge shut down by Google

    Well, this is not exactly unexpected. Google appears to have shut down the GMerge satellite-tile-assembling service I wrote about yesterday, as the assembled imagery is apparently outside the uses allowed by their terms of service.

    I had a quick look at the Google Maps TOS yesterday, as the thought had occurred to me that they might have some restrictions. It looks like making “copies” is out of bounds, although for an individual end user, the browser obviously needs to do at least a little copying. This is similar to a potential image licensing issue we encountered with the OpenPix image server a few years ago, which was also tile based.

    I’d like to see Google leave end users the right to use the data for non-commercial applications. Commercial users would generally prefer to pay for explicit data rights if there were a real need, and leaving the data available would allow the discovery of interesting applications for it, sort of like what’s going on with GMerge. Perhaps they should add a “Buy Now” button to the Google Maps interface for purchasing a license to use the displayed data.
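For background, here is a minimal sketch of what a tile-assembling service like GMerge does at its core. This is not GMerge's code; it assumes a grid of 256x256 tiles already saved locally under hypothetical filenames, and simply pastes them into one large image with Pillow:

```python
# Minimal sketch: stitch an ROWS x COLS grid of map tiles into a single image.
# Assumes tiles are already on disk as tile_<row>_<col>.png, each 256x256 pixels.
from PIL import Image

TILE = 256          # tile edge length in pixels (typical for web map tiles)
ROWS, COLS = 4, 4   # size of the grid to assemble

mosaic = Image.new("RGB", (COLS * TILE, ROWS * TILE))
for r in range(ROWS):
    for c in range(COLS):
        tile = Image.open(f"tile_{r}_{c}.png")
        mosaic.paste(tile, (c * TILE, r * TILE))   # (left, top) offset of this tile

mosaic.save("mosaic.png")
```

The terms-of-service question above comes down to whether saving out a stitched mosaic like this counts as making a "copy" beyond the incidental copying a browser does.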

    Update 2005-06-08 17:24: Discussion on Slashdot

    Overview of a current genomic data center

    Following up on yesterday’s post on biotech/genomics/proteomics, here’s an overview of a current state-of-the-art data center at the Sanger Institute, one of the main Human Genome Project sequencing centers. (Via Slashdot)

    Computers

    • Today: The datacenter hosts about 2,000 Alpha processors, a line originally designed by Digital Equipment (DEC) before its acquisition by Compaq, which was in turn acquired by Hewlett-Packard (HP).
    • Tomorrow: The Sanger Institute is looking at cheaper solutions, especially now that HP has officially stopped any development on the Alpha front.

    Storage

    • Today: Three different computer rooms have a total capacity of about 300 terabytes.
    • Tomorrow: The IT management forecasts about a petabyte within three years — at least.

    Databases

    • Today: There are about 40 different databases, and only two of them are in the 50-terabyte range.
    • Tomorrow: One of the databases, the Trace sequence archive, currently contains about 700 million entries, and it doubles every 10 months (a quick projection follows below).
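To put that doubling rate in perspective, here is a quick back-of-the-envelope projection (my own arithmetic, not a figure from the article):

```python
# Back-of-the-envelope: project the Trace archive size, assuming it keeps
# doubling every 10 months from roughly 700 million entries today.
entries = 700e6
doubling_months = 10

for months in (12, 24, 36):
    projected = entries * 2 ** (months / doubling_months)
    print(f"after {months} months: ~{projected / 1e9:.1f} billion entries")

# after 12 months: ~1.6 billion entries
# after 24 months: ~3.7 billion entries
# after 36 months: ~8.5 billion entries
```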

    Power bills

    • Today: The current equipment needs about 0.75 megawatts for a cost of €140,000 per year (about $170K).
    • Tomorrow: The new setup will need about 1.4 megawatts, which will raise the yearly bill to about €500,000 (about $615K today).

    Notes from KINCON 2005, biotech thoughts


    Some notes from day one at KINCON 2005 at the Palo Alto Crowne Plaza. Today’s sessions were technology-focused. Although this has traditionally been a Korean IT-related conference, and mostly chips and displays at that, the biotech presentations were the most interesting.

    The first session was on wireless technology, mostly aimed at services for mobile phones, such as ringtones and games. Korea is a good place to try launching these services, with a 76% wireless penetration rate and 90% of handsets capable of running games and multimedia. My observation — it’s hard to do much in this space with the mobile operators trying to extract fees from the customers and 3rd party service providers (in order to pay back their spectrum license fees). At least two of the speakers commented that most of the fee paid by the customer is to cover the billing costs.

    The second session was on consumer semiconductors. Everyone in this session is a fabless design house. The Xceive presentation on their fully digital TV tuner chipset is interesting. This is a completely silicon RF-to-baseband system, which apparently doesn’t require any external filters or shielding. They didn’t mention how much their first products were going to cost, but getting everything onto silicon means the cost is likely to trend downwards, rather than staying put as the existing analog tuner cards have. It also means that the physical packaging is much smaller, so it could go in a PDA or phone or be used to add a video input feature to an existing digital device. There are already a number of cheap video player widgets starting to turn up, sort of like video MP3 players, and having a silicon-only solution for RF TV-in is going to enable a lot of interesting combinations.

    The third session, on biotech, was the most interesting. Fan Hsu from the UC Santa Cruz Genome project gave a general overview of proteomics and functional genetics. Lewis Williams from Five Prime spoke about their process for screening thousands of candidate secreted proteins against specific cell functions, vs the old method of testing a single protein at a time. Stuart Kim from Stanford spoke about the need for a systemic view of protein function and the possibility of applying a broader engineering approach to modelling clusters of related gene expression. His project uses Affymetrix GeneChips or something very similar to test aging-related gene expression. I got the impression that it was something like the work being done at Perlegen a few years ago when they were doing their sequencing project, in that they were generating huge quantities of data without a good method for organizing and modelling the results. Each test chip returns something like 5,000 columns x 20,000 rows of results (on the order of 100 million data points per chip). The last talk was by Christopher Ko at Samsung Advanced Institute of Technology, who hopes to take GeneChip-like technology from a research environment down to something more user friendly, perhaps even to the point where an individual could run a test at home, sort of like a home glucose monitor.

    I noticed that a lot of people left before the biotech session started, perhaps because the usual audience for this conference was interested in “InfoTech” and not so much “Bio”. My observation here — the genomic and proteomic fields are just now reaching the point where information technology and systems engineering can become really useful. Another observation, though — pharma and health care product development are massively capital intensive and yet have very high risk per investment. The size of individual investments makes it difficult to place a lot of bets and plan on the portfolio paying off. On the other hand, it looks like there should be many opportunities to make a contribution toward advancing the state of the art, since the availability of data and computational tools is relatively new.

    This evening I saw the announcement for Maxtor’s latest generation of desktop hard disk drives, which will be 500GB per 3.5″ unit; similar units from Seagate and Hitachi are expected in a similar time frame. The entire human genome database apparently requires something like 3GB, and the annotations from various research projects bring the total up to 8-9GB at the moment. So the current and future generations of desktop (and notebook) computers will have more than enough raw storage to handle the data sets. Doing something useful with the data is another problem altogether, but the size of the genetic/proteomic database is relatively finite — it isn’t going to get exponentially bigger — and the computational resources continue to get exponentially larger/cheaper/faster. Something good has got to pop out of this somewhere…

    See also: Korea’s plans for Ubicomp City, Korea becomes the largest foreign investor in India

    Using SPA-3000 as Asterisk PSTN Trunk

    Step-by-step article on using the Sipura SPA-3000 for Asterisk PSTN trunking at GeekGazette, via Sineapps:

    For us serious Asterisk PBX geeks out there, the SPA-3000 provides a cost-effective means of bringing a PSTN trunk into the PBX while still functioning as an ATA. Not only can you use the SPA-3000 as an inbound and/or outbound trunk, you can also easily configure the SPA-3000 as a PSTN failover should the primary trunk into Asterisk fail. Considering what you can buy the SPA-3000 for right now, this is one of the best deals going.

    I see from the GeekGazette site that Slashdot has been here as well.

    This follows a recent firmware upgrade to the SPA-3000, as described at Voxilla a few days ago:

    The enhancements to the SPA-3000, a very popular adaptor among “do-it-yourself” VoIP enthusiasts because of its built-in gateway functionality, include an often-requested feature allowing PSTN calls to be routed directly to a VoIP destination without the SPA-3000 “answering” the PSTN line until the VoIP destination answers.

    Light Reading notes that today’s Q3 report from Cisco had “disappointing” performance in the advanced technology group (VoIP, wireless, security, and other “new” stuff), but

    Still, the IP telephony group “blew past” the $1 billion run rate, joining security in the billion-dollar club, Chambers said. Orders in storage networking cooled down, to “mid-single digits” sequentially, but that was after a 40 percent boom in the second quarter. Orders in wireless grew double digits sequentially and in the “high teens” compared with last year’s third quarter.

    Cisco is in the process of buying Sipura, which should help grow that $1B run rate as VoIP interfaces sprout in everything on the network.

    Update: 08-16-2005 20:46 – You can convert the SPA-3000 to a PhoneGnome, if you’re interested.

    Microsoft IP Ventures

    Microsoft has started a new site, microsoftipventures.com, giving broader exposure to intellectual property (IP) they are willing to license for commercial development by others.

    A quick look at their catalog shows entry dates starting around April 26th, although the actual items listed look like projects previously available elsewhere on the Microsoft Research site. It’s convenient to have everything in one place, however, and having a named MS activity with a stated interest in licensing these items might make it easier for a 3rd party to actually have a licensing discussion with MS.

    Article at Infoworld:

    Microsoft said the introduction of the IP Ventures program was a response to demand from venture capitalists for access to the company’s library of technologies.

    Microsoft IP Ventures:

    Our goal is to license these technologies on either an exclusive or non-exclusive basis via a combination of equity, upfront cash, or royalties. Contact IP Ventures to learn more about a particular technology, or browse the catalog.

    The State of Video Search

    Been thinking a bit lately about dealing with video, converged media, and search, and came across a couple of interesting pieces on video search and digital content in general: the first on John Battelle’s SearchBlog, which in turn references a longer article by Mark Glaser at the Annenberg Online Journalism Review.

    Ourmedia, Singingfish, and Brightcove are profiled briefly, along with Google Video Upload, Yahoo Video, and Open Media Network.

    From the Glaser article:

    Howe estimates there could be 300 million video streams online, but Singingfish has still only scratched the surface with just under 20 million streams indexed. Singingfish also crawls adult content — literally anything that’s legal — and includes a “Family Filter” with pretty conservative rules for what partner sites or individuals can filter out (including sex education material).

    Finally, Howe believes that there’s been a sea change at media companies when it comes to embracing video search. “There’s been a general recognition that they’re going to have to digitize their content, and if they’re going to digitize it, then they’re going to have to monetize it,” she said. “I think people have sort of gotten over themselves. They used to assume that people would just go to such-and-such site to find this wonderful content. Well, no, because people have so many options.”

    Some related thoughts:

    The metadata problem:
    It’s going to be difficult to make Google-style searches for video work without lots of Flickr- or Technorati-style tagging and other metadata sources. This can work for communities of motivated individuals sharing an interest in a topic or body of work, or for a successful commercial movie or television program. But for hours upon hours of uncut home videos from DV camcorders or the more recent digicams, even the owner probably won’t have the time or interest to tag the content enough to make it usefully searchable for most applications. Another problem — people generally have to see the video content to apply tagging or other metadata — and it’s just hard to get the data there…

    The distribution problem:
    Today’s internet isn’t well suited to moving large data files to and from end user sites (i.e. homes and most businesses). Relatively popular content can make use of peer-to-peer technology like BitTorrent to recruit end users to redistribute the original content, spreading the bandwidth demand out to other parts of the internet, but this only works if clients participate and if the content is popular enough to develop a network of clients with cached copies. Kontiki appears to be building a different peer-to-peer content distribution system.

    Commercial sites sometimes use content delivery networks such as Akamai / Speedera (currently in the process of merging) to move copies of media data such as video, audio, imagery, or multimedia (usually Flash or Java) closer to the expected network clients. The underlying source of the infrastructure funding is often the online advertising dollars spent in marketing campaigns for movie, television, music, automobile, and lifestyle products. This doesn’t mesh well with a grassroots model.

    If grassroots video is to become widely used, it needs to become accessible. On a good day, the DSL line to my home manages 1.5Mbps down and 384kbps up. If I want to share a few-minute digital video clip from the DV camera, it could easily be hundreds of megabytes, requiring more than an hour to upload to either a peer-to-peer network or a content server. I could also recode the video to make it smaller, which is a common practice for video publishers today, but this requires knowledge, tools, and time.
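To put a number on "more than an hour", here is the arithmetic for a hypothetical 300 MB clip over a 384 kbps uplink (the clip size is just an illustrative assumption):

```python
# Rough upload-time estimate for a home DSL line.
clip_mb = 300        # hypothetical multi-minute DV clip, in megabytes
uplink_kbps = 384    # typical consumer DSL uplink speed

bits_to_send = clip_mb * 8 * 1000 * 1000        # megabytes -> bits (decimal units)
seconds = bits_to_send / (uplink_kbps * 1000)
print(f"~{seconds / 3600:.1f} hours to upload")  # ~1.7 hours, ignoring protocol overhead
```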

    Community tagging and metadata for video?:
    Sites like Flickr allow a community of interest to build around tags representing common interests, and also allow a vocabulary of tags to evolve, along with a social network of people who find each others’ photos interesting. However, this presumes that people can actually see the content they’re tagging, which may be difficult for a while in the case of video. I’m assuming that Google Video Upload (and others) will probably do some basic tasks such as segmenting on scene changes, timecode breaks, and perhaps simple scene analysis. But without anything else to work with, search engines aren’t going to help much.
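As an illustration of the sort of "basic tasks" mentioned above, here is a minimal sketch of naive scene-change detection that compares grayscale histograms of consecutive frames. It uses OpenCV's Python bindings, the input filename is hypothetical, and the threshold is an arbitrary assumption rather than a tuned parameter; a real segmenter would be considerably more careful:

```python
# Naive scene-change detection: flag frames whose grayscale histogram differs
# sharply from the previous frame's histogram.
import cv2

THRESHOLD = 0.6                             # correlation below this is treated as a cut
cap = cv2.VideoCapture("home_movie.avi")    # hypothetical input clip

prev_hist = None
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None:
        similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
        if similarity < THRESHOLD:
            print(f"possible scene change at frame {frame_idx}")
    prev_hist = hist
    frame_idx += 1

cap.release()
```

Even crude segmentation like this only produces cut points; attaching meaningful tags to the segments still requires a person (or a community) actually looking at the footage, which is the point above.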

    Enough for now…

    Sipura purchased by Cisco for $68MM

    I have liked the Sipura products since they first came out a few years ago. The SPA products are widely used by VoIP service providers (Vonage, etc) for their feature set, flexibility, and low cost. We have been testing out Sipura adapters on the Kuppam network for the past few months, with good results, and I just received a new SPA-3000 the other day which I haven’t gotten around to setting up for use with Asterisk yet.

    Yesterday Cisco announced they will also acquire Sipura, which will be merged into Linksys.

    SAN JOSE, Calif., April 26, 2005 – Cisco Systems® today announced a definitive agreement to acquire privately-held Sipura Technology, Inc. This represents Cisco’s first acquisition for its Linksys division, the leading provider of wireless and networking hardware for home, Small Office/Home Office (SOHO) and small business environments. Sipura is a leader in consumer voice over internet protocol (VoIP) technology and is a key technology provider for Linksys’ current line of VoIP networking devices. In addition to Sipura’s valuable technology and customer relationships, their experienced team with extensive VoIP expertise will help build a foundation for Linksys’ internal research and development capabilities in voice, video and other markets.

    Under the terms of the agreement, Cisco will pay approximately $68 million in cash and options for Sipura. The acquisition is subject to various standard closing conditions, including applicable regulatory approvals, and is expected to close in the fourth quarter of Cisco’s fiscal year 2005 ending July 30, 2005.

    The Cisco/Linksys VoIP router/firewalls already use Sipura technology. Hopefully, this won’t slow down product innovation by the Sipura team, and will also leave them a path forward as VoIP capability becomes an embedded feature of other products rather than a standalone product itself.

    The founders, Jan Fandrianto (CEO) and Sam Sin (VP Engineering), sold their previous company, Komodo Technology, to Cisco; its product became Cisco’s ATA-186 VoIP adapter.

    More at Voxilla, Om Malik
