An expensive typo – first day trading in Tulip IT


As financial systems become more automated, there are more opportunities for humans to key in an extra zero or transpose a digit or two, with instantaneous results. Last week there was yet another “fat finger” trade, this time in India on the Bombay Stock Exchange, during the first day of trading for Tulip IT Services. Someone offered to sell shares at 25 paise (less than one US penny), a fraction of the market price of 171 rupees (around US $3.80), and found many takers.

It looks like the buyers are going to have to pay the market rate after all, though.

The BSE today said some trades executed at 25 paise in the shares of Tulip IT Services on Thursday will not be settled at this rate.

The exchange said trades executed below Rs 96 would be transacted at Rs 171.15. This means that investors who bought shares at 25 paise will have to pay Rs 171.15 and sellers will get the money at such rate.

A follow-up article in the Hindu Business Line recaps some notable fat-fingered trades, and notes:

However, the record in fat finger trading is still held by a trader of Mizuho Securities, the broking arm of the Mizuho Financial Group of Japan. The trader had managed to sell shares worth £1.6 billion in a local recruitment agency, J-Com, which had just been floated and had a market value of little more than £50 million. The December 8, 2005, “sell” order, was mistakenly placed for 600,000 shares, despite the fact that J-Com had only 14,000 shares in issue. The order had created chaos in the market and had resulted in a 301-point fall on Japan’s main stock market index, the Nikkei 225.
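Both of these orders would have failed even rudimentary pre-trade sanity checks. As a rough illustration (the function, names, and thresholds below are mine, not any exchange's actual rules), here's roughly what such a check might look like:

```python
# A minimal sketch of pre-trade sanity checks. The names and
# thresholds are hypothetical, not any exchange's actual rules.

def validate_order(quantity, price, last_price, shares_outstanding,
                   max_deviation=0.20):
    """Flag orders that are wildly out of line with the market."""
    errors = []

    # The Mizuho order was for 600,000 shares of a company with
    # only 14,000 shares in issue.
    if quantity > shares_outstanding:
        errors.append("quantity exceeds shares outstanding")

    # The Tulip IT order was priced at 25 paise against a market
    # price around Rs 171, a deviation of nearly 100%.
    deviation = abs(price - last_price) / last_price
    if deviation > max_deviation:
        errors.append(f"price deviates {deviation:.0%} from last trade")

    return errors

# The Tulip IT sell order: 0.25 rupees against a 171.15-rupee market.
print(validate_order(1000, 0.25, 171.15, shares_outstanding=5_000_000))
# -> ['price deviates 100% from last trade']
```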

Along these lines, there are some entertaining (staged, I think) cameraphone photos of the Mizuho trading floor over at TripleWitchingFriday (via Paul Kedrosky). More on the Mizuho trade here.

Update: 02-17-2006 12:16 PDT – Various participants in the Mizuho trade are donating 20 billion yen (US $170 million) to a fund for improving Japan’s stock trading system.

VoicePulse – Hasn’t signed up new subscribers since November 2005 due to E911?


The VoicePulse signup problem I described earlier today seems both worse and sillier than before. They apparently stopped signing up new subscribers at the end of November 2005, due to non-compliance with the FCC E911 requirements. They’re currently doing integration testing with Intrado for 911 service as well as negotiating with the FCC on what constitutes an acceptable solution, with an expected resolution sometime in January 2006.

Here’s someone who ran into a similar signup problem (although I didn’t get a warning prompt about no E911 today):

It turns out that Voicepulse isn’t selling new service at all right now. Of course it’s all the big bad FCC’s fault (never mind the fact that many other VOIP providers are selling new service at the moment, and many of them are providing usable 911 service.) I’m sure the FCC is making it hard on these providers, since the old-line phone companies are pulling the strings, but a) other companies are currently selling new service (I proved this to myself, I ordered VOIP service from a known-good provider) and b) many of these other companies are providing 911 and E911 services.

I spoke to a Voicepulse representative who did confirm that they’re not selling ANY new service at all, and don’t know when they will be again. Of course, he said it would be “soon” and the delay was entirely because they were waiting for replies from the FCC. When I commented that it might be a good idea to announce that BEFORE potential customers spend 20 minutes filling out information on their site only to be told that they couldn’t buy anything, he said that “had been discussed in meetings and it was decided to put the message where it is because that’s where the 911 disclaimer already was in the ordering process.” I suggested that he start looking at the help-wanted ads, because I didn’t think an inbound phone sales rep was going to have a job very long at a company that isn’t selling anything, and it couldn’t be satisfying to answer calls from irritated potential customers all day.

My existing VoicePulse line has been working fine, and they haven’t asked for E911 location profile data yet. I have been following the news on VOIP E911 requirements over the past few weeks, but was under the (false) impression that most of the US VOIP service providers had gotten various combinations of deadline extensions from the FCC and technical solutions in place.

This thread lists the current E911 status of US VOIP providers as of January 8th:

[VoicePulse] Not Taking New Orders? (DSL Reports)

Tried to order VP today and was rejected because of the 911 fiasco. So I can’t even order it even if I understand and agree to the 911 situation?

Nope, thanks to the FCC they need to get e911 before they can sell service again.

No, applies only to the VOIPs that failed to get their 911 house in order during the time allowed by the FCC. Of the well-known brands that would include Voicepulse, Lingo, Nuvio. The others managed to get it done and are selling right now: Vonage, Sunrocket, Viatalk, Packet 8, Broadvox, ATT CallVantage (in about 70% of their markets.)

I’m astonished that VoicePulse appears to have gone for nearly two months with a known-broken signup process (and presumably no new subscribers) without mentioning that detail on their website. They also appear to have a lot of company.

It looks like I’ll need to do a bit of work to find an alternate provider, assuming that VoicePulse isn’t able to take orders by tomorrow. I’m trying to set up a phone number in the Malibu, California service area, and would prefer to use an existing SPA-2002 or SPA-3000, rather than buying another adapter. The E911 aspect is irrelevant as the physical IP connection will be here in the Bay Area most of the time but forwarded to various other locations.


VoicePulse – how not to implement a customer feature transition


I just got off the phone with VoicePulse, my current VOIP service provider. They are demonstrating how not to manage a web service feature transition today, by both turning away new customers and annoying their existing ones.

I’ve been relatively happy with VoicePulse, having signed up with them a few months ago for commercial US PSTN access. The voice quality and stability have been OK, and they also offer IAX access, which I was thinking about using for future integration with our Asterisk implementation.

All day today I’ve been trying to add a new device and a new number to my existing account. The signup process requires entering the serial number and MAC address from the VOIP adapter (in this case, a SPA-2002 I picked up a few days ago), selecting a telephone number, and providing contact and billing information. I noticed that since I signed up for my account a few months ago, they’ve started collecting E911 contact information and added some verbiage explaining the limitations of VOIP’s 911 service (i.e. they don’t really have any idea at all where you are).

The process only takes a few minutes, so I’ve been trying it in between various other tasks today, expecting that it wouldn’t take very long. Each time I’ve tried it, I get an error page at the end.

Sorry!

You have encountered a problem while going through the ordering process. This is usually due to your session expiring if the browser was left unattended for too long.

If you have encountered an error with our ordering system, VoicePulse’s development team has been automatically notified.

Please close this window, go back to www.voicepulse.com in a few minutes and try again. If you continue experiencing problems, please call 732-339-5100 M-F 9am-7pm EST to place your order with a customer service representative.

The first couple of times it seemed vaguely plausible that the session might have timed out, but the third time I went straight through all the forms, now well practiced and fully equipped with all the information. Still got the error message. This time I called the customer service number.

According to the Voicepulse phone rep, their system is unable to accept any new orders at all today. They’re apparently rolling out changes to their order application, related to the E911 service that I observed during the signup process. Here are some observations:

  • The VoicePulse customer service rep I spoke with didn’t learn about their phone order application being out of service until this morning. You’d think that they’d give their own CSR team advance notification about a planned application outage.
  • The VoicePulse web application team didn’t bother to build a page indicating that they were unable to accept new orders, and that customers keying in any user account data (like me) would be wasting their time.
  • The VoicePulse web application team left the existing failed-signup message in place. Although “true”, it’s misleading, since the site failure has absolutely nothing to do with the session timeout, and they know that the order process could never have worked in its current state.
  • It didn’t sound like they had a committed “time to fix” — the CSR said it should be tomorrow afternoon sometime, but the fact that they didn’t tell their own CSRs about it until this morning makes me think it might not have been planned. They suggested I call back tomorrow to see if it was working before trying to place an order. Ugh.

I can’t think of a good rationale for not blocking new orders on their site and putting up a maintenance message of some sort. Maybe they didn’t want people to know they couldn’t take orders?

I can’t think of a good rationale for not telling the customer service department ahead of time.

I suspect that most customers would be unhappy about keying in the 12-digit MAC and 12-digit serial number, along with their credit card data, and having Voicepulse’s order processing application choke on it repeatedly, especially when VoicePulse already knows it won’t work. A lot of customers don’t know how to cut and paste from the Sipura’s configuration page, and are vaguely uncomfortable giving out their credit card numbers online as well.

I am a relatively patient person, but I’m astonished at the poor planning and execution exhibited at Voicepulse today. They either can’t plan and manage basic site upgrades, or they’re trying to hide some unexpected maintenance work.

If anyone has a VOIP carrier that they actually like, as opposed to simply tolerate, let me know. I may be looking for a new service provider soon.

SearchSIG – January 2006


This evening’s SearchSIG featured a panel discussion on tagging and social bookmarking.

L-R: Joshua Schachter (del.icio.us), Kevin Rose (Digg), Michael Tanne (Wink), Manish Chandra (Kaboodle)

Charlene Li (from Forrester) moderated.

The room at Yahoo was full — standing room only. A quick show of hands indicated nearly everyone in the room had used tagging services before.

There was discussion of “how can we trust the tags?”, tag spam (Charlene’s term was “spag”), discerning intent from user tagging and other actions, the problem of tagging users themselves, and the range of social gestures built into the various systems.

Joshua used the example of receiving LinkedIn connection requests from someone whose name you don’t recognize. You don’t want to accept it, because you don’t know who it is. You don’t want to reject it, because it would be rude, and you might actually know them. So he has a huge backlog of random connection requests piling up in his inbox.

Someone in the audience commented that between keyworded search and tagging, people are starting to lose grammar, and instead come up with “restaurant san francisco cool” instead of complete sentences.

Participation rates: Wink assumed 5-8% of their users would tag; actual usage is 30-40% active (but they’re just launching and are picking up a lot of knowledgeable early adopters through word of mouth). Digg gets around 20% of their traffic from registered users (who don’t exactly tag, they just digg). Kevin says Digg has around 140K registered users, generating around 4M pageviews per day.

Charlene wrapped up the Q&A with some predictions for the upcoming year:
1. The rise of some sort of social link and social standing system to “rate” users
2. Some sort of social “disaster” will occur on one of the new services, despite best efforts to prevent social disease from creeping in.
3. Today’s companies are mostly small, smart startups. In a year there will be a different cast of characters: mainstream media, search engines, and bigger players.

Thanks to Jeff Clavier and Dave McClure for organizing another great session.

Tabula Rasa


The moving finger writes; and having writ, Moves on…

The Rubaiyat of Omar Khayyam

IBM T60 and X60 will run for 11 hours on a charge?

I’ve been pretty happy with my T42P, but I think nearly everyone wants longer battery life. I’ve been debating switching to a smaller form factor for a while, and it might be time to keep an eye out for the X60. Something like an X41 with an 11-hour run time would be really tempting. The best I can manage on the T42 is around 5-6 hours with the 9-cell battery.

CES: Lenovo says new ThinkPads go 11 hours on battery power

Update 1-10-2006 22:01 PST: Specifications and photos of the X60 and T60 from NotebookReview.com. It looks like an antenna sticks out slightly on the right side of the T60 display. Perhaps it’s for the EVDO service?

My new running blog

I’ve signed up for the Big Sur Marathon again. I’m also splitting off the running posts so regular readers here won’t have to wade through my training posts and other running minutiae.

This is also giving me a chance to try out WordPress 2.0 over there before updating the main site.

My new running blog is at hojohnlee.com/running.

Terrorists in Bangalore?

Catching up on my backlog of feeds, I found some discouraging news from Bangalore:

Last Thursday’s Times of India:

An armed assailant killed a retired IIT Delhi professor and injured four others in a daring assault on delegates of an international conference at the premier Indian Institute of Science (IISc) on Wednesday evening.

The unidentified attacker — police aren’t sure whether more than one person was involved in the strike — fired indiscriminately through his AK-47 rifle from the parking lot at delegates coming out of the auditorium after the second day’s deliberations ended.

One person was killed, two other attendees were shot, and a hand grenade (which misfired) was found in the driveway.

This doesn’t appear to have been a large or politically interesting conference. MC Puri, the retired professor who was killed in the attack, was one of 36 attendees at an operations research symposium.

To put this in context: if Bangalore is the Silicon Valley of India, this would have been sort of like going to an information theory conference at Stanford or a Palo Alto hotel and ending up under attack with guns and hand grenades. It’s not something we worry about here today, and it hasn’t been something people have worried much about there, up to this point.

In a Times of India update today:

Central security agencies have established links between the mastermind of the attack on Indian Institute of Sciences in Bangalore and Pakistan-based terrorist group Lashkar-e-Taiba, sources said.

Three persons were detained–two in Bangalore and one in Hyderabad–in connection with the attack on the evening of December 28 and security agencies have found evidence of their links with Al Hadees group based in Bangladesh and Saudi Arabia, the sources said.

I wouldn’t make too much of an international or domestic terrorism angle yet, as the situation is evolving, but it’s definitely worth keeping an eye on as there is clearly potential for a lot of disruption.


Starting the New Year with a Bang

We’re having a nice quiet New Year’s Day here, with traditional Korean duk mandu gook, a beef broth soup with dumplings and sliced rice cake. The custom is that you are one year older when you have duk mandu gook on New Year’s Day. I’m not totally sure about that part, but in any case we usually have soup on New Year’s Day.

I also have some more recent New Year’s traditions, such as cleaning out my e-mail inbox and archiving the previous year’s data, although I’ve mostly managed to take a break from computers during vacation. Outside it’s been wet and windy, making it a good day to stay indoors.

In contrast, my friend Andy has had a more exciting weekend: the latest round of storms here in the Bay Area toppled a huge tree onto his house. Fortunately no one was hurt, and he has an amazing set of photos.

Wow!

Happy New Year!

Most Popular Posts of 2005

As 2005 comes to a close, here’s a look back at some of the top posts this year based on page views, which turn out to be a mix of technology, business, travel, and the random.

Go to Sleep!

Go to sleep!

Why Link Farms (used to) Work

I tripped over a reference to an interesting paper on PageRank hacking while looking at some unrelated rumors at Ian McAllister’s blog. The undated paper is titled “Faults of PageRank / Something is Wrong with Google’s Mathematical Model”, by Hillel Tal-Ezer, a professor at the Academic College of Tel-Aviv Yaffo.

It points out a fault in Google’s PageRank algorithm that causes ‘sink’ pages that are not strongly connected to the main web graph to have an unrealistic importance. The author then goes on to explain a new algorithm, with the same complexity as the original PageRank algorithm, that solves this problem.

After a quick read through this, it appears to describe one of the techniques that had been popular among some search engine optimizers a while back, in which link farms would be constructed pointing at a single page with no outbound links, in an effort to artificially raise the target page’s search ranking.
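To see why pointing a link farm at a single outlink-free page is attractive, it helps to run the textbook power iteration on a toy graph. This is a hedged sketch of the classic algorithm (not Google's actual implementation; the graph and damping factor are illustrative):

```python
import numpy as np

# Toy PageRank power iteration. Pages 0-2 are the link farm; page 3
# is the spam target with no outbound links (a 'sink').
links = {0: [3], 1: [3], 2: [3], 3: []}
n = len(links)
d = 0.85  # damping factor

rank = np.full(n, 1.0 / n)
for _ in range(100):
    new_rank = np.full(n, (1.0 - d) / n)  # teleport term
    for page, outlinks in links.items():
        if outlinks:
            for target in outlinks:
                new_rank[target] += d * rank[page] / len(outlinks)
        else:
            # Standard dangling-node fix: spread the sink's rank
            # uniformly so total rank mass is conserved.
            new_rank += d * rank[page] / n
    rank = new_rank

print(rank.round(3))  # the sink target ends up with over half the rank
```

Even with the dangling mass redistributed, the farm funnels most of the rank into the target page; the paper's argument, as I read it, is that naive treatments of such weakly connected sinks make the distortion worse.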

This technique is less effective now than in the past, because Google has continued to update its indexing and ranking algorithms in response to the success of link spam and other ranking manipulation. Analysis of link patterns (SpamRank, link mass) and site reputation (Hilltop) can substantially reduce the effect described here. Nonetheless, it’s nice to see a quantitative description of the problem.

See also: A reading list on PageRank and Search Algorithms

The Return of Vinyl


It’s been a long time since I’ve had a working turntable at home. This evening I suddenly have lots of new old stuff to listen to.

There’s a divide in the music I’ve been listening to for the past ten years or so. I packed away the records and turntable around the time our daughter was born, thinking that I’d put it all back together when she was old enough not to destroy the records. So, ten years later, I have a fairly large collection of digital music and a large collection of analog recordings, which don’t overlap much, but which have been languishing in storage.

I’m happy to find that the turntable still works. Modern stereos don’t have phono inputs, so I ended up rummaging in the garage to dig up an old amplifier, which makes for a large but serviceable preamp. Right now I’m listening to Brian Eno’s Music For Airports.

Looking through the boxes I’ve hauled out so far is like receiving a musical time capsule from myself. There are a lot of albums I haven’t heard in a while and that Emily’s never heard at all. Tomorrow I think I’ll see how she likes J. Geils Live or The Roches. The plan is to gradually migrate the vinyl to digital and put it on the server with everything else, but this evening I’m just enjoying a bit of analog technology and album artwork the way it was meant to be.

I haven’t started researching the best solution for digitizing the albums and possibly cleaning up scratches, pops, clicks, and surface noise. Anyone have a favorite method they’d like to recommend?

Random Dreamhost issues

In case you were wondering where the site went, the past 24 hours or so have brought a string of random issues with Dreamhost.

Yesterday afternoon they were having connectivity problems, which took all their customers offline for a few hours.

This morning, I discovered that this site was running, but all Dreamhost sites were unreachable via SBC/PacBell here in the Bay Area. From the logs it looks like Comcast and a variety of overseas networks were still able to connect. The Google proxy hack mentioned this morning on O’Reilly provided another quick path for looking at the web site from a different network to verify that connectivity was still working, at least from the Google data center.

A couple of hours ago I got what I thought was a response to my e-mail regarding the network connectivity problem, but which turned out to be one of the CPU utilization warning letters that have been going out lately:

[your] CPU minute usage for today is 56.15. The daily limit is 60 CPU minutes. You will continue to receive these notifications as long as your resource consumption is over 50 CPU minutes.

A little mysterious, since traffic to the site was down because of the network outage, and spam traffic hadn’t spiked either.

There aren’t any resource utilization logs posted yet. I wonder if the flaky networking over the past day contributed to the high CPU use by leaving a lot of processes around waiting for I/O that was arriving slowly or not at all.
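If that hunch is right, the host would show processes stuck in uninterruptible sleep (state “D” on Linux, which usually means blocked on I/O). Here’s a quick diagnostic sketch, assuming a Linux shell account where `ps` is available; whether a shared host lets you see much is another question:

```python
# List processes in uninterruptible sleep (state 'D'), which on Linux
# usually indicates a process blocked waiting on I/O.
import subprocess

out = subprocess.run(
    ["ps", "-eo", "state,pid,etime,time,comm"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines()[1:]:  # skip the header row
    if line.lstrip().startswith("D"):
        print(line)
```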

Anyway, the site seems to be running normally as of this afternoon (or at least, I can get to it now).

See also: Dreamhost load average = 1004.16?

Googlepark: the battle for AOL


More business comics – the latest installment of Googlepark is up at Channel 9 (via Google Blogoscoped).

If you haven’t seen the previous episodes of Googlepark, here are links to the other installments: Googlepark.

Dilbert VC comics


Dilbert meets Vijay, the world’s most desperate venture capitalist.

See also: VC Comic Strips, GooglePark

Filtering, aggregating, searching, and monetizing the Long Tail

David Hornik asks: Where’s the Money in the Long Tail?

It is certainly the case that in the aggregate, Long Tail content is extraordinarily valuable. The question for VCs and entrepreneurs is “for whom?”

The real money is in aggregation and filtering and those will continue to be interesting businesses for the foreseeable future.

He points out that aggregators are building convenient one-stop shopping for people looking for topically-focused content, and derive economic value even when the content publishers do not.

David Beisel follows the money a little further:

…in the long run, the value of the network is not only determined by the number of nodes in it, but in the ability for the network to monetize those nodes.
…in calculating the value of a network, any equation describing it should contain a variable with the monetization rate (or proxied by the value to the user which can be monetized in the future). So while the number of nodes in a network surely is a fundamental (if not the majority, in many cases) driver of value, the value of the network itself to the user is also a very important component to the overall total.
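To make that concrete, here’s a toy formulation of my own (not Beisel’s actual equation): weight a Metcalfe-style node-count term by a monetization rate.

```python
# A toy illustration of Beisel's point (my formulation, not his):
# two networks with identical node counts can differ in value by an
# order of magnitude once the monetization rate enters the equation.
def network_value(nodes, monetization_rate, k=1.0):
    # k * nodes**2 is the Metcalfe-style connectivity term;
    # monetization_rate scales it by how well each node is monetized.
    return k * nodes**2 * monetization_rate

print(network_value(1_000_000, 0.02))  # weakly monetized: 2.0e10
print(network_value(1_000_000, 0.20))  # well monetized:   2.0e11
```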

Being the provider of a filtered view of online content is somewhat analogous to being an editor at a magazine or newspaper, a program director at a radio station, or an A&R rep at a record label. It usually doesn’t make sense to pursue some topics or styles as there’s either no audience, or a very low value audience, or an audience that’s too hard to reach.

Conversely, some publications do well on a very small base (financial newsletters and independent musicians come to mind). When the individual publisher (writer, musician, artist) develops their own audience, they are able to capture more of the value placed on their content by the consumers of content (readers, listeners, viewers) than when they are simply one of many aggregated content producers. People seek out their favorite writers in newspapers and magazines, talk show hosts on television, or musicians in local concerts. The content producers gain relative power over the distributors and a few can become their own branded media empire. (Think “Oprah”.)

From an investment point of view, it’s difficult to justify betting on any particular content producer becoming an online media star, for the same reasons aspiring writers/musicians/actors don’t get VC investment. (How are you going to know when you’ve got the next J.K. Rowling or Dan Brown on your doorstep looking for seed funding to write their book?)

In contrast, search, filtering, and aggregation services can be built for specific audiences. The trick, though, is not just to find an audience, but to provide a service that is valuable over time to the audience, the service provider, and the content publishers. The Alexa Web Search Platform announcement this week is interesting not because it’s the best general purpose search engine, but because it may drop the effective cost of building targeted filtering and aggregation services low enough to uncover some interesting new niches, beyond the areas already being addressed by vertical search startups. Many of these niches may be profitable short-term projects for a small team (or a single person) without being durable enough to be investable, though.

Greg Linden adds:

Massive selection isn’t enough. To make the long tail accessible, irrelevant items should be hidden. Interesting items should be emphasized. Millions of poor choices should be reduced to tens of good ones. The value is in surfacing the gems from the sea of noise.

David Beisel has some suggestions:

Where’s my “social portal” for me as a skier enthusiast? Better yet, where’s the “About.com of social portals?” Or why isn’t About.com more social?

I suspect that someone will have that social portal for skiing enthusiasts in limited beta somewhere real soon now…

See also: The Home Pages of this New Era

Greenfuel – producing biofuel from smokestack emissions

algae biofuel reactor

Greenfuel Technologies creates bio-fuels or bio-diesel from the emissions of power plants and industrial facilities. The company’s system is being tested at MIT’s 20-megawatt power plant and it has an open invitation to other power plants. Its system produces raw oil stock from smokestack gases, reducing carbon dioxide emissions by 40% and nitrogen oxide emissions by 86%.

The system works by passing the smokestack emissions through an algae cultivation system, which captures the carbon dioxide and also breaks down NOx. The algae can eventually be processed into biodiesel fuel.

via alarm:clock

See also: How Algae Clean the Air (Business 2.0, October 2005), Is Algae in your future? (Boston Museum of Science)

Deconstructing search at Alexa

Wow! Although the basic idea is straightforward, crawling and indexing for a general purpose search engine requires huge resources. Web crawlers are effectively downloading copies of the entire internet over and over, turning them over to indexing applications which scan the contents for structure and meaning.

The sheer scale of the task is a substantial barrier to entry for anyone wanting to develop a new indexing or retrieval application. Some projects have narrowed the problem domain, which can reduce the problem scope to a manageable level, but this announcement from Alexa looks like it may offer an exciting alternative for building new search applications.

John Battelle writes:

Alexa, an Amazon-owned search company started by Bruce Gilliat and Brewster Kahle (and the spider that fuels the Internet Archive), is going to offer its index up to anyone who wants it (details are not up yet, but soon). Alexa has about 5 billion documents in its index – about 100 terabytes of data.

Anyone can also use Alexa’s servers and processing power to mine its index to discover things – perhaps, to outsource the crawl needed to create a vertical search engine, for example. Or maybe to build new kinds of search engines entirely, or …well, whatever creative folks can dream up. And then, anyone can run that new service on Alexa’s (er…Amazon’s) platform, should they wish.

The service will be priced on a usage basis: $1 per CPU hour, $1 per GB stored or uploaded, $1 per 50GB data processed.
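At those rates, a back-of-the-envelope estimate is easy. The workload numbers below are my own guesses for a small vertical search experiment, not anything Alexa has published:

```python
# Rough cost estimate at the announced rates. The workload figures
# are hypothetical guesses for a small vertical search experiment.
CPU_HOUR_USD = 1.00          # $1 per CPU hour
STORED_GB_USD = 1.00         # $1 per GB stored or uploaded
PROCESSED_50GB_USD = 1.00    # $1 per 50 GB of data processed

cpu_hours = 500       # guess: scanning and parsing a slice of the crawl
stored_gb = 50        # guess: extracted results kept on their platform
processed_gb = 2000   # guess: ~2% of the ~100 TB index

cost = (cpu_hours * CPU_HOUR_USD
        + stored_gb * STORED_GB_USD
        + (processed_gb / 50) * PROCESSED_50GB_USD)
print(f"${cost:,.2f}")  # -> $590.00
```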

There’s no announcement posted on the Alexa or Amazon sites yet, it’s apparently due out overnight. (Updated 12-13-2005 00:25 – the site is up now)

Not every search and retrieval application is necessarily going to fit the way Alexa has built their crawler and indexing infrastructure, or any other search engine platform, for that matter. But opening up access to more of the platform should make it possible for a lot of new ideas to be tried out quickly, without having to build yet another crawler for each project. Up to this point, many search ideas couldn’t be evaluated without working at one of the major search engines. I suspect most development teams would prefer access to Google’s crawl and index data, but I’m certainly looking forward to seeing what’s available at Alexa when they get their documentation online in the morning.

More from Om Malik, TechCrunch, ReadWrite Web

Bangalore to be renamed Bengaluru

Looks like Bangalore is in line for an official renaming to either “Bengaluru” or “Bengalooru”. Times of India:

Chief minister N Dharam Singh told reporters in Gulbarga on Sunday: “We will rename Bangalore as Bengaluru on November 1, 2006, to mark the launch of Karnataka’s Golden Jubilee year – Suvarna Karnataka – on that day. I have issued a directive to chief secretary B K Das in this regard.”

The name, however, may undergo another change, for Ananthamurthy told The Times of India: “The name should be Bengal-oo-ru.” The CM spelt it out as Bengal-u-ru.

See also: Bangalore boom, traffic congestion
