Bookmarks for January 20th through January 23rd

These are my links for January 20th through January 23rd:

  • Data.gov – Featured Datasets: Open Government Directive Agency – Datasets required under the Open Government Directive through the end of the day, January 22, 2010. Freedom of Information Act request logs, Treasury TARP and derivative activity logs, crime, income, agriculture datasets.
  • All Your Twitter Bot Needs Is Love – The bot’s name? Jason Thorton. He’s been humming along for months now, sending out over 1250 tweets to some 174 followers. His tweets, while not particularly creative, manage to be both believable and timely. And he’s powered by a single word: Love.

    Thorton is the creation of developer Ryan Merket, who built him as a side project in around three hours. Merket has just posted the code that powers him, and has also divulged how he made Thorton seem somewhat realistic: the bot looks for tweets with the word “love” in them and tweets them as its own.

  • Building a Twitter Bot – "Meet Jason Thorton. To people who know Jason, he is a successful entrepreneur in San Francisco who tweets 4-5 times a day. But Jason has a secret, he’s not really a human, he’s the product of my simple algorithm in PHP

    Jason tweets A LOT about the word “love” – that’s because Jason actually steals tweets from the public timeline that contain the word “love” and posts them as his own

    Jason also @replies to people who use the word “love” in their tweets, and asks them random questions or says something arbitrary

    It took me about 3 hours to code Jason – imagine what a real engineer could do with real AI algorithms? Now realize that it’s already a reality. Sites like Twitter are full of side projects, company initiatives, spambots and AI robots. When the free flow of information becomes open, the amount of disinformation increases. There’s a real need for someone to vet the people we ‘meet’ on social sites – will be interesting to see how this market grows in the next year"

  • Website monitoring status – Public API Status – Health monitor for 26 APIs from popular Web services, including Google Search, Google Maps, Bing, Facebook, Twitter, SalesForce, YouTube, Amazon, eBay and others
  • PG&E Electrical System Outage Map – This map shows the current outages in our 70,000-square-mile service area. To see more details about an outage, including the cause and estimated time of restoration, click on the color-coded icon associated with that outage.
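
The bot Merket describes reduces to a filter-and-repost loop: scan the public timeline for tweets containing the word “love”, then post the matches as the bot's own. A minimal sketch of that logic in Python (`post_tweet` is a hypothetical stand-in for a real Twitter API client, not an actual library call):

```python
def find_love_tweets(public_timeline):
    """Return tweets containing the word 'love' -- the bot's only filter."""
    return [t for t in public_timeline if "love" in t.lower().split()]

def run_bot(public_timeline, post_tweet):
    """Repost each matching tweet verbatim, the way Jason Thorton does."""
    for tweet in find_love_tweets(public_timeline):
        post_tweet(tweet)

# Exercise the loop with a fake timeline and a capturing poster:
timeline = ["I love coffee", "Meetings again", "Love is all you need"]
posted = []
run_bot(timeline, posted.append)
# posted now holds the two tweets that mention "love"
```

The believability trick is that the reposted text is written by real humans; the bot itself never generates a word.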

Bookmarks for December 31st through January 17th

These are my links for December 31st through January 17th:

  • Khan Academy – The Khan Academy is a not-for-profit organization with the mission of providing a high quality education to anyone, anywhere.

    We have 1000+ videos on YouTube covering everything from basic arithmetic and algebra to differential equations, physics, chemistry, biology and finance which have been recorded by Salman Khan.

  • StarCraft AI Competition | Expressive Intelligence Studio – An AI bot warfare competition using a hacked API to run StarCraft, to be held at AIIDE 2010 in October 2010.
    The competition will use StarCraft Brood War 1.16.1. Bots for StarCraft can be developed using the Broodwar API, which provides hooks into StarCraft and enables the development of custom AI for StarCraft. A C++ interface enables developers to query the current state of the game and issue orders to units. An introduction to the Broodwar API is available here. Instructions for building a bot that communicates with a remote process are available here. There is also a Forum. We encourage submission of bots that make use of advanced AI techniques. Some ideas are:
    * Planning
    * Data Mining
    * Machine Learning
    * Case-Based Reasoning
  • Measuring Measures: Learning About Statistical Learning – A "quick start guide" for statistical and machine learning systems, good collection of references.
  • Berkowitz et al: The use of formal methods to map, analyze and interpret hawala and terrorist-related alternative remittance systems (2006) – Berkowitz, Steven D., Woodward, Lloyd H., & Woodward, Caitlin. (2006). Use of formal methods to map, analyze and interpret hawala and terrorist-related alternative remittance systems. Originally intended for publication in updating the 1988 volume, Wellman and Berkowitz (eds.), Social Structures: A Network Approach (Cambridge University Press). Steve died in November 2003; see Barry Wellman’s “Steve Berkowitz: A Network Pioneer has passed away,” in Connections 25(2), 2003. It has not been possible to add the updated references or the quality of graphics that might have been possible if Berkowitz were alive. An early version of the article appeared in the Proceedings of the Session on Combating Terrorist Networks: Current Research in Social Network Analysis for the New War Fighting Environment, 8th International Command and Control Research and Technology Symposium, National Defense University, Washington, D.C., June 17–19, 2003.
  • SSH Tunneling through web filters | s-anand.net – Step by step tutorial on using Putty and an EC2 instance to set up a private web proxy on demand.
  • PyDroid GUI automation toolkit – GitHub – What is Pydroid?

    Pydroid is a simple toolkit for automating and scripting repetitive tasks, especially those involving a GUI, with Python. It includes functions for controlling the mouse and keyboard, finding colors and bitmaps on-screen, as well as displaying cross-platform alerts.
    Why use Pydroid?

    * Testing a GUI application for bugs and edge cases
        o You might think your app is stable, but what happens if you press that button 5000 times?
    * Automating games
        o Writing a script to beat that crappy flash game can be so much more gratifying than spending hours playing it yourself.
    * Freaking out friends and family
        o Well, maybe this isn't really a practical use, but…

  • Time Series Data Library – More data sets – "This is a collection of about 800 time series drawn from many different fields: Agriculture, Chemistry, Crime, Demography, Ecology, Finance, Health, Hydrology, Industry, Labour Market, Macro-Economics, Meteorology, Micro-Economics, Miscellaneous, Physics, Production, Sales, Simulated series, Sport, Transport & Tourism, Tree-rings, Utilities"
  • How informative is Twitter? » SemanticHacker Blog – "We undertook a small study to characterize the different types of messages that can be found on Twitter. We downloaded a sample of tweets over a two-week period using the Twitter streaming API. This resulted in a corpus of 8.9 million messages (“tweets”) posted by 2.6 million unique users. About 2.7 million of these tweets, or 31%, were replies to a tweet posted by another user, while half a million (6%) were retweets. Almost 2 million (22%) of the messages contained a URL."
  • Gremlin – a Turing-complete, graph-based programming language – GitHub – Gremlin is a Turing-complete, graph-based programming language developed in Java 1.6+ for key/value-pair multi-relational graphs known as property graphs. Gremlin makes extensive use of the XPath 1.0 language to support complex graph traversals. This language has applications in the areas of graph query, analysis, and manipulation. Connectors exist for the following data management systems:

    * TinkerGraph in-memory graph
    * Neo4j graph database
    * Sesame 2.0 compliant RDF stores
    * MongoDB document database

    The documentation for Gremlin can be found at this location. Finally, please visit TinkerPop for other software products.
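
Gremlin's traversals walk the vertices and edges of a property graph step by step. The same idea can be sketched in plain Python over an adjacency structure (this illustrates the traversal concept only; it is not Gremlin's XPath-based syntax):

```python
# A tiny property graph: vertices carry properties, edges carry labels.
vertices = {
    1: {"name": "marko"},
    2: {"name": "josh"},
    3: {"name": "lop"},
}
edges = [
    (1, "knows", 2),    # marko knows josh
    (1, "created", 3),  # marko created lop
    (2, "created", 3),  # josh created lop
]

def out(vertex_id, label):
    """Follow outgoing edges with a given label, one traversal step."""
    return [dst for src, lbl, dst in edges if src == vertex_id and lbl == label]

# A two-step traversal: "what did the people marko knows create?"
created_by_friends = [v for friend in out(1, "knows") for v in out(friend, "created")]
names = [vertices[v]["name"] for v in created_by_friends]
```

Chaining `out()` calls is the key idea; graph languages like Gremlin give that chaining a compact path notation.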

  • The C Programming Language: 4.10 – by Kernighan & Ritchie & Lovecraft –

    /* the K&R section 4.10 quicksort, summoned under other names; swap()
       exchanges two elements of mene[], as defined alongside it in K&R */
    void Rlyeh(int mene[], int wgah, int nagl)
    {
        int Ia, fhtagn;

        if (wgah >= nagl)
            return;
        swap(mene, wgah, (wgah + nagl) / 2);
        fhtagn = wgah;
        for (Ia = wgah + 1; Ia <= nagl; Ia++)
            if (mene[Ia] < mene[wgah])
                swap(mene, ++fhtagn, Ia);
        swap(mene, wgah, fhtagn);
        Rlyeh(mene, wgah, fhtagn - 1);
        Rlyeh(mene, fhtagn + 1, nagl);
    } /* PH'NGLUI MGLW'NAFH CTHULHU! */

  • How to convert email addresses into name, age, ethnicity, sexual orientation – This is so Meta – "Save your email list as a CSV file (just comma separate those email addresses). Upload this file to your facebook account as if you wanted to add them as friends. Voila, facebook will give you all the profiles of all those users (in my test, about 80% of my email lists have facebook profiles). Now, click through each profile, and because of the new default facebook settings, which makes all information public, about 95% of the user info is available for you to harvest."
  • Microsoft Security Development Lifecycle (SDL): Tools Repository – A collection of previously internal-only security tools from Microsoft, including anti-xss, fuzz test, fxcop, threat modeling, binscope, now available for free download.
  • Analytics X Prize – Home – Forecast the murder rate in Philadelphia – The Analytics X Prize is an ongoing contest to apply analytics, modeling, and statistics to solve the social problems that affect our cities. It combines the fields of statistics, mathematics, and social science to understand the root causes of dysfunction in our neighborhoods. Understanding these relationships and discovering the most highly correlated variables allows us to deploy our limited resources more effectively and target the variables that will have the greatest positive impact on improvement.
  • PeteSearch: How to find user information from an email address – FindByEmail code released as open-source. You pass it an email address, and it queries 11 different public APIs to discover what information those services have on the user with that email address.
  • Measuring Measures: Beyond PageRank: Learning with Content and Networks – Conclusion: learning based on content and network data is the current state of the art. There is a great paper and talk about personalization in Google News: they use content for this purpose, and then user click streams to provide personalization, i.e. recommend specific articles within each topical cluster. The issue is that content filtering is typically (as we say in research) "way harder." Suppose you have a social graph, a bunch of documents, and you know that some users in the social graph like some documents, and you want to recommend other documents that you think they will like. Using approaches based on networks, you might consider clustering users based on co-visitation (they have co-liked some of the documents). This scales great, and it internationalizes great. If you start extracting features from the documents themselves, then what you build for English may not work as well for the Chinese market. In addition, there is far more data in the text than there is in the social graph.
  • mikemaccana’s python-docx at master – GitHub – MIT-licensed Python library to read/write Microsoft Word docx format files. "The docx module reads and writes Microsoft Office Word 2007 docx files. These are referred to as 'WordML', 'Office Open XML' and 'Open XML' by Microsoft. They can be opened in Microsoft Office 2007, Microsoft Mac Office 2008, OpenOffice.org 2.2, and Apple iWork 08. The module was created when I was looking for a Python support for MS Word .doc files, but could only find various hacks involving COM automation, calling .net or Java, or automating OpenOffice or MS Office."
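
Because docx is "Office Open XML", a .docx file is just a zip archive whose body text lives in word/document.xml. Even without the library you can peek inside with the standard library; a sketch (the one-paragraph document below is hand-built for illustration, real files carry more parts):

```python
import io
import re
import zipfile

# Build a minimal docx-like archive in memory.
body_xml = (
    '<?xml version="1.0"?>'
    '<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
    '<w:body><w:p><w:r><w:t>Hello, Word</w:t></w:r></w:p></w:body></w:document>'
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", body_xml)

# Read it back and pull the text runs (<w:t> elements) with a crude regex;
# a real reader like python-docx walks the XML tree properly instead.
with zipfile.ZipFile(buf) as z:
    xml = z.read("word/document.xml").decode("utf-8")
texts = re.findall(r"<w:t>(.*?)</w:t>", xml)
```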

Bookmarks for June 6th through June 8th

These are my links for June 6th through June 8th:

  • Latin motto generator: make your own catchy slogans! – Create your own life mottos and slogans in Latin! (Learning Latin not required, some vague idea for a desired motto a plus)
  • A Map Of Social (Network) Dominance – Using Alexa and Google Trend data, Cosenza color-coded the map based on which social network is the most popular in each country. All of the light green countries belong to Facebook. But there are still pockets of resistance in Russia (where V Kontakte rules), China (QQ), Brazil and India (Orkut), Central America, Peru, Mongolia, and Thailand (hi5), South Korea (Cyworld), Japan (Mixi), the Middle East (Maktoob), and the Philippines (Friendster).
  • Microsoft Releases Bing API – With No Usage Quotas – Updated search API, with no quotas and some improvements.
    * Developers can now request data in JSON and XML formats. The SOAP interface that the Live Search API required has also been retained.
    * Requested data can be narrowed to one of the following source types: web, news, images, phonebook, spell-checker, related queries, and Encarta instant answer.
    * It is now possible to send requests in OpenSearch-compliant RSS format for web, news, image and phonebook queries.
    * Client applications will be able to combine any number of different data source types into a single request with a single query string.
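
Requests under the 2.0 API are plain GETs with the response format in the path and the source types combined in one parameter. A sketch of building such a request URL in Python (the api.bing.net/json.aspx endpoint and parameter names are recalled from the 2.0 documentation and should be treated as assumptions):

```python
from urllib.parse import urlencode

def bing_request_url(app_id, query, sources=("Web",)):
    """Build a Bing API 2.0 JSON request; several source types can share one call."""
    params = {
        "AppId": app_id,               # key issued by the Bing developer portal
        "Query": query,
        "Sources": "+".join(sources),  # e.g. "Web+News+Image"
        "Version": "2.0",
    }
    return "http://api.bing.net/json.aspx?" + urlencode(params)

url = bing_request_url("YOUR_APP_ID", "amazon ec2", sources=("Web", "News"))
```

Swapping `json.aspx` for `xml.aspx` would request the XML format instead, per the format-in-path convention.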
  • Twitter Limits Getting Ridiculous! « Verwon’s Blog – Anecdotal reports of Twitter users running into problems with rate limiting, either API or max posts/tweets/follows/directs.
  • flot – Google Code – Flot is a pure Javascript plotting library for jQuery. It produces graphical plots of arbitrary datasets on-the-fly client-side. The focus is on simple usage (all settings are optional), attractive looks and interactive features like zooming and mouse tracking. The plugin is known to work with Internet Explorer 6/7/8, Firefox 2.x+, Safari 3.0+, Opera 9.5+ and Konqueror 4.x+. If you find a problem, please report it. Drawing is done with the canvas tag introduced by Safari and now available on all major browsers, except Internet Explorer where the excanvas Javascript emulation helper is used.

Bookmarks for June 1st through June 2nd

These are my links for June 1st through June 2nd:

  • jqPlot – Pure Javascript Plotting – jqPlot is a plotting plugin for the jQuery Javascript framework. jqPlot produces beautiful line and bar charts with many features including: Numerous chart style options. Date axes with customizable formatting. Rotated axis text. Automatic trend line computation. Tooltips and data point highlighting. Sensible defaults for ease of use.
  • New Twitter Research: Men Follow Men and Nobody Tweets – Conversation Starter – HarvardBusiness.org – "Although men and women follow a similar number of Twitter users, men have 15% more followers than women. Men also have more reciprocated relationships, in which two users follow each other. This "follower split" suggests that women are driven less by followers than men, or have more stringent thresholds for reciprocating relationships. This is intriguing, especially given that females hold a slight majority on Twitter: we found that men comprise 45% of Twitter users, while women represent 55%."
  • Shirky: Power Laws, Weblogs, and Inequality – 2003 article on popularity / traffic on blogs, which was then the latest emerging social media format. "Once a power law distribution exists, it can take on a certain amount of homeostasis, the tendency of a system to retain its form even against external pressures. Is the weblog world such a system? Are there people who are as talented or deserving as the current stars, but who are not getting anything like the traffic? Doubtless. Will this problem get worse in the future? Yes. "
  • well-formed.eigenfactor.org : Visualizing information flow in science – Some nice visualization ideas using hierarchical clustering to explore patterns in citation networks.
  • Bing API, Version 2.0 – Updated API documentation for Microsoft Bing (formerly Live Search) web services.

Bookmarks for May 29th from 05:17 to 12:45

These are my links for May 29th from 05:17 to 12:45:

Bookmarks for May 20th from 19:50 to 22:03

These are my links for May 20th from 19:50 to 22:03:

Bookmarks for May 13th from 06:26 to 22:36

These are my links for May 13th from 06:26 to 22:36:

Bookmarks for May 4th through May 5th

These are my links for May 4th through May 5th:

Bookmarks for April 9th through April 10th

These are my links for April 9th through April 10th:

Bookmarks for March 9th through March 12th

These are my links for March 9th through March 12th:

Bookmarks for February 25th through February 26th

These are my links for February 25th through February 26th:

Amazon aStore – custom storefronts for Amazon affiliates

Amidst the speculation about the Amazon Unbox video download service, Amazon has quietly launched aStores, a service providing custom online storefronts for Amazon affiliates. (You may not be able to view the link unless you’re an Amazon affiliate.)

aStore by Amazon is a new Associates product that gives you the power to create a professional online store, in minutes and without the need for programming skills, that can be embedded within or linked to from your website.

Here’s a link to their demo store.

You get to pick up to nine “featured items” to put on the home page of the store, choose product categories, and add reviews and editorial content. The shopping cart and fulfillment are handled by Amazon, with standard referral fees going back to the affiliate. There’s a browser based interface for building a store on the Amazon Affiliates site. The resulting store can be hosted by Amazon or on your own site.

This sort of functionality has been available for a while to those willing and able to customize their sites using Amazon’s web services API, but the aStores program will make custom stores broadly accessible to the entire Amazon affiliate base (just in time for the holiday shopping season). I suspect we’ll see an explosion of niche shopping sites in short order; it looks pretty easy to set one up.

Deconstructing search at Alexa

Wow! Although the basic idea is straightforward, crawling and indexing for a general purpose search engine requires huge resources. Web crawlers are effectively downloading copies of the entire internet over and over, turning them over to indexing applications which scan the contents for structure and meaning.

The sheer scale of the task is a substantial barrier to entry for anyone wanting to develop a new indexing or retrieval application. Some projects have narrowed the problem domain, which can reduce the problem scope to a manageable level, but this announcement from Alexa looks like it may offer an exciting alternative for building new search applications.

John Batelle writes:

Alexa, an Amazon-owned search company started by Bruce Gilliat and Brewster Kahle (and the spider that fuels the Internet Archive), is going to offer its index up to anyone who wants it (details are not up yet, but soon). Alexa has about 5 billion documents in its index – about 100 terabytes of data.

Anyone can also use Alexa’s servers and processing power to mine its index to discover things – perhaps, to outsource the crawl needed to create a vertical search engine, for example. Or maybe to build new kinds of search engines entirely, or …well, whatever creative folks can dream up. And then, anyone can run that new service on Alexa’s (er…Amazon’s) platform, should they wish.

The service will be priced on a usage basis: $1 per CPU hour, $1 per GB stored or uploaded, $1 per 50GB data processed.

There’s no announcement posted on the Alexa or Amazon sites yet, it’s apparently due out overnight. (Updated 12-13-2005 00:25 – the site is up now)

Not every search and retrieval application will necessarily fit the way Alexa has built its crawler and indexing infrastructure, or any other search engine platform, for that matter. But opening up access to more of the platform should make it possible for a lot of new ideas to be tried out quickly without having to build yet another crawler for each project. Up to this point, many search ideas couldn’t be evaluated without working at one of the major search engines. I suspect most development teams would prefer access to Google’s crawl and index data, but I’m certainly looking forward to seeing what’s available at Alexa when they get their documentation online in the morning.

More from Om Malik, TechCrunch, ReadWrite Web

Amazon Mechanical Turk – Putting Humans in the Loop

I came across a cryptic link to mturk.com on supr.c.ilio.us, asking “Isn’t that how the Matrix came to be?”

Amazon Mechanical Turk provides a web services API for computers to integrate “artificial, artificial intelligence” directly into their processing by making requests of humans. Developers use the Amazon Mechanical Turk web services API to submit tasks to the Amazon Mechanical Turk web site, approve completed tasks, and incorporate the answers into their software applications. To the application, the transaction looks very much like any remote procedure call: the application sends the request, and the service returns the results. In reality, a network of humans fuels this artificial, artificial intelligence by coming to the web site, searching for and completing tasks, and receiving payment for their work.

All software developers need to do is write normal code. The pseudo code below illustrates how simple this can be.

 read (photo);
 photoContainsHuman = callMechanicalTurk(photo);
 if (photoContainsHuman == TRUE) {
   acceptPhoto;
 }
 else {
   rejectPhoto;
 }
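
The same flow works in any language that can call the web service. Here is a Python version of the pseudocode with the human round trip stubbed out (`ask_worker` is a hypothetical stand-in for submitting a task and collecting a worker's answer, not part of the real API):

```python
def moderate_photos(photos, ask_worker):
    """Accept or reject photos based on a human answer, mirroring the pseudocode."""
    accepted, rejected = [], []
    for photo in photos:
        # ask_worker stands in for the round trip through Mechanical Turk:
        # submit the task, wait for a worker, get back True/False.
        if ask_worker(photo):
            accepted.append(photo)
        else:
            rejected.append(photo)
    return accepted, rejected

# Simulated worker for testing: "any filename mentioning 'person' has a human".
accepted, rejected = moderate_photos(
    ["person_on_beach.jpg", "empty_street.jpg"],
    lambda photo: "person" in photo,
)
```

From the calling application's point of view, the human is just a slow, occasionally fallible remote procedure call.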

Given the source of the link, I was a little skeptical at first read, but it appears to be a legitimate beta project that just launched yesterday at Amazon. At least, the documentation links point back into Amazon Web Services, and at least one person seems to know someone there.

This is an interesting idea that should find some useful applications. Spammers have supposedly been doing something like this to defeat the image-based Turing tests used to screen comment posting systems, offering access to porn in exchange for solving the puzzles, and there are other anecdotes of using low cost offshore labor for similar tasks. Having a simpler web service interface for finding a human key operator somewhere will probably allow smaller and more experimental applications to emerge.

Update 11-04-2005 08:09 PST – Slashdot, TechDirt, Google Blogoscoped on Mechanical Turk, pointer to BoingBoing on porn puzzles and spam, captcha.net

Whizzy update to Yahoo Maps

Yahoo has a major update to Yahoo Maps this evening, bringing it back on par with Google Maps, and with a full set of web APIs for building mapping applications.

From the Yahoo Maps API overview:

Building Block Components

Several Yahoo! APIs help you create powerful and useful Yahoo! Maps mashups. Use these together with the Yahoo! Maps APIs to enhance the user experience.

  • Geocoding API – Pass in location data by address and receive geocoded (encoded with latitude-longitude) responses.
  • Map Image API – Stitch map images together to build your own maps for usage in custom applications, including mobile and offline use.
  • Traffic APIs – Build applications that take dynamic traffic report data to help you plan optimal routes and keep on top of your commute using either our REST API or Dynamic RSS Feed.
  • Local Search APIs – Query against the Yahoo! Local service, which now returns longitude-latitude with every search result for easy plotting on a map. Also new is the inclusion of ratings from Yahoo! users for each establishment to give added context.
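
The Geocoding API above is a plain REST GET that returns XML containing latitude and longitude. A sketch of constructing the request in Python (the host and parameter names are recalled from Yahoo's documentation of the era and are assumptions to check against the API overview):

```python
from urllib.parse import urlencode

def geocode_url(app_id, street, city, state):
    """Build a Yahoo! Maps geocoding request; the XML response carries lat/long."""
    params = {
        "appid": app_id,   # application ID from the Yahoo! developer program
        "street": street,
        "city": city,
        "state": state,
    }
    return "http://api.local.yahoo.com/MapsService/V1/geocode?" + urlencode(params)

url = geocode_url("YOUR_APP_ID", "701 First Ave", "Sunnyvale", "CA")
```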

They also spell out their free service restrictions:

Rate Limit

The Simple API that displays your map data on the Yahoo! Maps site has no rate limit, though it is limited to non-commercial use. The Yahoo! Maps Embeddable APIs (the Flash and AJAX APIs) are limited to 50,000 queries per IP per day and to non-commercial use. See the specific terms attached to each API for that API’s rate limit. See information on rate limiting.

This restriction is more interesting:

Sensor-Based Location Limit

You may use location data derived from GPS or other location sensing devices in connection with the Yahoo! Maps APIs, provided that such location data is not based on real-time (i.e., less than 6 hours) GPS or any other real-time location sensing device, the GPS or location sensing device that derives the location data cannot automatically (i.e. without human intervention) provide the end user’s location, and any such location data must be uploaded by an end-user (and not you) to the Yahoo! Maps APIs.

So uploading a track log after running or hiking is OK, but doing a live GPS ping from your notebook, PDA, or cell phone to show where you are isn’t? I think this is intended to exclude traffic and fleet tracking applications, but it seems to include geocoded blog maps by accident. I don’t think they’d actually mind that.

There are several sample applications to look at. The events map seems nicely done, pulling up locations, images, and events for venues within the search window.

To display appropriate images for events, local event output was sent into the Term Extraction API, then the term vector was given to the Image Search API. The results are often incredibly accurate.

I’ve been meaning to take a look at the Term Extraction service, it looks like it might be a handy tool for building some quick-and-dirty personal meme engines or other filters for wrangling down my ever growing list of feeds.

Announcement at Yahoo Search Blog

More from TechCrunch, Jeremy Zawodny, Chad Dickerson

Alexa Web Information Service

Alexa Web Information Service has been in beta for a year and is officially launched this week.

The Alexa Web Information Service provides the following operations:
URL Information
Examples of information that can be accessed are site popularity, related sites, detailed usage/traffic stats, supported character-set/locales, and site contact information. This is most of the data that can be found on the Alexa Web site and in the Alexa toolbar, plus additional information that is being made available for the first time with this release.
Web Search
The Web Search operation is a brand new search index based on Alexa’s extensive Web crawl. The search query format is similar to a Google query and allows up to 1,000 results per page.
Browse Category
This service returns Web pages and sub-categories within a specified category. The returned URLs are filtered through the Alexa traffic data and then ordered by popularity.
Web Map
The Web Map operation gives developers access to links-in and links-out information for all pages in the crawl. For example, given a URL as an input, the service returns a list of all links-in and links-out to or from that URL. This Web map information can be used as inputs to search-engine ranking algorithms such as PageRank and HITS, and for Internet research.
Crawl Meta Data
The Crawl Meta Data operation gives developers access to metadata collected in Alexa’s Web crawl. For example, a developer can get page size, checksum, total links, link text, images, frames, and any JavaScript-embedded URLs for any page in the crawl.
Pricing
First 10,000 requests per month are free; additional requests are $0.00015 per request ($0.15 per 1,000 requests).
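
The Web Map's links-in/links-out lists are exactly the input that link-analysis rankers consume. A minimal PageRank power iteration over such a link map might look like this (a sketch of the classic algorithm, not Alexa's or Google's implementation):

```python
def pagerank(links_out, damping=0.85, iterations=50):
    """links_out: {page: [pages it links to]}. Returns a score per page."""
    pages = list(links_out)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links_out.items():
            if not outs:  # dangling page: spread its rank evenly everywhere
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:  # otherwise, split this page's rank among its out-links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Three pages; "a" gets links from both "b" and "c", so it ranks highest.
ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

HITS would consume the same links-in/links-out data, but compute separate hub and authority scores instead of a single rank.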

via Paul Kedrosky