Tag: ai

Loebner 2010

When the scores are tallied, Suzette ties with Rollo Carpenter’s Cleverbot for 2nd–3rd place. Yet, it turns out, the 3rd-round judge got the human subject from hell. Poetic justice! The human was all over the place: confusing, vague. The judge voted the irritated/angry/bored Suzette as the human instead, an instant win, since no other program swayed the judges.

heh, the first domino in the Turing test falls.

The Future Of Astronomy

Fundamental changes are taking place in the way we do astronomy. In 20 years’ time, it is likely that most astronomers will never go near a cutting-edge telescope, which will be much more efficiently operated in service mode. They will rarely analyse data, since all the leading-edge telescopes will have pipeline processors. And rather than competing to observe a particularly interesting object, astronomers will more commonly group together in large consortia to observe massive chunks of the sky in carefully designed surveys, generating petabytes of data daily.
We can imagine that astronomical productivity will be higher than at any previous time. PhD students will mine enormous survey databases using sophisticated tools, cross-correlating data at different wavelengths over vast areas, and producing front-line astronomy results within months of starting their PhD. The expertise that now goes into planning an observation will instead be devoted to planning a foray into the databases. In effect, people will plan observations to use the Virtual Observatory.

New developments in AI

What to expect in the near term is less clear. While strong AI still lies safely beyond the Maes-Garreau horizon (a vanishing point, perpetually 50 years ahead), a host of important new developments in weak AI are poised to be commercialized in the next few years. But because these developments are a paradoxical mix of intelligence and stupidity, they defy simple forecasts and resist hype. They are not unambiguously better, cheaper, or faster. They are something new.

What are the implications of a car that adjusts its speed to avoid collisions … but occasionally mistakes the guardrail along a sharp curve for an oncoming obstacle and slams on the brakes? What will it mean when our computers know everything — every single fact, the entirety of human knowledge — but can only reason at the level of a cockroach?

autonomous driving and weak omniscience are closer than you think. a joy to read.

IBM Watson

This will be fun to watch. And to witness the drama and handwringing of the dilettante press.

For the last 3 years, IBM scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.

2014-05-05: Watson Debater

In a canned demo, Kelly chose a sample debate topic: “The sale of violent video games to minors should be banned.” The Debater was tasked with presenting pros and cons for a debate on this question. Speaking in nearly perfect English, Watson/The Debater replied: “Scanned 4 million Wikipedia articles, returning 10 most relevant articles. Scanned all 3000 sentences in top 10 articles. Detected sentences which contain candidate claims. Identified borders of candidate claims. Assessed pro and con polarity of candidate claims. Constructed demo speech with top claim predictions. Ready to deliver.” It then presented 3 relevant pros and cons.
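The steps the Debater rattled off amount to a retrieve–extract–classify pipeline. A minimal sketch of that shape, with invented articles, keyword heuristics, and scoring rules purely for illustration (this is not IBM’s method):

```python
# Toy Debater-style pipeline: retrieve relevant articles, pull out
# candidate claim sentences, and assign pro/con polarity.
# All data and keyword heuristics below are invented for illustration.

TOPIC = "the sale of violent video games to minors should be banned"

ARTICLES = {
    "Video game controversies": [
        "Studies suggest violent games increase aggressive behavior in minors.",
        "Other research finds no causal link between games and violence.",
    ],
    "Freedom of speech": [
        "Courts have ruled that games are protected expression.",
    ],
}

def retrieve(topic, articles, k=2):
    """Rank articles by word overlap with the topic (stand-in for real search)."""
    topic_words = set(topic.lower().split())
    ranked = sorted(articles,
                    key=lambda t: -len(topic_words & set(t.lower().split())))
    return ranked[:k]

def detect_claims(sentences):
    """Keep sentences that look like claims (hedging/finding verbs)."""
    markers = ("suggest", "finds", "ruled", "shows")
    return [s for s in sentences if any(m in s.lower() for m in markers)]

def polarity(claim):
    """Crude pro/con call: does the claim support or oppose a ban?"""
    pro = ("increase", "harm", "aggressive")
    con = ("no causal link", "protected", "freedom")
    score = sum(w in claim.lower() for w in pro) - sum(w in claim.lower() for w in con)
    return "pro" if score > 0 else "con"

top = retrieve(TOPIC, ARTICLES)
claims = [c for title in top for c in detect_claims(ARTICLES[title])]
speech = [(polarity(c), c) for c in claims]
for side, claim in speech:
    print(f"[{side}] {claim}")
```

The hard parts Watson actually solved — identifying claim borders and polarity in open-domain text — are exactly the steps this sketch fakes with keyword lists.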

2014-10-07: If the process of science itself can be changed from the current miasma of writing 19th-century-style papers, unpublished negative results, etc., towards a process of discovery where all knowledge is like Wikipedia and AI infers new things, that’d be quite something.

Scientists demonstrated a possible new path for generating scientific questions that may be helpful in the long-term development of new, effective treatments for disease. In a matter of weeks, biologists and data scientists using the Baylor Knowledge Integration Toolkit (KnIT), based on Watson technology, accurately identified proteins that modify p53, an important protein related to many cancers, which can eventually lead to more effective drugs and other treatments. In a feat that would have taken researchers years to accomplish without Watson’s cognitive capabilities, Watson analyzed 70,000 scientific articles on p53 to predict proteins that turn p53’s activity on or off. This automated analysis led the Baylor cancer researchers to identify 6 potential proteins to target for new research. These results are notable considering that, over the last 30 years, scientists have averaged 1 similar target-protein discovery per year.
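At its crudest, this kind of literature mining reduces to ranking candidate proteins by how strongly they associate with the target in published text. A toy sketch, with invented abstracts and kinase names, using plain co-occurrence counting as a stand-in for KnIT’s much richer inference:

```python
# Toy literature-mining sketch: rank candidate proteins by how often they
# co-occur with "p53" in article abstracts. The abstracts and protein
# names are invented; KnIT's actual analysis is far more sophisticated.
from collections import Counter

ABSTRACTS = [
    "We show that KINASE1 phosphorylates p53 in response to DNA damage.",
    "KINASE2 expression correlates with tumor growth independent of p53.",
    "KINASE1 and KINASE3 both modify p53 stability in vitro.",
    "KINASE3 activity is unrelated to apoptosis pathways.",
]
CANDIDATES = ["KINASE1", "KINASE2", "KINASE3"]

def cooccurrence_scores(abstracts, candidates, target="p53"):
    """Count abstracts that mention both the target and each candidate."""
    counts = Counter()
    for text in abstracts:
        if target in text:
            for c in candidates:
                if c in text:
                    counts[c] += 1
    return counts.most_common()

ranking = cooccurrence_scores(ABSTRACTS, CANDIDATES)
print(ranking)  # KINASE1 co-occurs with p53 twice, the others once each
```

The payoff of doing this over 70,000 real articles is scale: no human reads that corpus, so even a blunt statistical signal can surface candidates worth wet-lab follow-up.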

2015-07-05: Chef Watson

Enter blueberries as the essential ingredient, click “dessert”, and Watson recommends similar ingredients based on food-pairing chemistry. After you’ve narrowed your preferences down, Watson recommends brand-new cooking ideas based on the recipes it has studied. And users can go deeper into the app too, modifying Watson’s modifications. Its website calls the human-computer partnership a tool to “amplify human creativity.”