CNN Europe blazes broad trail

CNN Europe is about to launch an interactive service that promises to live up to the convergence hype. The new broadband service is currently being demonstrated to broadband operators throughout Europe, with plans for an official launch later this fall.

With the help of speech-recognition software from Dremedia, CNN Europe will be able to automatically match Web text and video content to its live broadcast feed. Broadband users will be able to watch the video on one part of the screen while drilling into related Web content on another. For example, a live news report on unrest in Israel can bring up the latest Web content matching the names and places mentioned in the video stream.

According to a CNN Europe spokesperson, there are no current plans for deployment over digital set-top boxes to TV viewers, but the Dremedia software is capable of that function as well. For now, the service will be available to broadband operators who wish to differentiate their product offering from that of other providers. Typically, the only tangible difference from one broadband operator to another is price.

Dremedia's software uses speech recognition to create keywords that can be used to match related Web and video-on-demand (VOD) content. It then uses contextual modeling to home in on the proper content.
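Neither company has published the details of that matching step, so what follows is only a rough Python sketch of the two-stage idea, with an invented transcript, hand-tagged candidate items and a bare keyword-overlap score standing in for Dremedia's contextual modeling.

```python
# Toy sketch only: the transcript, stopword list and candidate items are invented.
STOPWORDS = {"the", "a", "an", "in", "on", "of", "to", "as", "and", "is", "are"}

def extract_keywords(transcript: str) -> set[str]:
    """Pull candidate keywords out of a speech-recognition transcript."""
    words = (w.strip(".,!?").lower() for w in transcript.split())
    return {w for w in words if w and w not in STOPWORDS and len(w) > 3}

def rank_content(keywords: set[str], candidates: dict[str, set[str]]) -> list[tuple[str, int]]:
    """Rank Web/VOD items by how many transcript keywords they share."""
    scored = [(title, len(keywords & tags)) for title, tags in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

transcript = "Unrest continues in Jerusalem as negotiators prepare to meet again"
candidates = {
    "Background: the peace process": {"jerusalem", "negotiators", "ceasefire"},
    "European markets close higher": {"stocks", "markets", "trading"},
}
print(rank_content(extract_keywords(transcript), candidates))
```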

"People talk about interactive TV all the time, but we found that executives weren't that interested in investing in it because there weren't any editorial models that were compelling," says Dremedia President Matthew Karas. "This allows for content-sensitive linking to live and scheduled programs."

The promise of the Internet has been that it can allow a more personalized experience. The challenge, however, has been automating that process for a mass audience. If only 2% of broadband users want more information on a story, the process of finding and delivering that additional information requires as much manpower as if 75% want more information. By automating the process of finding and delivering that content, Dremedia's technology moves CNN Europe closer to its goal of personalization.

Dremedia's technology was born out of work done at Cambridge University and uses statistical methods for name recognition. It uses technology developed by Autonomy (based on pattern-matching and Bayesian probabilistic techniques) to form a conceptual understanding of text in any format and to automate certain tasks.
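The article does not spell out Autonomy's math, but the "Bayesian probabilistic" idea can be illustrated, very loosely, with a toy naive-Bayes scorer in Python; the topics, priors and word probabilities below are all invented for illustration.

```python
import math

# Toy naive-Bayes illustration; topics, priors and word probabilities are invented.
PRIORS = {"middle east": 0.5, "business": 0.5}
WORD_PROBS = {
    "middle east": {"israel": 0.08, "unrest": 0.05, "talks": 0.04},
    "business":    {"markets": 0.07, "stocks": 0.06, "talks": 0.02},
}
UNSEEN = 0.001  # crude smoothing for words a topic has never seen

def topic_log_scores(words: list[str]) -> dict[str, float]:
    """Score each topic as log P(topic) plus the sum of log P(word | topic)."""
    return {
        topic: math.log(PRIORS[topic])
        + sum(math.log(WORD_PROBS[topic].get(w, UNSEEN)) for w in words)
        for topic in PRIORS
    }

print(topic_log_scores(["israel", "unrest", "talks"]))  # "middle east" wins
```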

The system works by ingesting the video into a server running the Autonomy software. The feed is delayed by 45 seconds so the software can pick out keywords and phrases in the audio portion of the broadcast and find relevant matches among Internet content and on-demand video clips stored on another server. The matched content is then made available for the viewer to watch and interact with.
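As a rough illustration of that delayed pipeline, here is a hedged Python sketch; the segment format and the transcribe, match and publish hooks are assumptions, with only the 45-second hold taken from the description above.

```python
import time
from collections import deque

DELAY_SECONDS = 45  # the broadcast delay cited by CNN Europe

def process_feed(segments, transcribe, find_matches, publish):
    """Hold each ingested segment about 45 s so matching finishes before air.

    `segments` yields (capture_time, audio_chunk); `transcribe`, `find_matches`
    and `publish` stand in for the speech-recognition, content-matching and
    delivery steps described in the article.
    """
    pending = deque()
    for captured_at, audio in segments:
        text = transcribe(audio)          # speech recognition on the audio track
        links = find_matches(text)        # related Web pages and on-demand clips
        pending.append((captured_at, audio, links))

        # Release anything that has now been held for the full delay window.
        while pending and time.time() - pending[0][0] >= DELAY_SECONDS:
            publish(*pending.popleft())   # deliver video plus matched links
```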

"Everyone has their niche interest," says Karas. "If you're watching a standard news program, 90 seconds on a story on Nigeria may do it for most of the audience. But there will be others that will want more information."

That could include encyclopedia-type information, recent text links of related news and even video clips from the previous two weeks.

"Those links can be from anywhere on CNN.com, but there are a few rules about dates so it isn't outdated," Karas says. On-demand video clips will also be part of the offering, making the broadband experience truly nonlinear. Karas believes that, eventually, that nonlinearity will be delivered to cable viewers.

"CNN Europe wants to create new, compelling ways of delivering the news without having to rebuild the newsroom and hire new people," he says. "So they're actually repurposing three things they already have: the standard TV output, the last two weeks of on-demand video clips, and any Web page."

According to Karas, a TV network can use the Dremedia software in two ways. In the first, a network that wants to control the content combines the text and video at its facility and sends out two streams: one carrying XML links to URLs, the other a compressed video stream. The other way is to deliver the equipment to the cable operator or broadband provider and allow the distributor to combine the content. The second method makes it easier for providers to brand the content to their liking. "It also might be easier for the broadband operator to pull down a broadcast-quality stream from satellite and then encode it themselves," adds Karas.
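The article says only that the first option pairs a compressed video stream with "XML links to URLs"; the element and attribute names in this Python sketch are invented, and the URLs are placeholders, but a link manifest of that general shape could be generated like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical manifest format: the element and attribute names are invented,
# not taken from CNN Europe or Dremedia; the URLs are placeholders.
def build_link_manifest(segment_id: str, links: list[dict]) -> str:
    """Wrap a list of matched URLs in a small XML document for one segment."""
    root = ET.Element("segment", id=segment_id)
    for link in links:
        ET.SubElement(root, "link", href=link["url"], label=link["label"])
    return ET.tostring(root, encoding="unicode")

print(build_link_manifest("news-0427", [
    {"url": "http://example.com/background-story", "label": "Related coverage"},
    {"url": "http://example.com/vod/clip123", "label": "On-demand clip"},
]))
```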

The software, which costs $200,000, requires one server ingest box for text and another ingest box to index the VOD content. Karas says the servers are connected by HTTP over standard networks.

"The speech-recognition part transcribes the broadcast output in real time," Karas explains, "and the text created in that way is conceptually matched against any available Web and VOD content.

"We don't like the term keyword," he adds. "That's because our engine is capable of noticing that two documents are about the same thing, even when they don't have words in common."

As an example, he notes that a document with the words paleontology and brontosaurus might be identified as being conceptually similar to one containing fossil and dinosaur. "This is highly effective with larger archives," he says, "especially of news, which consists largely of names and places."

The system, he explains, has an internal representation of similarity, referred to as a concept: a probability table indicating that a certain list of words commonly appears in similar contexts, so documents containing several words from the same list are likely to be about the same subject.
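Karas's description of a concept as a shared word list can be sketched, very roughly, in Python; the concepts, their word lists and the documents below are invented, and a bare proportion stands in for the probability weighting he describes.

```python
# Toy illustration of the word-list idea described above; concepts, vocabularies
# and documents are invented, and a simple proportion replaces the probability table.
CONCEPTS = {
    "prehistoric life": {"paleontology", "brontosaurus", "fossil", "dinosaur", "excavation"},
    "middle east":      {"israel", "jerusalem", "negotiators", "ceasefire", "unrest"},
}

def concept_scores(document: str) -> dict[str, float]:
    """Score a document against each concept by the share of concept words it contains."""
    words = set(document.lower().split())
    return {name: len(words & vocab) / len(vocab) for name, vocab in CONCEPTS.items()}

doc_a = "paleontology team describes new brontosaurus find"
doc_b = "fossil hunters unearth complete dinosaur skeleton"
print(concept_scores(doc_a))  # both documents score on "prehistoric life" ...
print(concept_scores(doc_b))  # ... even though they share no words with each other
```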