Biographical Information
Dr. Deborah Dahl
Principal
Conversational Technologies

Dr. Deborah Dahl is the Principal at Conversational Technologies, where she helps clients develop speech product strategies and design innovative applications of speech technology. She chairs the W3C Multimodal Interaction Working Group and is a frequent speaker at industry conferences. She received Speech Luminary Awards from Speech Technology Magazine in 2012 and 2014.

Conference Sessions By Dr. Deborah Dahl
SpeechTEK 2014
Sunday, August 17, 2014
1:30 p.m. - 4:30 p.m. STKU-2: Natural Language Processing


Wednesday, August 20, 2014
10:45 a.m. - 11:30 a.m. D301: Develop Multimodal Applications With Free and Open Source Tools

SpeechTEK 2013
Sunday, August 18, 2013
1:30 p.m. - 4:30 p.m. STKU-3: Introduction to Natural Language Processing

Thursday, August 22, 2013
1:00 p.m. - 4:00 p.m. STKU-9: Using W3C Standard Languages to Develop Multimodal Applications

Monday, August 19, 2013
8:00 a.m. - 8:45 a.m. SD104: Industry Standards
10:15 a.m. - 11:00 a.m. D101: Powering the Connected Home With Voice

SpeechTEK 2012
Wednesday, August 15, 2012
1:45 p.m. - 2:30 p.m. D303: Understanding Natural Language Processing

Monday, August 13, 2012
3:15 p.m. - 4:00 p.m. D105: Advanced Speech Technologies

SpeechTEK 2011
Thursday, August 11, 2011
9:00 a.m. - 12:00 p.m. STKU-5: Introduction to Natural Language
1:30 p.m. - 4:30 p.m. STKU-6: Designing and Building Multimodal Applications

Tuesday, August 9, 2011
8:00 a.m. - 8:45 a.m. SD202: Opportunities for Speech in the Developing World

SpeechTEK Europe 2011
Wednesday, 25 May 2011
2:45 p.m. - 3:30 p.m. B104: Speech organisations speak out


SpeechTEK 2010
Monday, August 2, 2010
1:00 p.m. - 2:45 p.m. LAB 2: Natural Language Processing & Machine Translation

Tuesday, August 3, 2010
2:45 p.m. - 3:30 p.m. D204: Multimodal Systems for Seniors

SpeechTEK 2009
Thursday, August 27, 2009
9:00 a.m. - 12:00 p.m. STKU-4: Natural Language Processing

Wednesday, August 26, 2009
1:45 p.m. - 2:30 p.m. C303: New Development Languages

SpeechTEK 2008
Thursday, August 21, 2008
9:00 a.m. - 12:00 p.m. STKU-5: Natural Language

Wednesday, August 20, 2008
1:45 p.m. - 2:30 p.m. C303: Multimodal Standards and Applications

Tuesday, August 19, 2008
8:00 a.m. - 8:50 a.m. Discussion 5: W3C Multimodal Working Group

SpeechTEK 2007
Wednesday, August 22, 2007
10:30 a.m. - 11:30 a.m. C301: Natural Language Processing
2:00 p.m. - 3:00 p.m. C303: Using Multimodal Technology to Improve Language Skills
3:15 p.m. - 4:15 p.m. C304: Video & Speech

Thursday, August 23, 2007: SpeechTEK University
9:00 a.m. - 12:00 p.m. Natural Language Processing

Articles By Dr. Deborah Dahl

Media Standards for the Web: WebRTC and WebAudio

WebRTC and WebAudio add speed and simplicity.
Posted 10 Nov 2014 - Winter 2014 Issue - by Deborah Dahl

Remembering Scott McGlashan

May 4, 1963 - February 12, 2014
Posted 30 Apr 2014 - Summer 2014 Issue - by Deborah Dahl

EMMA Success Leads to New Challenges

GPS, data analysis top innovations list.
Posted 15 Nov 2013 - Winter 2013 Issue - by Deborah Dahl

Talking to the World

Speech applications don't exist in a vacuum.
Posted 01 May 2013 - Summer 2013 Issue - by Deborah Dahl

Discovering Multimodal Components

Wider use of apps offers broad potential
Posted 10 Sep 2012 - September/October 2012 Issue - by Deborah Dahl

New Directions in Natural Language Understanding

Watson and Siri are only the beginning.
Posted 10 Jul 2012 - July/August 2012 Issue - by Deborah Dahl

Enter EMMA 1.1

Getting more out of multimodal inputs
Posted 01 May 2012 - May/June 2012 Issue - by Deborah Dahl

Speaking to Web Pages

New speech integration ideas show promise for 2012.
Posted 01 Jan 2012 - January/February 2012 Issue - by Deborah Dahl

Giving a Voice to the Developing World

Standards, mobile phones help bring the Web to resource-constrained areas
Posted 01 Sep 2011 - September/October 2011 Issue - by Deborah Dahl

Making Modalities Play Nicely Together

The Multimodal Architecture and Interfaces specification opens the door to collaborative multimodal apps
Posted 01 May 2011 - May/June 2011 Issue - by Deborah Dahl

W3C Launches HTML Speech Incubator Group

The ultimate goal is to develop tools to better integrate speech with the Web.
Posted 05 Jan 2011 - January/February 2011 Issue - by Deborah Dahl

Hands-On: An Interactive Display

Lab sessions gave companies the opportunity to showcase their latest products.

Standards Need a New Pair of Eyes

Now is the time to revisit and update some of the early voice standards.
Posted 01 Sep 2010 - September/October 2010 Issue - by Deborah Dahl

Accessibility in Voice and Multimodal Applications

Multimodal interfaces can make or break the user experience.
Posted 03 May 2010 - May/June 2010 Issue - by Deborah Dahl

Updating the Standard for Spoken Dialogues

VoiceXML 3.0 should be out by the end of this year.
Posted 10 Jan 2010 - January/February 2010 Issue - by Deborah Dahl

Standards in the Voice User Interface

Why we need them, and where we can get them.
Posted 01 Oct 2009 - October 2009 Issue - by Deborah Dahl

Standards Make the World Smaller

As standards advance, things just work together better.
Posted 14 Jul 2009 - July/August 2009 Issue - by Deborah Dahl

Controlling Speech and Multimodal Applications

SCXML lets users travel through many states without leaving the phone
Posted 01 May 2009 - May 2009 Issue - by Deborah Dahl

CCXML: A Standard for Managing Calls

Markup language makes it easier to develop telephony applications.
Posted 06 Feb 2009 - January/February 2009 Issue - by Deborah Dahl

Submitted for Your Approval

SpeechTEK attendees conduct hands-on evaluations.

Opening the World of Multimodality

Standards can help bring more applications to bear.
Posted 01 Oct 2008 - October 2008 Issue - by Deborah Dahl

A Framework for Multimodal Apps

W3C drafts the standard in multimodal architectures.
Posted 15 Jul 2008 - July/August 2008 Issue - by Deborah Dahl

How Do You Say That?

New W3C standard promises to improve pronunciation.
Posted 01 May 2008 - May 2008 Issue - by Deborah Dahl

Introducing EMMA

The new standard for representing what the user said
Posted 01 Mar 2008 - March 2008 Issue - by Deborah Dahl

Eleven Tips to Improve IVR Effectiveness

There's been a lot of negative press recently about poorly designed touchtone and speech-enabled interactive voice response (IVR) systems. I'm sorry to say that most of the problems I've heard about, read about, or personally experienced are real. To make matters worse, the situation is inexcusable: the underlying technology that powers these applications is very flexible and can do significantly more than it is being used for today. Poor implementations have long given these systems a bad reputation.
Posted 12 Sep 2006 - September/October 2006 Issue - by Deborah Dahl

Natural Language Processing: The Next Steps

Speech interfaces in which users respond in their own words to open-ended prompts like "How may I help you?" are becoming more and more widely deployed. They are most frequently used in routing applications, where the application's task is to identify the topic of the user's request and transfer the caller to another part of the system where the request can be addressed.
Posted 12 Sep 2006 - September/October 2006 Issue - by Deborah Dahl

Revisiting the ROI of Speech

A good voice user interface (VUI) is central to any successful speech application. Although VUIs are made up of many components, if the persona is very memorable, users' perceptions of it can dominate their opinions about the entire system, overwhelming all other aspects of the system in the users' minds. As such, a good or bad persona can have major consequences for the success of a system. …
Posted 01 Jan 2006 - January/February 2006 Issue - by Deborah Dahl

Point/Counter Point on Personas

A good voice user interface (VUI) is central to any successful speech application. Although VUIs are made up of many components, if the persona is very memorable, users' perceptions of it can dominate their opinions about the entire system, overwhelming all other aspects of the system in the users' minds. As such, a good or bad persona can have major consequences for the success of a system. …
Posted 01 Jan 2006 - January/February 2006 Issue - by Deborah Dahl

The Battle for Speech Recognition Market Dominance

The contact center speech recognition market is maturing, but it is far from slowing down. On the contrary, it’s experiencing an upswing in sales that it hasn’t seen for at least three or four years. This market is consolidating, making room for a variety of new entrants, and is finally growing in port size. According to Steve Cramoysan of Gartner DataQuest, “preliminary analysis of the 2004 speech recognition market reveals an overall growth in port…
Posted 07 Nov 2005 - November/December 2005 Issue - by Deborah Dahl

Capitalize on Customer Conversations with Speech Analytics

For years, speech analytics have been used worldwide by security organizations to help government agencies identify potential risks and threats. In the past two years, contact centers have begun to use speech analytics applications to capture and structure customer communications. The applications analyze the structured data to identify customer trends and insights, with the goal of improving service quality and customer satisfaction and generating new revenue. …
Posted 30 Aug 2005 - September/October 2005 Issue - by Deborah Dahl

Technical Standards Facilitate Innovation

Rarely do technical standards directly benefit end users. In the world of speech technologies, however, they do. Standards facilitate innovation and reduce the total cost of ownership of speech applications, though they have been slow to reach the market. Standards allow programmers to create platform-independent (and ideally vendor-independent) speech applications.
Posted 08 Jul 2004 - July/August 2004 Issue - by Deborah Dahl

The Role of Speech in Multimodal Applications

The visually-oriented graphical user interface (GUI) is a powerful, familiar, and highly functional approach to interacting with computers. But, as speech technology becomes increasingly available, it’s natural to think about how speech could be used in GUI interfaces as well as voice-only interfaces.
Posted 05 May 2003 - May/June 2003 Issue - by Deborah Dahl

Speech on the go

Many of speech recognition’s most important contributions to productivity have to do with mobility. For example, speech allows telephone users to simply say the name of the person they are calling and be connected, a big advantage for cellular phone users in the car.
Posted 01 Jan 1999 - December/January 1999 Issue - by Deborah Dahl

STRATEGIC ALLIANCE: Will Microsoft's Stake in Lernout & Hauspie Drive Growth in Speech?

Microsoft and Lernout & Hauspie Speech Products have announced a broad strategic alliance designed to accelerate development of the next generation of voice-enabled computing on the Microsoft Windows platform.
Posted 31 Jan 1998 - January/February 1998 Issue - by Deborah Dahl
 