SpeechTEK 2010 - Tuesday, August 3, 2010
SUNRISE DISCUSSIONS
SD201 – What’s New in the Driver’s Seat?
8:00 a.m - 8:45 a.m
MODERATORS:
Susan Boyce, Principal User Experience Manager - Microsoft Tellme
Dr. Thomas Schalk, Vice President, Voice Technology - Agero

In this session, we’ll continue last year’s discussion on the desirability and challenges of speech user interfaces in the car. Join us for a discussion of what we’ve learned, obstacles to user adoption, and the impact of recent media and legislative attention to distracted driving. We’ll also revisit core automotive design principles with respect to prompting, balancing multiple modalities, and the impact of form factor (cell phone versus in-car solutions).

SD202 – Ask a Linguist!
8:00 a.m - 8:45 a.m

Did you ever wonder why alphanumeric recognition is so hard? Ever wonder why callers say “yes” when you ask if they want A or B? And why do text-to-speech engines mangle all of our names? The field of linguistics holds the answers to these and many other perplexing questions in voice interaction. Bring your questions and chat with two card-carrying linguists for an informal discussion of the role of linguistics in speech technology.

SD203 – Patent Protection and Risk for Speech Solutions
8:00 a.m - 8:45 a.m
MODERATOR:
Mr. Steven M. Hoffberg, Of Counsel - Tully Rinckey, PLLC

Join us for an informative discussion with a patent attorney on obtaining, maintaining, enforcing, and monetizing patents in the speech industry. We will discuss handling claims of patent infringement, the value of patents, and how patents can make your business more successful.

SD204 – New Opportunities for Using Speech Technologies
8:00 a.m - 8:45 a.m
MODERATOR:
Dr. Moshe Yudkowsky, President - Disaggregate Corporation

Speech technologies are widely used in call centers, telephone answering/routing systems, and, more recently, mobile devices. Where does speech go from here? Will speech be used in automated agents that guide users in debugging and repairing products, help users navigate the complexity of new functions on hand-held devices, or command and control household appliances? This out-of-the-box brainstorming session will try to identify new opportunities for speech technologies.

TRACK A: BUSINESS STRATEGIES
KEYNOTE: The New Era of Natural Language Processing
9:00 a.m - 10:00 a.m
James F. Allen, Author, Natural Language Understanding

Language is one of the fundamental aspects of human behavior and is a crucial component of our lives. How can natural language processing (NLP) improve our interactions with computer systems and help us to better understand each other? In the past decade, there has been significant progress in many application areas of NLP, but hard problems remain. What are the current benefits and limitations of NLP? How can you determine if your applications can benefit from NLP? Join James F. Allen, Ph.D., author of the seminal book Natural Language Understanding, as he explains the state of natural language processing today and how close we are to automatic natural language understanding, summarization, translation, and information retrieval. Particular focus will be given to the use of natural language technologies in dialogue systems.

Break in the Exhibit Hall
10:00 a.m - 10:45 a.m
A201 – Alignment in the Speech World
10:45 a.m - 11:30 a.m
MODERATOR:
Bruce Pollock, Vice President, Strategic Growth and Planning - West Interactive
Dr. Ahmed Bouzid, Co-founder & President - The Ubiquitous Voice Society
Martin C Dove, Managing Director, Customer Interactive Solutions - Dimension Data
David C Martin, Managing Principal/Portfolio Leader - Avaya
Darla Tucker, Director, Strategic Customer Solutions - Convergys

Do we really know what customers think about speech self-service? Do the views of organizations and speech vendors match — or conflict with — the customer perspective? For organizations deploying speech solutions, alignment between these perspectives is vital in providing self-service that is not only useful, but a preferred mode of interaction. Attend this panel for a report on the latest data about how to align your speech strategy with customer opinions.

A202 – The Value of Advanced Analytics
11:45 a.m - 12:30 p.m
MODERATOR:
Aaron Fisher, Director of Speech Services - West Interactive
Anna Convery, Chief Marketing Officer - ClickFox
Matt Storm, Director, Americas Marketing - NICE Systems

Analytics give organizations the ability to understand past and current usage of their automated self-service solutions, but can they do more? Experts in this session say “Yes!” and explore ways that you can use analytics to extrapolate from historical data to plan for the future. Attend this session to learn how your organization can benefit from advanced analytics as you set your future customer experience strategy.

Attendee Lunch
12:30 p.m - 1:45 p.m
A203 – Speech Deployments: Is Hosting Right for You?
1:45 p.m - 2:30 p.m
Elaine Cascio, Vice President - Vanguard Communications Corp
Maryann Walsh Wolff, Managing Partner - ProMark Solutions

Many organizations are struggling with the range of choices available when selecting a partner to implement, develop, and maintain their speech applications. This session discusses the pros and cons of the possibilities, from hosted solutions to managed services and customer-premises equipment. Learn how you can sort through the different options and make the best long-term decision for your company.

A204 – Speech in a Recovering Economy
2:45 p.m - 3:30 p.m
MODERATOR:
Dr Melanie Polkosky, Human Factors Psychologist/Consultant - IBM Corporation
Keith Ward, CTO - Product Support Solutions (PSS)
Darla Tucker, Director, Strategic Customer Solutions - Convergys

Organizations are keeping tight control on resources as they navigate the recovering economy, which sometimes means hesitating to commit to new speech technology projects. Attend this informative session to learn how speech can be an affordable and valuable tool even when budgets are tight. Learn how to get the most from your current technology infrastructure, along with methods for designing and deploying speech applications affordably.

Break in the Exhibit Hall
3:30 p.m - 4:15 p.m
A205 – The Value of Personalization
4:15 p.m - 5:15 p.m
MODERATOR:
Phillip Hunter, Head of User Experience - Amazon Alexa Skills Kit
Dr. Silke Witt-Ehsani, Vice President, VUI Design Center, Design Center - TuVox
Steve Chirokas, Executive Director, Marketing - VoltDelta
Cliff Bell, Product Line Manager, Product Management - Genesys

Personalization has become a speech industry buzzword this year, but what does it really mean for your organization? These presentations will give you valuable insight into what it takes to build personalized interactions for customers and how to effectively plan and deliver a personalized solution. Learn how personalization can help your business build interactions that increase customer delight and loyalty.

Networking Reception
5:30 p.m - 7:00 p.m
TRACK B: CUSTOMER EXPERIENCE
KEYNOTE: The New Era of Natural Language Processing
9:00 a.m - 10:00 a.m
James F. Allen, Author, Natural Language Understanding

Break in the Exhibit Hall
10:00 a.m - 10:45 a.m
B201 – Optimizing Voice Self-Service
10:45 a.m - 11:30 a.m
MODERATOR:
Karen Owens, Senior User Experience Designer - LogicTree Corporation
Vicki Broman, Manager of User Interface and Research Teams, CTI and Speech Solutions - eLoyalty® a TeleTech Company
Todd Schmeer, Director-Speech Application Services - VoltDelta

The best way to create voice self-service interactions that customers are willing and able to use is to design them based on data collected from customers. In this session, you will hear results of the latest research on customer preferences and expectations for naturalness in voice interactions and how to use customer interaction data from very large deployments to inform design decisions.

B202 – The Hosted Customer Experience
11:45 a.m - 12:30 p.m
MODERATOR:
Larry Baldwin, Manager, Voice Services - IBI Group
Daniel O'Sullivan, CEO - Gyst
Laura Marino, Sr. Director of Product Management - Nuance On-Demand, Enterprise - Nuance
Terry Saeger, Senior Vice President and General Manager - VoltDelta
Dr. Elizabeth A. Strand, Director of UX Strategy - Microsoft Tellme

Hosted speech solutions can be an attractive option for many organizations to include speech technologies in their overall self-service portfolio. This panel goes beyond financial benefits to explore ways in which hosted solutions can impact the overall customer experience as well. Come to learn panelists’ perspectives on new ways to optimize the service you provide to customers via a hosted speech solution.

Attendee Lunch
12:30 p.m - 1:45 p.m
B203 – The Outbound Customer Experience
1:45 p.m - 2:30 p.m
MODERATOR:
Catherine Zhu, Principal Consultant - SpeechUsability
Walter Rolandi Ph.D., Principal Usability Scientist - West Interactive
John Tallarico, VP, Product Management - SoundBite Communications

Outbound IVR campaigns are gaining in popularity as organizations discover the benefits of proactive communication with customers, but organizations will not reap these benefits unless customers are receptive to outbound interactions. Join us in this session as two experts explain the factors that matter in crafting outbound messages that will be meaningful and engaging to customers.

B204 – Multilingual Experiences
2:45 p.m - 3:30 p.m
MODERATOR:
Alexandra Auckland, Voice Interaction Designer - Sotto Voce Consulting
Sondra Ahlén, Principal VUI Consultant/Owner - SAVIC
Ramón Solórzano Jr. Ph.D., Principal - Parasol Communication Services
Janet Cahn, Senior Speech Technology Consultant - Versay Solutions, LLC

Many organizations today face the challenges of a global customer base, providing service not just for English-speaking consumers in the U.S. but also for customers who speak other languages and come from other cultures. Speech technology offers the opportunity to speak with customers in their own language, but there are many pitfalls in moving beyond monolingual English. Experts in this session offer guidance on sensibly designing and deploying multilingual speech solutions.

Break in the Exhibit Hall
3:30 p.m - 4:15 p.m
B205 – Analytics for the Whole Customer Experience
4:15 p.m - 5:15 p.m
MODERATOR:
Dr. William Meisel, President - TMA Associates
Tim Moynihan, Senior Analyst and Project Leader - Opus Research
Aviad Abiri, Vice President, Global Solution Design and Delivery, NICE Interaction Business Applications - NICE Systems
Dan Burke, Vice President - HP Autonomy

Historically, analytics have offered organizations the opportunity to understand customer interactions in the call center. As organizations add communication channels, it is increasingly important to monitor the customer experience across all of them. Join us to learn how today’s analytics can provide a view of the experience you are providing customers across all touchpoints and how this impacts your business goals.

Networking Reception
5:30 p.m - 7:00 p.m
TRACK C: SPEECH DEPLOYMENTS
KEYNOTE: The New Era of Natural Language Processing
9:00 a.m - 10:00 a.m
James F. Allen, Author, Natural Language Understanding

Break in the Exhibit Hall
10:00 a.m - 10:45 a.m
C201 – SCXML: A New Language for Specifying Control Flow
10:45 a.m - 11:30 a.m
MODERATOR:
Davide Bonardo, Senior TTS Software Architect, Loquendo Technologies - Loquendo
James Barnett, Director - Alcatel Lucent
Rahul Akolkar - IBM Research

State Chart XML (SCXML) is a general-purpose, event-based state machine language used to specify voice-only and multimodal dialogue flow control. Learn about the basic constructs of SCXML, including nested and parallel states, event handling, and communications and profiles. The second talk will introduce SCXML profiles and extension points and show how application developers can leverage these in building real-world applications. If you feel that VoiceXML’s Form Interpretation Algorithm limits what your application can do, SCXML may be the development language for you.
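
To make those constructs concrete, here is a minimal SCXML sketch (element names follow the W3C SCXML specification; the state and event names are illustrative placeholders, not taken from the session):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal SCXML sketch: a small dialogue controller with nested and
     parallel states and event-driven transitions. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="greeting">
  <state id="greeting">
    <transition event="user.yes" target="booking"/>
    <transition event="user.no" target="goodbye"/>
  </state>
  <!-- A parallel region lets independent concerns run side by side -->
  <parallel id="booking">
    <state id="collectDate"/>
    <state id="logSession"/>
    <transition event="booking.done" target="goodbye"/>
  </parallel>
  <final id="goodbye"/>
</scxml>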

C202 – Care and Feeding of SLM
11:45 a.m - 12:30 p.m
MODERATOR:
Paolo Baggia, Director of International Standards - Loquendo
David Suendermann, Principal Speech Scientist - Synchronoss
Pranav Chadha, Manager, Software Engineering - Versay

Statistical grammars, also called statistical language models (SLMs), are used by call routing systems to convert a spoken utterance into one of several predefined categories. Learn how to use partial automation to transcribe and annotate the large number of utterances necessary to generate a statistical grammar and how to append new tags to an existing utterance. This session will discuss the most important factors to consider in statistical grammar maintenance.
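
As a purely illustrative sketch (the file layout and element names below are hypothetical, not tied to any vendor’s tooling), annotated call-routing training data typically pairs each transcribed utterance with its target category; appending a new tag simply adds another category element to an existing record:

<!-- Hypothetical annotation format for SLM training data: each transcribed
     caller utterance is tagged with the routing category it maps to. -->
<utterances>
  <utterance id="0001">
    <transcription>i want to pay my bill over the phone</transcription>
    <category>Billing_Payment</category>
  </utterance>
  <utterance id="0002">
    <transcription>my internet keeps dropping every few minutes</transcription>
    <category>TechSupport_Connectivity</category>
  </utterance>
</utterances>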

Attendee Lunch
12:30 p.m - 1:45 p.m
C203 – Implementing and Deploying Statistical Grammars
1:45 p.m - 2:30 p.m
MODERATOR:
James A. Larson, Vice President - Larson Technical Services

Chris Passaretti, Senior Programmer/Analyst - Cablevision Systems Corporation
Greg Johnston, Senior IT Manager - IVR, Information Technologies - DISH Network
Kevin D. Husk, Director of Customer Care Operations - Charter Communications

Statistical grammars can be labor-intensive to create: Numerous answers to the initial prompt must be captured and categorized, and changing one or more of the categories can be equally labor-intensive. Is all this work worthwhile? In this panel, experts who have developed and deployed call routing systems using SLMs describe the do’s, the don’ts, and key lessons from their experiences.

C204 – Text-to-Speech Synthesis: Create Humanlike Voice From Text
2:45 p.m - 3:30 p.m
MODERATOR:
Dr. Daniel C Burnett, President - StandardsPlay
Davide Bonardo, Senior TTS Software Architect, Loquendo Technologies - Loquendo
Mat Wilson, Senior Software Developer - Speech Applications - Intelligent Mechatronic Systems (IMS)

Users judge speech applications by the quality and understandability of their synthesized speech. Learn how to apply the latest language standards to develop quality TTS for multilanguage applications, mixed-language text, and languages using non-Latin characters. Learn how to use text preprocessing strategies to transform email, SMS, and other text containing acronyms, abbreviations, and poorly written content into speech that is natural, intelligible, and easy to understand.
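
As one small illustration of those standards, an SSML fragment (elements per the W3C Speech Synthesis Markup Language recommendation; the message text itself is invented) can tell a TTS engine how to render dates, character sequences, and a mid-utterance language switch:

<?xml version="1.0" encoding="UTF-8"?>
<!-- SSML sketch: rendering abbreviation-heavy, SMS-style text naturally.
     say-as, voice, and xml:lang are standard SSML constructs. -->
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  Your appointment is on
  <say-as interpret-as="date" format="mdy">8/3/2010</say-as>.
  Reply <say-as interpret-as="characters">OK</say-as> to confirm.
  <!-- Switch language for a non-English phrase -->
  <voice xml:lang="it-IT">Arrivederci!</voice>
</speak>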

Break in the Exhibit Hall
3:30 p.m - 4:15 p.m
C205 – Testing Strategies
4:15 p.m - 5:15 p.m
MODERATOR:
Mr. David L Thomson, VP Speech Technology - CaptionCall
Jonathan Bloom, Voice User Interface Designer - Jibo, Inc.
Dave Pelland, Global Director, IVR Practice - Genesys
Gilles Hurteau, Project Manager - Nü Echo Inc.

Full quality assurance testing of speech applications is expensive and time-consuming. How can the writing and execution of tests be automated? Can this reduce testing effort and development costs? These three presentations describe frameworks and methodologies for various testing techniques, discussing their strengths and weaknesses and showing how and when to best use each of them in order to deliver rock-solid applications on time and on budget.

Networking Reception
5:30 p.m - 7:00 p.m
TRACK D: VOICE INTERACTION DESIGN
KEYNOTE: The New Era of Natural Language Processing
9:00 a.m - 10:00 a.m
James F. Allen, Author, Natural Language Understanding

Break in the Exhibit Hall
10:00 a.m - 10:45 a.m
D201 – The State of Speech-Based Customer Experience
10:45 a.m - 11:30 a.m
Phillip Hunter, Head of User Experience - Amazon Alexa Skills Kit
Walter Rolandi Ph.D., Principal Usability Scientist - West Interactive

In recent years, the voice interaction design community has come together and challenged itself to raise standards and design interfaces that are not only usable but also promote customer delight and loyalty. We invite the VUI community to weigh in on where we stand in 2010. Have we succeeded in providing better interactions? Join us for an inclusive group discussion as we review our successes and identify opportunities for further growth.

D202 – Using Speech on the Go
11:45 a.m - 12:30 p.m
MODERATOR:
Judi Halperin, Principal Consultant and Team Lead, Global Speech Engineering - Avaya Inc.
Brigitte Mora Richardson, Global Voice Control Technology/Speech Systems Lead Engineer, Electrical / Electronics Systems Engineering - Ford Motor Company
Weiye Ma, Senior Speech Scientist - Telvent Worldwide

Speech technologies have moved beyond the call center and now thrive in mobile applications. Attend this session to hear two success stories. Hear how Ford Motor Co. is providing cutting-edge services to drivers via the Ford Sync system, and learn about a 511 system that uses speech and visual modalities to provide timely, location-specific information to travelers.

Attendee Lunch
12:30 p.m - 1:45 p.m
D203 – Lessons in Multimodal Interaction Design
1:45 p.m - 2:30 p.m
MODERATOR:
Dr. Silke Witt-Ehsani, Vice President, VUI Design Center, Design Center - TuVox
Eduardo Olvera, Sr. Manager & Global Emerging Technology Lead, UI Design, Professional Services - Nuance
Ms. Karen Kaushansky, User Experience Strategist - Microsoft Tellme

Designing for multimodal systems requires more than VUI design plus visual design skills. Multimodal interactions include complexities resulting from users moving between different modes of interaction for both input and output of information. Attend this session for lessons from experts who have learned from experience about the methods and guidelines that will allow you to create useful multimodal applications.

D204 – Multimodal Systems for Seniors
2:45 p.m - 3:30 p.m
MODERATOR:
Mr David Attwater, Senior Scientist - Enterprise Integration Group
Dr. Deborah Dahl, Principal - Conversational Technologies
Michael Greene, Social Worker

The benefits of multimodal systems are well-known for mobile and handheld applications, but they also promise to provide easier and more appealing interactions for older users. Seniors often have limitations with memory, vision, hearing, or mobility that can make unimodal systems less effective than they are for younger users. Learn how multimodal systems can help seniors overcome these limitations and use technology to improve their lives.

Break in the Exhibit Hall
3:30 p.m - 4:15 p.m
D205 – Understanding Mobile and Automotive Users
4:15 p.m - 5:15 p.m
MODERATOR:
David C Martin, Managing Principal/Portfolio Leader - Avaya
Kathy Lee, User Experience Researcher - Microsoft Tellme
Andrew Kun, Associate Professor, UNH ECE Department - University of New Hampshire

For mobile and multimodal solutions to succeed, they must be built to meet the very different needs and desires of mobile users. Attend this session to hear the latest research on mobile users and how the different context of use affects what users want and expect from technology solutions. Also learn how the latest academic research informs design thinking for the design of automotive systems today.

Networking Reception
5:30 p.m - 7:00 p.m
SPEECHTEK LABS
LAB 4 – Voice-Enabled Personal Assistants
10:45 a.m - 12:30 p.m
Dr. Moshe Yudkowsky, President - Disaggregate Corporation

See how users can employ speech to interact with new mobile devices — anything from starting applications to setting preferences — as well as enter and retrieve information. Experience how mobile devices power hands-free, eyes-free operation (in your car or on your factory floor); enable people to enter information without the hunt-and-peck of typing on a tiny keypad; and enable people to control their applications with intuitive voice commands instead of drop-down menus and near-invisible icons.

LAB 5 – Innovative Solutions Lab
1:45 p.m - 3:30 p.m
Dr. Thomas Schalk, Vice President, Voice Technology - Agero

Organizations are relying on innovative solutions to help position themselves for recovery. Speech technology continues to evolve, and its applications are reaching into new areas. Attend this lab session to see some of the latest speech technology innovations and test-drive them yourself. Judge for yourself whether these systems would add value to an organization, and what can be learned from them to make other speech technologies easier to use.



