SpeechTEK 2014 - Wednesday, August 20, 2014
SUNRISE DISCUSSIONS
SD301 – Update on Patents in the Speech Technology Field
(Astor Ballroom (7th floor))
8:00 a.m. - 8:45 a.m.
Steven M. Hoffberg, Partner - Ostrolenk Faber LLP

Patent law and practice evolve quickly: changes in statute, jurisprudence, PTO rules, and deals can rapidly alter the landscape. This sunrise session brings attendees up to speed on the latest changes and provides an opportunity to discuss emerging strategies.

SD302 – Skeuomorphism in the Speech Modality?
(Empire Complex (7th floor))
8:00 a.m. - 8:45 a.m.
Jonathan Bloom, Voice User Interface Designer - Jibo, Inc.

The graphic design community has lately been debating the necessity of skeuomorphism. Skeuomorphs are essentially visual metaphors; designers use the look and feel of a known and understood thing to make a new thing easier to use, such as the bookshelves in iBooks. Join us for a discussion of skeuomorphism and an entirely new way of looking at voice user interface design, one that discards skeuomorphism and may well change how people interact with IVRs.

SD303 – Challenges for Multimodal User Interfaces
(Soho/Herald (7th floor))
8:00 a.m. - 8:45 a.m.
Nava A. Shaked, CEO - Brit Business Technologies Ltd. (BBT); Engineering Faculty, Holon Institute of Technology

The new multimodal reality gives the mobile user a rich variety of methods for interfacing with a system, including speech, gestures and movements, touch, typing, and more. Designing multimodal interactions therefore requires the integration of multiple recognition technologies, sophisticated user interfaces, and distinct tools for input and output of data. Join us in a discussion of the technological and usability challenges inherent in multimodality-based user interfaces and the innovative technologies and designs that address them.

SD304 – Smart Glasses, Smart Watches, and Other Smart Devices Arrive on the Market
(Gramercy/Olmstead (7th floor))
8:00 a.m. - 8:45 a.m.
K.W. 'Bill' Scholz, President - NewSpeech LLC; AVIOS
Bernard Brafman, VP of Business Development - Sensory, Inc.
Ahmed Bouzid, Co-founder & President - The Ubiquitous Voice Society
Jonathon Nostrant, Founder & CEO - ivee

Wearables, including smart glasses, watches, and other smart objects, will exchange information and access cloud-based databases and processing. Users will interact with wearables not only to control smart objects in the home, the workplace, and everywhere else, but also to obtain help assembling, testing, fine-tuning, and repairing things. With their shrinking or nonexistent displays, wearables require new kinds of user interfaces based on speech and natural language understanding. Come see short demonstrations of today’s wearables and discuss how they will be used.

KEYNOTE
The Evolution of Computers and Society
(Broadway Ballroom (6th floor))
9:00 a.m. - 10:00 a.m.
MODERATOR: Judith Markowitz, President - J. Markowitz Consultants; AVIOS, IEEE, ACM, LSA, W3C, ANSI/INCITS M1 (biometrics), ISO/JTC1/SC37 (biometrics)
William Meisel, President - TMA Associates; Editor, LUI News
Michael Karasick, VP, Innovations Group - IBM Watson Group
Juan E. Gilbert, Andrew Banks Family Preeminence Endowed Professor & Chairman, Computer & Information Science & Engineering Department - University of Florida

Computers are becoming smarter, faster, and more ubiquitous. How will humans and computers connect to achieve more than either can currently do alone? How will the roles of humans and computers change as computers evolve over the next 5 years? How will our society and culture change? What are the economic risks of automation moving faster than people’s ability to adapt to the change? What can you do to prepare for these dramatic changes?

Break in the Customer Solutions Expo
10:00 a.m. - 10:45 a.m.
TRACK A: BUSINESS STRATEGIES
Astor Ballroom (7th floor)
A301 – Explaining Complex Recognition Tasks
10:45 a.m. - 11:30 a.m.
Judi Halperin, Principal Consultant and Team Lead, Global Speech Engineering - Avaya Inc.

Many client organizations have similar questions about why certain tasks are difficult for speech recognition systems and why particular techniques or constraints are needed. Why is alphanumeric input so difficult? Why can’t we just recognize email addresses the way people naturally speak them? What’s so hard about personal or business names? Attend this session for explanations of many common recognition challenges from a top speech scientist, and learn strategies for communicating what you learn to others in your organization.

A302 – Enterprise Virtual Assistants: Empowering Employees and Customers
11:45 a.m. - 12:30 p.m.
William Meisel, President - TMA Associates; Editor, LUI News

Enterprise virtual assistants allow employees to more efficiently use enterprise applications and be more effective in their jobs. They reduce the burden on IT departments by making both internal applications and outside resources available as one personal assistant. That same virtual assistant can be available on multiple devices, including mobile devices, easing the bring-your-own-device complexities that face companies today.

Attendee Lunch
12:30 p.m. - 1:45 p.m.
A303 – A Comparison of Hosted Virtual Assistants vs. Automation in the IVR
1:45 p.m. - 2:30 p.m.
Joe Alwan, AVOKE Analytics - Raytheon BBN Technologies

Hosted virtual assistants promise to overcome obstacles and deliver calls more accurately to their intended destinations, but how do they compare with traditional IVR automation? In this session, hear the initial results of an evaluation of an IVR system using both a hosted virtual assistant and automated speech recognition. Join us for an analysis of the benefits and drawbacks of hosted virtual assistants and guidance on deciding whether a virtual assistant is right for your organization.

A304 – PANEL: Visual IVR: Fad or Fashion?
2:45 p.m. - 3:30 p.m.
MODERATOR: Dan Miller, Lead Analyst-Founder - Opus Research, Inc.
Daniel Hong, Senior Director, Product Marketing Strategy - [24]7
Mayur Anadkat, VP of Product Marketing - Five9
Dena Skrbina, Director, Solutions Marketing - Nuance Communications
Kumaran Shanmuhan, Executive Director, Presales - Jacada

The term “Visual IVR” has been used loosely to describe many possible ways of combining the visual display on mobile phones with speech-enabled IVRs. A Visual IVR can blend native mobile app functionality with an IVR, or pair an IVR with HTML5 pages launched from SMS triggers during a call, but there are myriad other possibilities in the market. This panel discusses the many different flavors of Visual IVR, the merits and roadblocks of each, and how Visual IVRs might evolve over the coming years.

TRACK B: VOICE INTERACTION DESIGN
Empire Complex (7th floor)
B301 – PANEL: Cross-Cultural Usability Testing
10:45 a.m. - 11:30 a.m.
Jim Milroy, Human Factors Solutions Consultant - West Interactive Services
David Attwater, Senior Scientist - Enterprise Integration Group
Jennifer Deese, Senior Consulting Manager, Professional Services DEV/QA - Convergys
Marissa Williams, Senior Application Designer - Interactions, LLC
Helen VanScoy, Director, User Interface Design - Performance Technology Partners

Companies in the U.S. have been using IVRs to handle customer calls for years, but where do other countries stand in their use of IVRs? What about languages other than English? Do U.S. companies use the same speech technology abroad that they use at home? Panelists share their experiences testing applications outside the U.S. and in languages other than English. Join us in discussing the difficulties, lessons learned, and outcomes of a number of usability tests.

B302 – Your Data Says What? Surprising Results From the Field
11:45 a.m. - 12:30 p.m.
Jenny Burr, Sr. Manager, Speech Science - Convergys

Many IVR designs are based on reasonable assumptions about caller behavior, but before-and-after data from multiple deployments can determine if these assumptions are justified, or if the data actually challenges some long-held beliefs about VUI design. This session presents a recent case study with access to pre- and post-deployment metrics and examines several data-driven findings that raise questions about certain design assumptions, speech versus DTMF usage, global command usage, identification strategies, and more.

Attendee Lunch
12:30 p.m. - 1:45 p.m.
B303 – The Art & Science of Menu Design
1:45 p.m. - 2:30 p.m.
Jenni McKienzie, Voice Interaction Designer - SpeechUsability
Kristie Goss-Flenord, Consultant, Human Factors - Convergys

A menu can make or break your IVR. This session opens with a brief overview of menu structure, wording, and recording guidelines developed by the AVIxD Design Guidelines Working Group. Then you’ll have the chance to get your hands dirty applying the guidelines in tricky situations. Bring your own menu challenges to the session for the opportunity to work together on solid solutions.

B304 – PANEL: Speech Technology in Pop Culture From HAL to Her
2:45 p.m. - 3:30 p.m.
MODERATOR: Susan L. Hura, Principal - SpeechUsability
Jonathan Bloom, Voice User Interface Designer - Jibo, Inc.
Dmitry Sityaev, Principal Speech Scientist, Engineering R&D - Genesys
Mary Constance Parks, Principal Experience Designer, Automation and Control Solutions - Honeywell

Perception of speech technology has been shaped by its portrayal in movies, books, and television for decades. Until recently, the average American likely had more exposure to speech technology via the media than through direct personal experience. From the malevolent HAL to the charming intelligence in Her, pop culture has had dramatic effects on the impressions, expectations, and adoption of speech technology. Join us for a look back—and forward—at the impact of pop culture on the speech industry.

TRACK C: CUSTOMER EXPERIENCES
Soho/Herald (7th floor)
C301 – Evangelizing and Adopting VUI in an Organization With a Long GUI-Only History
10:45 a.m. - 11:30 a.m.
Stephen Gay, Manager & Design Strategist - Intuit

Apple’s Siri and Google Now have ignited consumers’ interest in voice user interfaces (VUI) by delivering valuable and delightful customer experiences. Innovative companies can leverage voice solutions to create a competitive advantage. But how do you drive the adoption of VUI in an organization with a long GUI-only history? In this session, hear about the ups and downs of one team evangelizing and adopting VUI to help your organization move forward faster.

C302 – Cross-Pollination: Creating Great Multimodal Experiences
11:45 a.m. - 12:30 p.m.
Susan L. Hura, Principal - SpeechUsability
Paul Sherman, Principal - Sherman Group User Experience LLC

For years there has been a divide in the design community between visual designers working on websites and software and voice interaction designers working with speech technologies. Today’s proliferation of multimodal applications and devices is forcing these separate design practices together. Neither visual nor voice designers possess all the expertise needed to create seamless, intuitive, and engaging multimodal interactions. This presentation provides a framework for bridging the gap between visual and voice designers with the goal of cooperatively creating compelling user experiences.

Attendee Lunch
12:30 p.m. - 1:45 p.m.
C303 – The Omnichannel Experience: Connecting the Dots to Make It a Reality
1:45 p.m. - 2:30 p.m.
Brooks Crichlow, Vice President of Product Marketing - [24]7
Tajinder Singh, Director of Product Management - [24]7

Omnichannel is the next frontier in customer experience: connected, consistent, and contextual interactions that span multiple channels, devices, locations, and time. A successful omnichannel strategy requires enterprises to take small strides toward the right foundation, one that can scale to support all channels of customer engagement. This session presents case study results for a strategy of pairing two widely used channels, web and IVR, as a strong starting point for initial omnichannel implementations.

C304 – Defending User Experience Against the Bottom Line
2:45 p.m. - 3:30 p.m.
MODERATOR: Carrie Claiborn, Senior User Experience Consultant - VoxGen
James R. Lewis, Senior Human Factors Engineer - IBM; AVIxD
Mark Smolensky, Principal Member of Technical Staff, Product Realization & Architecture - AT&T; Vice President, AVIxD
Jim Milroy, Human Factors Solutions Consultant - West Interactive Services
Stephen Snape, Senior Business Analyst - Orlando Utilities Commission

Usability testing, user research, and other methods for capturing user feedback are often the first targets when project budgets and timelines are cut. Most organizations understand the value of user experience in abstract terms, but fail to support the activities that make excellent experiences possible. Panelists in this session discuss specific techniques for defending user experience tasks and methods for demonstrating the value in connection with actual business cases.

TRACK D: TECHNOLOGY ADVANCES
Gramercy/Olmstead (7th floor)
D301 – Develop Multimodal Applications With Free and Open Source Tools
10:45 a.m. - 11:30 a.m.
Deborah Dahl, Principal - Conversational Technologies

Learn the pros and cons of open source tools for mobile, speech-enabled applications, including CMU PocketSphinx JavaScript open source speech recognition with HTML5, Google Android speech recognition (native and PhoneGap with HTML5), the Google Chrome Web Speech API, and iOS speech options (such as iSpeech and OpenEars). Issues to be discussed include language models (support for grammars versus dictation), cross-platform capability, accuracy, open source versus proprietary, online versus offline, and speech-recognition-related standards (WebRTC, HTML5, and Web Audio).

D302 – PANEL: What Can WebRTC Do for You?
11:45 a.m. - 12:30 p.m.
MODERATOR: K.W. 'Bill' Scholz, President - NewSpeech LLC; AVIOS
Daniel C Burnett, President - StandardsPlay
Tobias Goebel, Director Emerging Technologies - Aspect Software, Inc.
Valentine Matula, Senior Director, Multimedia Research - Avaya

How can companies have their customers connect to speech systems and to customer service agents for assistance directly from within a mobile app or a PC browser session? With no heavy downloads or application installations, WebRTC provides voice, video, and co-browsing so that an automated or live agent can help immediately from within the context of the customer’s mobile or browser activity.

Attendee Lunch
12:30 p.m. - 1:45 p.m.
D303 – Wake Up Your Device by Speaking
1:45 p.m. - 2:30 p.m.
Bernard Brafman, VP of Business Development - Sensory, Inc.

A number of factors have combined to make “always on, always listening” operation a reality in today’s consumer (particularly wireless) and other products, with “wake-up words” that bring a device out of standby mode and “voice commands” that control it. This presentation details these factors and surveys always-on, always-listening products and applications available now and in the near future.

D304 – Build a Semantic NLU Application
2:45 p.m. - 3:30 p.m.
Keith Garr, Sales Engineer - LinguaSys

You will learn how fully semantic natural language understanding (NLU) works and how it can produce high-quality NLU interfaces in multiple languages in one-tenth of the time and cost of traditional statistically based NLU. Write a short but powerful NLU interface and learn how to deploy it for use by multiple input modalities and multiple languages. For details, see http://www.linguasys.com/speechtek2014.



