Workshops
Workshop 1: Sonic Interaction in Intelligent Cars
Justyna Maculewicz, justyna.maculewicz@volvocars.com & Fredrik Hagman, fredrik.hagman@volvocars.com (Sound Interaction Design, Volvo Car Group)
Myounghoon Jeon, philart@gmail.com (Mind Music Machine Lab, Michigan Tech)
Research topics:
1. What are the roles of interactive sounds in future unsupervised Autonomous Drive?
2. Can interactive sounds convey information about Autonomous Driving car performance, car status, and surrounding traffic in a meaningful and desirable way?
Abstract:
The introduction of highly automated cars will require a redefinition of car-driver interaction. Ongoing technological development will place completely new demands on the design of interaction inside the car in order to support users in their role. Sound, owing to its properties, is well suited to warning signals and alarms. However, the present efforts go beyond designing single attention-grabbing sounds for specific urgent events. Instead, we would like to consider all auditory elements in the vehicle as shaping informative and pleasant soundscapes that promote trust in the technology as well as proactive and safe user behavior.
Workshop 2: Collaborative Design and Evaluation in Auditory Displays for Interactive Physics Simulations
Mike Winters, mikewinters@gatech.edu & Prakriti Kaini, prakriti.kaini@gatech.edu (Sonification Lab, Georgia Tech)
Outline of Objectives:
- Learn about the development of auditory displays for interactive science learning resources.
- Build connections within the ICAD community and learn new skills through team prototyping.
- Discuss and contribute to community knowledge about strategies for design and evaluation.
Invited Speakers:
Dr. Bruce Walker, bruce.walker@psych.gatech.edu; Sonification Lab, Georgia Institute of Technology
Abstract:
Over the past two years, teams at Georgia Tech and the University of Colorado Boulder have worked together to create auditory displays for the PhET Interactive Simulations project. These popular simulations engage students from elementary school through college in physics learning. Unfortunately, a reliance on visual display makes them inaccessible to students with vision impairments and other disabilities. This workshop will use the PhET simulations as a platform to articulate collaborative design and evaluation strategies within the context of physics education technologies. After participants are introduced to the PhET project, its goals, and relevant research, specific PhET simulations will be selected for targeted work. Technically oriented participants will have already installed the phet-osc-bridge using the node package manager and will be joined by one or two others to begin prototyping sound designs and proposing evaluation strategies. After iterating on sound designs and evaluation strategies, facilitated by the workshop leads, teams will report to the group on their work, including sound demos, prototype evaluations, and the ideas and insights involved. Recordings of auditory displays, evaluation plans, and insights will be archived and made available to all participants. This workshop will foster community connections, bridging the art/sound-design and research/science spaces, and contribute to dialog on the ways design and evaluation can work together successfully in auditory display.
Participants do not need to program, but technically-minded participants should install the phet-osc-bridge prior to the workshop. A demo video, installation instructions, and starter sonification code are available online: https://tinyurl.com/phet-osc-
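To give a flavor of the kind of prototyping teams will do, the sketch below listens for simulation values over OSC and maps them onto a pitch range. It is only an illustrative sketch under stated assumptions: the python-osc library, the OSC address "/sim/value", and the port 3333 are choices made here for demonstration, not names taken from the phet-osc-bridge documentation.

```python
# Minimal sketch: receive simulation data over OSC and map it to a pitch value.
# The address "/sim/value" and port 3333 are illustrative assumptions; consult
# the phet-osc-bridge documentation for the actual message names and ports.
from pythonosc import dispatcher, osc_server

LOW_HZ, HIGH_HZ = 220.0, 880.0   # target pitch range for the mapping

def handle_value(address, value):
    # Assume the simulation sends a normalized value in [0, 1];
    # map it linearly onto the pitch range and hand it to a synth.
    freq = LOW_HZ + (HIGH_HZ - LOW_HZ) * float(value)
    print(f"{address}: {value:.3f} -> {freq:.1f} Hz")
    # ...forward freq to Pure Data, a DAW, or another synthesis engine here...

disp = dispatcher.Dispatcher()
disp.map("/sim/value", handle_value)

server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 3333), disp)
print("Listening for OSC on port 3333...")
server.serve_forever()
```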
Tutorials
Tutorial 1: Introduction to Pure Data: Beginner and Intermediate Techniques for Data Sonification
Steven Landry, sglandry@mtu.edu (Mind Music Machine Lab, Applied Cognitive Science and Human Factors, Michigan Tech)
This workshop/tutorial is intended for beginner-to-intermediate sound designers who wish to explore the possibilities of data sonification. Pure Data is a free, easy-to-use visual programming language for generating and processing sound, data, images, and video. It is ideal for “DIY” sonification designers with little to no programming experience. I will share personal tips, common strategies, and pre-made patches for translating data into sound.
This workshop covers:
- Introduction to the basics of object-based (visual) programming
- The “hello world” of sonification: mapping data to the frequency and amplitude of a sine wave (see the Python sketch after this list)
- Handling input from USB devices (mice, keyboards, video game controllers, etc.)
- A collection of pre-made instruments and generative music patches
- Communicating with external DAWs and VSTs via the MIDI or OSC protocols
- A repository of pre-made patches:
  - Instruments (additive/subtractive/granular)
  - Data handling
  - Audio effect modules (delay, reverb, distortion, looping, etc.)
  - Audio/MIDI analysis
  - Generative/procedural music
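For a sense of what the “hello world” mapping looks like outside of Pure Data itself (which is a visual language), here is a minimal Python sketch that maps an example data series onto the frequency of a sine wave and renders it to a WAV file. The data values, pitch range, and segment duration are arbitrary illustrative choices, not material from the tutorial.

```python
# A minimal "hello world" of sonification: map a data series onto the frequency
# of a sine wave and write the result to a WAV file. This is an illustrative
# Python sketch of the idea, not a Pure Data patch.
import wave
import numpy as np

SR = 44100                         # sample rate (Hz)
data = [0.1, 0.4, 0.9, 0.6, 0.2]   # example data, normalized to [0, 1]
seg_dur = 0.5                      # seconds of sound per data point

segments = []
phase = 0.0
for value in data:
    freq = 220.0 + value * (880.0 - 220.0)        # map value -> 220..880 Hz
    t = np.arange(int(SR * seg_dur)) / SR
    segments.append(0.5 * np.sin(2 * np.pi * freq * t + phase))
    phase += 2 * np.pi * freq * seg_dur           # keep phase continuous

audio = np.concatenate(segments)
pcm = (audio * 32767).astype(np.int16)            # convert to 16-bit PCM

with wave.open("hello_sonification.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(SR)
    wf.writeframes(pcm.tobytes())
```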
Tutorial 2: Introduction to Data Sonification with Python and Csound
David Worrall, dworrall@colum.edu (Audio Arts and Acoustics Department, Columbia College Chicago)
Abstract:
Python is a popular, easily learned, general-purpose programming language that can serve as a glue language, connecting many separate software components in a simple and flexible manner. Widely used in the scientific community, it can also be used as a high-level modular framework for controlling low-level operations implemented by subroutine libraries in other languages.
Csound arguably has the widest and most mature collection of tools for sound synthesis and sound modification. There are few things related to audio programming that you cannot do with Csound; it can be used in real time to synthesize sound or to process live audio and other control data (including MIDI and OSC) on the fly. It can render sound on hand-held and other mobile devices or, when synthesis needs require it, render sound to file.
The Python API (Application Programming Interface) to Csound is robust and available on all hardware platforms. The aim of this workshop is to provide a hands-on introduction to producing software data sonifications using a combination of these powerful, open-ended, and extensible tools. If required, it will be divided into sessions on Python and Csound individually, and then on the two in combination.
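As a rough illustration of the kind of Python-to-Csound workflow the tutorial targets, the sketch below renders a short data series as a sequence of sine tones using the ctcsound Python bindings that ship with Csound. The orchestra, the example data, and the data-to-pitch mapping are illustrative assumptions, not the tutorial's actual materials.

```python
# Minimal sketch: render a data series with Csound driven from Python via the
# ctcsound bindings. Orchestra and mapping are illustrative assumptions.
import ctcsound

orc = """
sr = 44100
ksmps = 32
nchnls = 1
0dbfs = 1

gisine ftgen 0, 0, 16384, 10, 1   ; sine wave table

instr 1
  aout oscili 0.3, p4, gisine     ; frequency comes from score field p4
  out aout
endin
"""

data = [0.1, 0.4, 0.9, 0.6, 0.2]            # example data, normalized to [0, 1]

cs = ctcsound.Csound()
cs.setOption("-odata_sonification.wav")     # render to file; use "-odac" for real-time audio
cs.compileOrc(orc)

# One half-second note per data point, pitch mapped onto 220..880 Hz.
for i, value in enumerate(data):
    freq = 220.0 + value * 660.0
    cs.readScore(f"i 1 {i * 0.5} 0.5 {freq}")
cs.readScore("e")                           # mark the end of the score

cs.start()
cs.perform()
cs.reset()
```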
The focus of this workshop will be to enable participants to learn how data sonification can be implemented in Python and Csound on a personal OS X computer. Potential participants who would like to use other configurations should contact the workshop leader, David Worrall (dworrall@colum.edu). Preparatory information will be sent to registered participants prior to the conference.
If you have any questions about workshop/tutorial programs, contact the workshop chair, Derek Brock, icad2018workshop@icad.org.