Timothy Langlois
Adobe
801 N 34th St.
Seattle, WA 98103
Bitbucket
GitHub
About me

I'm currently a research scientist at Adobe. I received my Ph.D. from the Cornell Computer Science Graphics Lab, where I was advised by Doug James.

My research interests include physically-based animation and physical simulation, with a focus on acoustics. I aim to make simulating physical phenomena easier and more efficient, and to simplify the creation of physically realistic animations.

My CV (pdf)

Internships @ Adobe Research: If you are a PhD student interested in doing a research internship at Adobe and/or collaborating on a project with me, send me an e-mail including a CV and a summary of your current research interests.

Education

  • Ph.D., Computer Science
    Cornell University
    2011-2016
  • B.S., Computer Engineering
    University of Massachusetts Amherst
    2005-2009
Activities

  • Technical paper reviewer: SIGGRAPH, SIGGRAPH Asia, ACM TOG, ECCV, TVCG, IEEE VR, CGF
  • Member of Adobe Employee Community Fund grant committee
  • Volunteer with Expanding Your Horizons
    Helped organize an educational workshop for middle school students
    Spring 2012
Work

  • Research Scientist at Adobe Research
    2016-Present
  • Research Intern at Disney Research Boston
    2015 (summer)
  • Software Engineer in the MIT Lincoln Laboratory Weather Sensing Group
    Developed weather prediction algorithms and distributed real-time systems
    2009-2011
  • Software Engineering Intern at Raytheon
    Summer 2008
  • Software Engineering Intern at DEKA Research and Development
    Embedded systems development for several medical devices
    Summers and winters 2006-2008

    One of the main projects running some of my code is the DEKA Arm.
    Video coverage from 60 Minutes and IEEE Spectrum
Tech Transfer

  • Ambisonics in Premiere Pro
  • Rigid Body Physics in Character Animator
  • Colliding Particles in Character Animator
Publications

Self-Supervised Generation of Spatial Audio for 360° Video
Pedro Morgado, Nuno Vasconcelos, Timothy R. Langlois, and Oliver Wang
Neural Information Processing Systems (NIPS 2018)
We introduce an approach to convert mono audio recorded by a 360° video camera into spatial audio, a representation of the distribution of sound over the full viewing sphere. Spatial audio is an important component of immersive 360° video viewing, but spatial audio microphones are still rare in current 360° video production. Our system consists of end-to-end trainable neural networks that separate individual sound sources and localize them on the viewing sphere, conditioned on multi-modal analysis of audio and 360° video frames. We introduce several datasets, including one we filmed ourselves, and one collected in-the-wild from YouTube, consisting of 360° videos uploaded with spatial audio. During training, ground-truth spatial audio serves as self-supervision and a mixed-down mono track forms the input to our network. Using our approach, we show that it is possible to infer the spatial location of sound sources based only on 360° video and a mono audio track.
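For intuition about the output representation (ambisonic spatial audio), here is a minimal Python sketch of encoding a mono signal into first-order ambisonics given a known source direction. This illustrates only the target format, not the paper's network; the direction angles are made-up values.

import numpy as np

def encode_first_order_ambisonics(mono, azimuth, elevation):
    # ACN channel ordering (W, Y, Z, X) with SN3D normalization; the source
    # direction is assumed fixed over the whole clip for simplicity.
    w = mono                                          # omnidirectional
    y = mono * np.sin(azimuth) * np.cos(elevation)    # left-right
    z = mono * np.sin(elevation)                      # up-down
    x = mono * np.cos(azimuth) * np.cos(elevation)    # front-back
    return np.stack([w, y, z, x])

# Example: a 440 Hz tone placed 45 degrees to the left of center
sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
foa = encode_first_order_ambisonics(tone, azimuth=np.pi / 4, elevation=0.0)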
PDF Project page
@inproceedings{morgadoNIPS18,
title = {Self-Supervised Generation of Spatial Audio for 360\textdegree{} Video},
author = {Morgado, Pedro and Vasconcelos, Nuno and Langlois, Timothy R. and Wang, Oliver},
booktitle = {Neural Information Processing Systems (NIPS)},
year = {2018}
}
Video
Toward Wave-based Sound Synthesis for Computer Animation
Jui-Hsien Wang, Ante Qu, Timothy R. Langlois, and Doug L. James
ACM Transactions on Graphics (SIGGRAPH 2018)
We explore an integrated approach to sound generation that supports a wide variety of physics-based simulation models and computer-animated phenomena. Targeting high-quality offline sound synthesis, we seek to resolve animation-driven sound radiation with near-field scattering and diffraction effects. The core of our approach is a sharp-interface finite-difference time-domain (FDTD) wavesolver, with a series of supporting algorithms to handle rapidly deforming and vibrating embedded interfaces arising in physics-based animation sound. Once the solver rasterizes these interfaces, it must evaluate acceleration boundary conditions (BCs) that involve model- and phenomena-specific computations. We introduce acoustic shaders as a mechanism to abstract away these complexities, and describe a variety of implementations for computer animation: near-rigid objects with ringing and acceleration noise, deformable (finite element) models such as thin shells, bubble-based water, and virtual characters. Since time-domain wave synthesis is expensive, we only simulate pressure waves in a small region about each sound source, then estimate a far-field pressure signal. To further improve scalability beyond multi-threading, we propose a fully time-parallel sound synthesis method that is demonstrated on commodity cloud computing resources. In addition to presenting results for multiple animation phenomena (water, rigid, shells, kinematic deformers, etc.) we also propose 3D automatic dialogue replacement (3DADR) for virtual characters so that pre-recorded dialogue can include character movement, and near-field shadowing and scattering sound effects.
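To make the core numerical scheme concrete, here is a minimal 1D FDTD leapfrog update for the acoustic wave equation. This is only a sketch of the basic discretization; the paper's solver is a 3D sharp-interface variant with acoustic-shader boundary conditions, and the grid and source parameters below are arbitrary.

import numpy as np

# 1D acoustic wave equation p_tt = c^2 p_xx on a leapfrog FDTD grid.
c, dx = 343.0, 0.01            # sound speed (m/s) and grid spacing (m)
dt = 0.5 * dx / c              # time step chosen to satisfy the CFL condition
n = 400
p_prev = np.zeros(n)           # pressure at time step k-1
p = np.zeros(n)                # pressure at time step k
p[n // 2] = 1.0                # impulsive source at the domain center

for step in range(500):
    lap = np.zeros(n)
    lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2   # spatial Laplacian
    p_next = 2 * p - p_prev + (c * dt) ** 2 * lap        # leapfrog update
    p_prev, p = p, p_next                                # advance one step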
PDF Project page
@article{Wang:2018:WaveSim,
title = {Toward Wave-based Sound Synthesis for Computer Animation},
author = {Wang, Jui-Hsien and Qu, Ante and Langlois, Timothy R. and James, Doug L.},
journal = {ACM Trans. Graph.},
volume = {37},
number = {4},
year = {2018},
}
Video
Scene-Aware Audio for 360° Videos
Dingzeyu Li, Timothy R. Langlois, and Changxi Zheng
ACM Transactions on Graphics (SIGGRAPH 2018)
Although 360° cameras ease the capture of panoramic footage, it remains challenging to add realistic 360° audio that blends into the captured scene and is synchronized with the camera motion. We present a method for adding scene-aware spatial audio to 360° videos in typical indoor scenes, using only a conventional mono-channel microphone and a speaker. We observe that the late reverberation of a room's impulse response is usually diffuse spatially and directionally. Exploiting this fact, we propose a method that synthesizes the directional impulse response between any source and listening locations by combining a synthesized early reverberation part and a measured late reverberation tail. The early reverberation is simulated using a geometric acoustic simulation and then enhanced using a frequency modulation method to capture room resonances. The late reverberation is extracted from a recorded impulse response, with a carefully chosen time duration that separates out the late reverberation from the early reverberation. In our validations, we show that our synthesized spatial audio matches closely with recordings using ambisonic microphones. Lastly, we demonstrate the strength of our method in several applications.
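As a toy illustration of the hybrid impulse-response construction described above: splice a simulated early part onto a measured late tail with a short crossfade. The split time and fade length below are illustrative placeholders, not values from the paper, and both input responses are assumed sampled at the same rate and long enough to cover the crossfade.

import numpy as np

def combine_ir(early_sim, late_measured, sr, t_split=0.08, fade=0.01):
    # Crossfade from the simulated early reverberation into the measured
    # late tail at t_split seconds; t_split and fade are arbitrary here.
    n = min(len(early_sim), len(late_measured))
    i0, nf = int(t_split * sr), int(fade * sr)
    ramp = np.linspace(0.0, 1.0, nf)
    ir = np.empty(n)
    ir[:i0] = early_sim[:i0]
    ir[i0:i0 + nf] = early_sim[i0:i0 + nf] * (1 - ramp) + \
                     late_measured[i0:i0 + nf] * ramp
    ir[i0 + nf:] = late_measured[i0 + nf:n]
    return ir

# Rendering is then a convolution of the dry source with the hybrid IR:
# wet = np.convolve(dry, ir)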
PDF Project page
@article{Li:2018:360audio,
title = {Scene-Aware Audio for 360\textdegree{} Videos},
author = {Li, Dingzeyu and Langlois, Timothy R. and Zheng, Changxi},
journal = {ACM Trans. Graph.},
volume = {37},
number = {4},
year = {2018},
}
Video
Stochastic Structural Analysis for Context-Aware Design and Fabrication
Timothy R. Langlois, Ariel Shamir, Daniel Dror, Wojciech Matusik, and David I.W. Levin
ACM Transactions on Graphics (SIGGRAPH Asia 2016)
In this paper we propose failure probabilities as a semantically and mechanically meaningful measure of object fragility. We present a stochastic finite element method that exploits fast rigid body simulation and reduced-space approaches to compute spatially varying failure probabilities. We use an explicit rigid body simulation to emulate the real-world loading conditions an object might experience, including persistent and transient frictional contact, while allowing us to combine several such scenarios together. Thus, our estimates better reflect real-world failure modes than previous methods. We validate our results using a series of real-world tests. Finally, we show how to embed failure probabilities into a stress-constrained topology optimization which we use to design objects such as weight-bearing brackets and robust 3D printable objects.
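The central quantity is a spatially varying failure probability estimated over sampled loading scenarios. A schematic Monte Carlo version is below; sample_load and max_stress_per_element are hypothetical stand-ins for the rigid body simulation and the (reduced-space) finite element stress solve.

import numpy as np

def failure_probabilities(sample_load, max_stress_per_element, yield_stress,
                          n_samples=1000):
    # sample_load() draws one loading scenario (e.g. from a rigid body sim);
    # max_stress_per_element(load) returns the peak stress in each element.
    # Both are hypothetical placeholders for the actual simulation machinery.
    failures = None
    for _ in range(n_samples):
        stress = np.asarray(max_stress_per_element(sample_load()))
        if failures is None:
            failures = np.zeros(len(stress))
        failures += stress > yield_stress          # count exceedance events
    return failures / n_samples                    # per-element P(failure)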
PDF Project page
@article{Langlois:2016:Stopt,
author = {Langlois, Timothy and Shamir, Ariel and Dror, Daniel and Matusik, Wojciech and Levin, David I. W.},
title = {Stochastic Structural Analysis for Context-aware Design and Fabrication},
journal = {ACM Trans. Graph.},
issue_date = {November 2016},
volume = {35},
number = {6},
month = nov,
year = {2016},
issn = {0730-0301},
pages = {226:1--226:13},
articleno = {226},
numpages = {13},
url = {http://doi.acm.org/10.1145/2980179.2982436},
doi = {10.1145/2980179.2982436},
acmid = {2982436},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {FEM, computational design, structural analysis},
}
Video
Toward Animating Water with Complex Acoustic Bubbles
Timothy R. Langlois, Changxi Zheng, and Doug L. James
ACM Transactions on Graphics (SIGGRAPH 2016)
This paper explores methods for synthesizing physics-based bubble sounds directly from two-phase incompressible simulations of bubbly water flows. By tracking fluid-air interface geometry, we identify bubble geometry and topological changes due to splitting, merging and popping. A novel capacitance-based method is proposed that can estimate volume-mode bubble frequency changes due to bubble size, shape, and proximity to solid and air interfaces. Our acoustic transfer model is able to capture cavity resonance effects due to near-field geometry, and we also propose a fast precomputed bubble-plane model for cheap transfer evaluation. In addition, we consider a bubble forcing model that better accounts for bubble entrainment, splitting, and merging events, as well as a Helmholtz resonator model for bubble popping sounds. To overcome frequency bandwidth limitations associated with coarse resolution fluid grids, we simulate micro-bubbles in the audio domain using a power-law model of bubble populations. Finally, we present several detailed examples of audiovisual water simulations and physical experiments to validate our frequency model.
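For context on the frequency model: the classical Minnaert resonance gives the volume-mode frequency of an idealized spherical bubble, which the paper's capacitance-based method refines for shape and proximity effects. A minimal sketch follows; the damping constant in the synthesizer is an illustrative choice, not the paper's forcing model.

import numpy as np

def minnaert_frequency(radius, p0=101325.0, rho=998.0, gamma=1.4):
    # Volume-mode resonance of a spherical air bubble in water:
    # f = sqrt(3*gamma*p0/rho) / (2*pi*r). A 2 mm bubble rings near 1.6 kHz.
    return np.sqrt(3 * gamma * p0 / rho) / (2 * np.pi * radius)

def bubble_sound(radius, sr=48000, duration=0.05, damping=80.0):
    # Exponentially damped sinusoid at the Minnaert frequency; the damping
    # value is arbitrary, chosen only to produce a plausible "plink".
    t = np.arange(int(sr * duration)) / sr
    return np.exp(-damping * t) * np.sin(2 * np.pi * minnaert_frequency(radius) * t)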
PDF Project page
@article{Langlois:2016:Bubbles,
author = {Timothy R. Langlois and Changxi Zheng and Doug L. James},
title = {Toward Animating Water with Complex Acoustic Bubbles},
journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2016)},
year = {2016},
volume = {35},
number = {4},
month = jul,
doi = {10.1145/2897824.2925904},
url = {http://www.cs.cornell.edu/projects/Sound/bubbles}
}
Video
Eigenmode Compression for Modal Sound Models
Timothy R. Langlois, Steven S. An, Kelvin K. Jin, and Doug L. James
ACM Transactions on Graphics (SIGGRAPH 2014)
We propose and evaluate a method for significantly compressing modal sound models, thereby making them far more practical for audiovisual applications. The dense eigenmode matrix, needed to compute the sound model's response to contact forces, can consume tens to thousands of megabytes depending on mesh resolution and mode count. Our eigenmode compression pipeline is based on nonlinear optimization of Moving Least Squares (MLS) approximations. Enhanced compression is achieved by exploiting symmetry both within and between eigenmodes, and by adaptively assigning per-mode error levels based on human perception of the far-field pressure amplitudes. Our method provides smooth eigenmode approximations, and efficient random access. We demonstrate that, in many cases, hundredfold compression ratios can be achieved without audible degradation of the rendered sound.
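A toy 1D moving least squares evaluation, the approximation underlying the compression pipeline (the paper optimizes MLS control points over eigenmode matrices and adds symmetry exploitation; the Gaussian kernel width here is an arbitrary choice):

import numpy as np

def mls_eval(x_query, x_ctrl, y_ctrl, h=0.1):
    # Moving least squares with a linear basis and Gaussian weights: at each
    # query point, solve a locally weighted least-squares fit and evaluate it.
    out = np.empty(len(x_query))
    for i, xq in enumerate(x_query):
        w = np.exp(-((x_ctrl - xq) / h) ** 2)               # locality weights
        B = np.stack([np.ones_like(x_ctrl), x_ctrl - xq], axis=1)
        A = B.T @ (w[:, None] * B)                          # B^T W B
        b = B.T @ (w * y_ctrl)                              # B^T W y
        out[i] = np.linalg.solve(A, b)[0]                   # local fit at xq
    return out

# Example: approximate a mode-shape-like signal from a few control samples
x_ctrl = np.linspace(0, 1, 20)
y_ctrl = np.sin(4 * np.pi * x_ctrl)
approx = mls_eval(np.linspace(0, 1, 200), x_ctrl, y_ctrl)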
PDF Project page
@article{Langlois:2014:EMC,
author = {Timothy R. Langlois and Steven S. An and Kelvin K. Jin and Doug L. James},
title = {Eigenmode Compression for Modal Sound Models},
journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014)},
year = {2014},
volume = {33},
number = {4},
month = aug,
doi = {10.1145/2601097.2601177},
url = {http://www.cs.cornell.edu/projects/Sound/modec}
}
Video
Inverse-Foley Animation: Synchronizing rigid-body motions to sound
Timothy R. Langlois and Doug L. James
ACM Transactions on Graphics (SIGGRAPH 2014)
In this paper, we introduce Inverse-Foley Animation, a technique for optimizing rigid-body animations so that contact events are synchronized with input sound events. A precomputed database of randomly sampled rigid-body contact events is used to build a contact-event graph, which can be searched to determine a plausible sequence of contact events synchronized with the input sound's events. To more easily find motions with matching contact times, we allow transitions between simulated contact events using a motion blending formulation based on modified contact impulses. We fine-tune synchronization by slightly retiming ballistic motions. Given a sound, our system can synthesize synchronized motions using graphs built with hundreds of thousands of precomputed motions, and millions of contact events. Our system is easy to use, and has been used to plan motions for hundreds of sounds and dozens of rigid-body models.
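A toy version of the synchronization search: best-first search over a small contact-event graph for a path whose contact times line up with target sound-event times. The graph and duration dictionaries and the tolerance are hypothetical placeholders for the paper's precomputed database.

import heapq

def plan_contacts(graph, duration, start, targets, tol=0.05):
    # graph[node] lists successor contact events; duration[(a, b)] is the
    # ballistic flight time between contacts a and b. States are ordered by
    # accumulated timing error, so the best-matching path pops first.
    heap = [(0.0, 0.0, start, [start])]
    while heap:
        err, t, node, path = heapq.heappop(heap)
        k = len(path) - 1                    # number of sound events matched
        if k == len(targets):
            return path, err
        for nxt in graph.get(node, []):
            t2 = t + duration[(node, nxt)]
            if abs(t2 - targets[k]) <= tol:  # contact must land near the event
                heapq.heappush(heap, (err + abs(t2 - targets[k]), t2, nxt,
                                      path + [nxt]))
    return None, float("inf")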
PDF Project page
@article{Langlois:2014:IFA,
author = {Timothy R. Langlois and Doug L. James},
title = {Inverse-Foley Animation: Synchronizing rigid-body motions to sound},
journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014)},
year = {2014},
volume = {33},
number = {4},
month = aug,
doi = {10.1145/2601097.2601178},
url = {http://www.cs.cornell.edu/projects/Sound/ifa}
}
Video
Protein Identification using Receptor Arrays and Mass Spectrometry
Timothy R. Langlois, Ramgopal R. Mettu, and Richard W. Vachet
Advances in Computational Biology (2010)
Mass spectrometry is one of the main tools for protein identification in complex mixtures. When the sequence of the protein is known, we can check to see if the known mass distribution of peptides for a given protein is present in the recorded mass distribution of the mixture being analyzed. Unfortunately, this general approach suffers from high false-positive rates, since in a complex mixture, the likelihood that we will observe any particular mass distribution is high, whether or not the protein of interest is in the mixture. In this paper, we propose a scoring methodology and algorithm for protein identification that make use of a new experimental technique, which we call receptor arrays, for separating a mixture based on another differentiating property of peptides called isoelectric point (pI). We perform extensive simulation experiments on several genomes and show that additional information about peptides can achieve an average 30% reduction in false-positive rates over existing methods, while achieving very high true-positive identification rates.
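A schematic version of the scoring idea: a candidate protein's peptides must match observed measurements in both mass and isoelectric point, which suppresses the chance matches that mass alone admits. The tolerances are illustrative placeholders, not values from the paper.

def match_score(peptides, observations, mass_tol=0.5, pi_tol=0.2):
    # peptides and observations are (mass, pI) pairs; score a candidate
    # protein by the fraction of its peptides with a jointly matching
    # observation. Tolerances here are arbitrary illustrative choices.
    hits = sum(
        any(abs(mass - m) <= mass_tol and abs(pi - p) <= pi_tol
            for m, p in observations)
        for mass, pi in peptides)
    return hits / len(peptides)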
@incollection{Langlois:2010:IEP,
year={2010},
isbn={978-1-4419-5912-6},
booktitle={Advances in Computational Biology},
volume={680},
series={Advances in Experimental Medicine and Biology},
editor={Arabnia, Hamid R.},
doi={10.1007/978-1-4419-5913-3_39},
title={Protein Identification Using Receptor Arrays and Mass Spectrometry},
url={http://dx.doi.org/10.1007/978-1-4419-5913-3_39},
publisher={Springer New York},
keywords={Receptor; Array; Mass; Spectrometry; Protein; Identification; Isoelectric; Point},
author={Langlois, Timothy R. and Vachet, Richard W. and Mettu, Ramgopal R.},
pages={343--351},
language={English}
}
Project page
Projects

Pool Table Analyzer

My group's Senior Design Project at UMass. We designed and built a system that watched a game of pool through webcams, suggested the best shot to the player, and helped them aim the cue stick, all in real time. More details here.
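A sketch of what the ball-detection step might look like with OpenCV's Hough circle transform; the file name and all detector parameters below are rough placeholders, not what we actually used.

import cv2
import numpy as np

frame = cv2.imread("table_frame.jpg")            # placeholder webcam frame
gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=100, param2=30, minRadius=8, maxRadius=20)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)  # outline each ball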