Activities
The Real-time Applications Team conducts research and development in real-time computer technology for digital signal processing and machine learning applied to music, sound and gesture.

Over the years, Ircam has developed numerous hardware (4X, ISPW) and software environments (Max, FTS, jMax) for real-time music processing. Today the team's principal concern remains live performance with computers, which represents a large field of research and development in man-machine interaction.

The applications addressed by the team cover the direct interaction of a performer with the computer as well as the transformation of elements traditionally present in music, dance and theater: musical instruments, the stage, accessories. Digital techniques can augment or transform the performer's expression - the sound of an instrument, the voice, gestures, memory - as well as create a dialogue between the artist and the machine.

In addition to topics concerning live performance itself, such as gestural interfaces or real-time audio analysis and synthesis, the team also works on tools for the composition and representation of works involving real-time processes and interaction. Further fields of application are found in the larger context of the audio and multimedia industry as well as in education.
Interest Areas and Associated Projects
The team continues its work on score following, a technique for performer-computer interaction developed at Ircam that enables the electronic part of a piece to be synchronized with a live musician's performance. Research is currently centered on the integration of motion capture techniques as well as techniques specific to speech and the voice. The project is also opening up to freer musical forms, beyond the realm of the traditional score.
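Although the team's follower itself is built on hidden Markov models (see the publications below), the underlying alignment idea can be illustrated offline with dynamic time warping. The following is a minimal, hypothetical Python/NumPy sketch, not Ircam's system: it aligns two toy feature sequences and recovers the frame-to-frame correspondence a follower would use to trigger the electronic part.

```python
import numpy as np

def dtw_path(score_feats, perf_feats):
    """Align two feature sequences (frames x dims) with dynamic time
    warping and return the optimal frame-to-frame correspondence."""
    n, m = len(score_feats), len(perf_feats)
    # Pairwise Euclidean distances between score and performance frames.
    cost = np.linalg.norm(score_feats[:, None, :] - perf_feats[None, :, :], axis=2)
    # Accumulated cost with the classic three-step recursion.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1],  # match
                acc[i - 1, j],      # score frame skipped
                acc[i, j - 1])      # performance frame repeated
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy example: the "performance" is a time-stretched copy of the "score".
score = np.array([[0.0], [1.0], [2.0], [3.0]])
perf = np.array([[0.0], [0.1], [1.0], [2.0], [2.1], [3.0]])
print(dtw_path(score, perf))
```

A real follower works incrementally, of course; the point here is only the alignment cost recursion.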
As part of the research project into technologies for live performance, the team works in close collaboration with the Department of Creation on motion capture and analysis technologies for diverse applications such as dance, theater and multimedia installations. This work also finds application in the development of augmented instruments, in which sensors are integrated into traditional musical instruments, with the aim of expanding our understanding of the relationship between gesture and sound and refining the real-time interaction between the performer and the computer.

The FTM software platform developed by the team unites certain elements of the development work mentioned above within a library of extensions for Max/MSP. Based on jMax, FTM enables the graphical and algorithmic manipulation of complex musical data (sequences, timbre descriptions, graphs) integrated within Max's programming paradigm.

FTM is the foundation of new Max/MSP object libraries such as Gabor and MnM. Gabor is a toolbox for timbre processing incorporating multiple sound representations and analysis/synthesis techniques (granular, additive, LPC, PSOLA, FOF, etc.). The MnM library supplies tools for the analysis and recognition of musical, sonic and gestural forms (PCA, HMMs, neural networks, etc.).
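As an illustration of the kind of granular processing Gabor provides, here is a short Python/NumPy sketch, entirely independent of FTM's actual implementation: it reads Hann-windowed grains from a source signal and overlap-adds them at a different rate, stretching time without transposing pitch. The grain and hop sizes are arbitrary choices for the demo.

```python
import numpy as np

def granular_stretch(x, stretch=1.5, grain=1024, hop=256):
    """Naive granular time-stretch: read grains from x at a rate of
    hop/stretch samples, overlap-add them at hop samples. Hann windows
    keep the grain joins smooth (overlaps sum close to a constant)."""
    win = np.hanning(grain)
    out = np.zeros(int(len(x) * stretch) + grain)
    out_pos = 0
    while True:
        in_pos = int(out_pos / stretch)  # read position in the source
        if in_pos + grain > len(x) or out_pos + grain > len(out):
            break
        out[out_pos:out_pos + grain] += x[in_pos:in_pos + grain] * win
        out_pos += hop
    return out

# Example: stretch one second of a 440 Hz tone to roughly 1.5 seconds.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
stretched = granular_stretch(tone, stretch=1.5)
print(len(tone) / sr, "->", round(len(stretched) / sr, 2), "seconds")
```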
- Gesture Analysis (a minimal feature-extraction sketch follows this list)
- Score following and alignment
- SDIF Format (Sound Description Interchange Format)
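A common first step in gesture analysis is turning raw sensor samples into low-level features. The sketch below is a generic, hypothetical example, not the team's gesture follower: it smooths a sampled position trajectory, differentiates it into a velocity magnitude, and flags the frames where the movement is active. The sample rate and activity threshold are assumptions.

```python
import numpy as np

def gesture_features(pos, sr=200, smooth=9, thresh=0.05):
    """From raw position samples (frames x dims), compute a velocity
    magnitude and segment the frames where the gesture is 'active'."""
    # Moving-average smoothing to suppress sensor noise.
    kernel = np.ones(smooth) / smooth
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, pos)
    # Finite-difference velocity, scaled to units per second.
    vel = np.gradient(smoothed, axis=0) * sr
    speed = np.linalg.norm(vel, axis=1)
    active = speed > thresh * speed.max()  # crude activity detection
    return speed, active

# Example: a synthetic 2-D stroke that moves for a second, then holds still.
t = np.linspace(0.0, 2.0, 400)
x = np.where(t < 1.0, t, 1.0)
pos = np.stack([x, 0.5 * x], axis=1)
speed, active = gesture_features(pos)
print("active frames:", int(active.sum()), "of", len(active))
```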
National and European Research Projects
- Caspar: Cultural, Artistic and Scientific knowledge for Preservation, Access and Retrieval
- Conceptmove: Together, artists imagine a model for an interactive digital performance
- i-Maestro
- Semantic HIFI: Browsing, listening, interacting, performing, sharing on future HIFI systems
- Windset
Realizations
- Additive and HMM: Analysis/Synthesis Technologies
- Coney Island
- ELLE and the Voice: Virtual reality installation by Catherine Ikam and Louis-François Fléri, music by Pierre Charvet
- PAF: Formant Synthesis (a generic source-filter sketch follows this list)
- PAGS/SINOLA Vocal Synthesis
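Formant synthesis shapes a spectrally rich source with resonances placed at vowel formant frequencies. The sketch below is a generic source-filter illustration, not Ircam's PAF algorithm: an impulse train at the fundamental is fed through three parallel two-pole resonators whose center frequencies and bandwidths roughly imitate an 'a' vowel (all values are assumptions for the demo).

```python
import numpy as np

def resonator(x, freq, bw, sr=44100):
    """Two-pole resonant filter with poles at radius exp(-pi*bw/sr)
    and angle 2*pi*freq/sr, run as a direct recursion."""
    r = np.exp(-np.pi * bw / sr)
    a1, a2 = 2.0 * r * np.cos(2.0 * np.pi * freq / sr), -r * r
    gain = 1.0 - r * r  # rough level compensation
    y = np.zeros_like(x)
    y1 = y2 = 0.0
    for n in range(len(x)):
        y[n] = gain * x[n] + a1 * y1 + a2 * y2
        y2, y1 = y1, y[n]
    return y

def vowel(f0=110.0, dur=0.5, sr=44100):
    """Impulse-train source filtered by three parallel formant resonators;
    the (frequency, bandwidth) pairs loosely imitate an 'a' vowel."""
    src = np.zeros(int(dur * sr))
    src[::int(sr / f0)] = 1.0  # glottal-like impulse train at f0
    formants = [(700.0, 90.0), (1220.0, 110.0), (2600.0, 160.0)]
    return sum(resonator(src, f, b, sr) for f, b in formants)

sig = vowel()
print("samples:", len(sig), "peak:", round(float(np.abs(sig).max()), 3))
```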
Specialist Areas
- Real-time, interactivity, signal processing, motion capture, gesture modeling, learning mechanisms.
Participants
- Head Researcher: Norbert Schnell
- Researcher and Developer: Diemo Schwarz
- Researcher: Frédéric Bevilacqua
- Developer: Riccardo Borghesi
Collaborations
- CNMAT (University of California, Berkeley), McGill University (Montreal, Canada), University of Padua (Italy), LAM (University of Paris 6), Cycling '74.
Publications
Conference Proceedings Articles
F. Bevilacqua, N. Rasamimanana, E. Fléty, S. Lemouton, F. Baschet « The augmented violin project: research, composition and performance report », 6th International Conference on New Interfaces for Musical Expression (NIME 06), Paris, 2006
A. Cont « Realtime Audio to Score Alignment for Polyphonic Music Instruments Using Sparse Non-negative constraints and Hierarchical HMMs », IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, 2006
A. Cont « Realtime Multiple Pitch Observation using Sparse Non-negative Constraints », International Symposium on Music Information Retrieval (ISMIR), Victoria, 2006
A. Cont, S. Dubnov, G. Assayag « A framework for Anticipatory Machine Improvisation and Style Imitation », Anticipatory Behavior in Adaptive Learning Systems (ABiALS), Rome, 2006
F. Bevilacqua, R. Müller, N. Schnell « MnM: a Max/MSP mapping toolbox », International Conference on New Interfaces for Musical Expression (NIME), Vancouver, 2005
F. Bevilacqua, R. Müller « A Gesture follower for performing arts », Gesture Workshop, 2005
A. Cont, D. Schwarz, N. Schnell « Training Ircam's Score Follower », IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, 2005
N. Rasamimanana, E. Fléty, F. Bevilacqua « Gesture Analysis of Violin Bow Strokes (abstract) », Gesture Workshop, 2005
N. Schnell, D. Schwarz « Gabor, Multi-Representation Real-Time Analysis/Synthesis », COST-G6 Conference on Digital Audio Effects (DAFx), Madrid, 2005
N. Schnell, R. Borghesi, D. Schwarz, F. Bevilacqua, R. Müller « FTM — Complex data structures for Max », International Computer Music Conference (ICMC), Barcelona, 2005
D. Schwarz, A. Cont, N. Schnell « From Boulez to Ballads: Training Ircam's Score Follower », International Computer Music Conference (ICMC), Barcelona, 2005
D. Schwarz « Current Research in Concatenative Sound Synthesis », International Computer Music Conference (ICMC), Barcelona, 2005
F. Bevilacqua, E. Fléty « Captation et analyse du mouvement pour l'interaction entre danse et musique », Rencontres Musicales Pluridisciplinaires - le corps & la musique, Lyon, 2004
A. Cont, D. Schwarz, N. Schnell « Training IRCAM's Score Follower », AAAI Symposium 2004 Style and Meaning in Language, Art, Music, and Design, Washington, 2004
E. Fléty, N. Leroy, J. Ravarini, F. Bevilacqua « Versatile sensor acquisition system utilizing Network Technology », International Conference on New Interfaces for Musical Expression (NIME), Hamamatsu, 2004
D. Schwarz, N. Orio, N. Schnell « Robust Polyphonic Midi Score Following with Hidden Markov Models », International Computer Music Conference (ICMC), Miami, 2004
N. Orio, S. Lemouton, D. Schwarz, N. Schnell « Score Following: State of the Art and New Developments », New Interfaces for Musical Expression (NIME), Montreal, 2003
PhD Thesis
D. Schwarz « Data-Driven Concatenative Sound Synthesis », Université Paris 6 - Pierre et Marie Curie, 2004