Here you can add a link to your research or artistic project and a short description of how you use FTM in it.
Sound Synthesis
Real-time corpus-based concatenative synthesis
CataRT, by Diemo Schwarz and many collaborators, is a sound synthesis system completely based on FTM that analyses and plays back snippets of sound controlled by their sonic characteristics. FTM is used here to store various information about the sound grains, together with a dynamic number of the waveforms themselves, in a mat table of dicts holding the grain info and fmat sound matrices. Gabor is used for granular synthesis with unlimited polyphony, and for pitch and spectral analysis. MnM is used for statistical analysis (mean values of descriptors) and lookup of matching grains.
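To make the grain-lookup idea concrete, here is a minimal Python sketch (not FTM or CataRT code; all names and descriptor values are illustrative) of descriptor-based grain selection: each grain is stored with its mean descriptor values, and playback picks the grain whose descriptors lie closest to a requested target.

import numpy as np

# Hypothetical corpus: one row per grain, columns = mean descriptor values
# (e.g. pitch in Hz, loudness in dB, spectral centroid in Hz).
descriptors = np.array([
    [220.0, -12.0, 1500.0],   # grain 0
    [440.0,  -6.0, 2800.0],   # grain 1
    [330.0,  -9.0, 2100.0],   # grain 2
])
grains = [np.zeros(1024), np.zeros(1024), np.zeros(1024)]  # placeholder waveforms

def select_grain(target, weights=None):
    """Return the grain whose mean descriptors are nearest to the target vector."""
    w = np.ones(descriptors.shape[1]) if weights is None else np.asarray(weights)
    dist = np.sqrt((((descriptors - target) ** 2) * w).sum(axis=1))
    return grains[int(np.argmin(dist))]

# Ask for a grain near 400 Hz, -7 dB, centroid 2500 Hz.
grain = select_grain(np.array([400.0, -7.0, 2500.0]))

In CataRT itself this selection is weighted and interactive (targets come from a 2D descriptor map or external control), but the underlying principle is the same nearest-match lookup over stored descriptor statistics.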
Music and movement research
We use FTM in the FourMs lab for many different applications:
- Storing and playing back recordings of motion capture data from electromagnetic trackers (Polhemus), video analysis (Musical Gestures Toolbox), infrared systems (Optitrack and Qualisys), and various types of accelerometer-based systems. This is mainly done using the SDIF recording/playback functionality in FTM, with a focus on developing a set of GDIF descriptors (a simplified sketch of such time-tagged descriptor streams follows this list).
- Sound synthesis and control, using GABOR and CataRT.
- Machine learning using the MnM objects.
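As a rough illustration of the time-tagged descriptor streams mentioned in the first item above, here is a minimal Python sketch; it does not use the actual SDIF format or FTM objects, and the class and field names are hypothetical.

import time

class MotionRecorder:
    """Toy recorder for time-tagged frames of motion descriptors (e.g. x, y, z position)."""

    def __init__(self):
        self.frames = []          # list of (timestamp, values) pairs

    def record(self, values, t=None):
        """Append one frame of descriptor values with a timestamp."""
        self.frames.append((time.time() if t is None else t, list(values)))

    def playback(self):
        """Yield frames in order, sleeping so the original timing is reproduced."""
        if not self.frames:
            return
        start = self.frames[0][0]
        wall_start = time.time()
        for t, values in self.frames:
            delay = (t - start) - (time.time() - wall_start)
            if delay > 0:
                time.sleep(delay)
            yield t, values

# Example: record three fake position frames at 100 Hz, then replay them.
rec = MotionRecorder()
for i in range(3):
    rec.record([0.1 * i, 0.0, 1.0], t=i * 0.01)
for t, v in rec.playback():
    print(t, v)

The real setup stores such frames in SDIF files via FTM's recording/playback objects, which also handle the stream and frame typing needed for GDIF descriptors.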