Latest revision as of 09:18, 11 June 2010
Here you can put a link to your research or artistic project and a short word on how you use FTM in it.
Real-Time Corpus-Based Concatenative Synthesis
CataRT, by Diemo Schwarz and many collaborators, is a sound synthesis system built entirely on FTM that analyses and plays back snippets of sound controlled by their sonic characteristics. FTM is used here to store information about the sound grains, and a dynamic number of the waveforms themselves, in a mat table of dicts holding descriptor info and fmat sound matrices. Gabor is used for granular synthesis with unlimited polyphony and for pitch and spectral analysis. MnM is used for statistical analysis (mean values of descriptors) and for looking up matching grains.
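The core idea of the descriptor-driven lookup can be illustrated outside of FTM. The following is a minimal Python sketch, not CataRT's actual implementation: the corpus is a table of dicts, each holding descriptor means plus the grain's samples (standing in for an fmat), and playback selects the grain closest to a target point in descriptor space. All names here are hypothetical.

```python
# Hypothetical sketch of corpus-based concatenative lookup (not CataRT code):
# a corpus stored as a table of dicts, each with descriptor means and the
# grain's waveform, queried for the nearest match in descriptor space.

def make_grain(pitch, loudness, samples):
    """One corpus entry: descriptor means plus the waveform itself."""
    return {"pitch": pitch, "loudness": loudness, "samples": samples}

def nearest_grain(corpus, target):
    """Return the grain minimizing squared distance over the target's keys."""
    def dist(grain):
        return sum((grain[key] - target[key]) ** 2 for key in target)
    return min(corpus, key=dist)

corpus = [
    make_grain(220.0, 0.3, [0.0, 0.1, 0.2]),
    make_grain(440.0, 0.8, [0.0, 0.5, 0.9]),
    make_grain(330.0, 0.5, [0.0, 0.3, 0.4]),
]

# Ask for a grain near 435 Hz at moderate loudness.
best = nearest_grain(corpus, {"pitch": 435.0, "loudness": 0.7})
print(best["pitch"])  # the 440 Hz grain is the closest match
```

In CataRT this selection runs in real time over the whole corpus; the sketch only shows the shape of the data and the matching step.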
MindBox Sound Installation
The MindBox project uses FTM & Co for the real-time interactive audio processing of the installation.
Music and Movement Research @ FourMs
We use FTM in the FourMs lab at the University of Oslo for many different applications:
- Storing and playing back recordings of motion capture data from electromagnetic trackers (Polhemus), video analysis (Musical Gestures Toolbox), infrared systems (Optitrack and Qualisys), and various types of accelerometer-based systems. This is mainly done using the SDIF recording/playback functionality in FTM, with a focus on developing a set of GDIF descriptors.
- Sound synthesis and control, using Gabor and CataRT.
- Machine learning using the MnM objects.
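The record/playback workflow in the first point can be sketched in plain Python. This is a hypothetical illustration of the pattern, not FTM's SDIF objects or the GDIF format: a take is a sequence of time-stamped frames, and playback streams back the frames inside a requested time window.

```python
# Hypothetical sketch (not FTM's SDIF API): recording time-stamped
# motion-capture frames and playing back a time window of the take.

def record(stream):
    """Collect (time, frame) pairs from a live stream into a take."""
    return [(t, frame) for t, frame in stream]

def play_back(take, start, end):
    """Yield the frames whose timestamps fall within [start, end]."""
    for t, frame in take:
        if start <= t <= end:
            yield t, frame

# Fake 100 Hz position data from a single marker.
take = record((i / 100.0, (0.0, 0.0, i * 0.01)) for i in range(10))
frames = list(play_back(take, 0.02, 0.05))
print(len(frames))  # frames at t = 0.02, 0.03, 0.04, 0.05
```

Real SDIF/GDIF streams carry typed frames and matrices per time tag, but the time-indexed record-then-scrub structure is the same.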