
Here you can add a link to your research or artistic project and a short note on how you use FTM in it.

Real-time corpus-based concatenative synthesis

CataRT, by Diemo Schwarz and many collaborators, is a sound synthesis system built entirely on FTM that analyses snippets of sound and plays them back, controlled by their sonic characteristics. FTM is used here to store the descriptor information of the sound grains, together with a dynamically varying number of waveforms, in a mat table of dicts holding the grain info and fmat sound matrices. Gabor is used for granular synthesis with unlimited polyphony, and for pitch and spectral analysis. MnM is used for statistical analysis (mean values of descriptors) and for the lookup of matching grains.
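The mat, dict, and fmat structures mentioned above are FTM objects inside Max; the following is only a minimal, language-neutral sketch in Python (not the FTM or CataRT API) of the underlying idea: keep a corpus of grains with their descriptor vectors and select the grain closest to a descriptor target. The Grain class, descriptor names, and nearest_grain function are illustrative assumptions.

<pre>
# Conceptual sketch (plain Python, not the FTM/Max API): a corpus of grains,
# each holding its waveform and a descriptor vector, selected by nearest
# descriptor match -- roughly the role of the mat-of-dicts corpus and the
# MnM-based lookup described above. Names and descriptor choices are illustrative.
import numpy as np

class Grain:
    def __init__(self, waveform, descriptors):
        self.waveform = np.asarray(waveform)        # the sound snippet (like an fmat)
        self.descriptors = np.asarray(descriptors)  # e.g. [pitch, loudness, brightness]

corpus = []  # plays the role of the mat table of dicts

def add_grain(waveform, descriptors):
    corpus.append(Grain(waveform, descriptors))

def mean_descriptors():
    # statistical summary of the corpus (what the mean-value analysis provides)
    return np.mean([g.descriptors for g in corpus], axis=0)

def nearest_grain(target):
    # pick the grain whose descriptors are closest to the target vector
    target = np.asarray(target)
    return min(corpus, key=lambda g: np.linalg.norm(g.descriptors - target))

# Usage: fill the corpus, then drive playback from a descriptor target
add_grain(np.random.randn(1024), [440.0, -12.0, 0.3])
add_grain(np.random.randn(1024), [220.0, -6.0, 0.7])
grain = nearest_grain([430.0, -10.0, 0.4])  # would be handed to a granular player
</pre>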

MindBox Sound Installation

The MindBox project uses FTM & Co for the real-time interactive audio processing of the installation.

Music and movement research @ FourMs

We use FTM in the FourMs lab at the University of Oslo for many different applications:

  • Storing and playing back recordings of motion capture data from electromagnetic trackers (Polhemus), video analysis (Musical Gestures Toolbox), infrared systems (Optitrack and Qualisys), and various types of accelerometer-based systems. This is mainly done using the SDIF recording/playback functionality in FTM, with a focus on developing a set of GDIF descriptors (a minimal sketch of this kind of timestamped recording and playback follows this list).
  • Sound synthesis and control, using Gabor and CataRT.
  • Machine learning using the MnM objects.
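The sketch below is not the SDIF/GDIF format itself; it is only a minimal Python analogue, under the assumption of a simple frame layout (timestamp plus per-marker 3D positions), of recording timestamped motion-capture frames and reading them back by time, which is the role FTM's SDIF recording/playback plays for the data sources listed above.

<pre>
# Conceptual sketch (plain Python, not SDIF/GDIF): record timestamped
# motion-capture frames and read them back by time.
# The frame layout (time + per-marker 3D positions) is an illustrative assumption.
from bisect import bisect_right

class MocapRecording:
    def __init__(self):
        self.times = []   # frame timestamps in seconds, ascending
        self.frames = []  # one dict of marker -> (x, y, z) per timestamp

    def record(self, t, markers):
        self.times.append(t)
        self.frames.append(markers)

    def frame_at(self, t):
        # return the most recent frame at or before time t (simple hold playback)
        i = bisect_right(self.times, t) - 1
        return self.frames[i] if i >= 0 else None

# Usage
rec = MocapRecording()
rec.record(0.00, {"hand_r": (0.10, 1.20, 0.30)})
rec.record(0.01, {"hand_r": (0.11, 1.21, 0.31)})
print(rec.frame_at(0.005))  # -> frame recorded at t = 0.00
</pre>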