Neuroplanets

Conceived and directed by Novi_sad

• Sound analysis on recordings, transformation of the results and application of data by Novi_sad

• Produced by Novi_sad

• Original audio from:

» BJ Nilsen | Sweden [www.bjnilsen.com]

» Daniel Menche | U.S.A. [www.esophagus.com/htdb/menche]

» Francisco López | Spain [www.franciscolopez.net]

» Mika Vainio | Finland [www.phinnweb.org/vainio]

The tracks were manipulated and produced using the following information and data:

• Track 1 : BJ Nilsen

» Audio analysis on: Winds, sandstorms and dust devils, which are small tornadoes caused by local weather patterns on Mars

» Applied data from: Functional anatomy of schizophrenic patients with auditory hallucinations

• Track 2 : Daniel Menche

» Audio analysis on: Ghostly planetary plasma waves from NASA

» Applied data from: Patterns of music agnosia associated with middle cerebral artery infarcts

• Track 3 : Francisco López

» Audio analysis on: Whistle of ultra-cold liquid helium-3 that changes volume relative to the North Pole and Earth’s rotation

» Applied data from: Rightward and leftward bisection biases in spatial neglect

• Track 4 : Mika Vainio

» Audio analysis on: Decametric noise storms and radio storms on Jupiter

» Applied data from: Neuroimaging with bipolar disorder and children with serious emotional disturbances

[Image: NEURO_PIC_1]

‘Neuroplanets’ is an audio project which explores the aesthetics of information in sound. Initially, I worked on tracks commissioned from other artists, transferring to them the results of sound analysis of extremely rare sonic phenomena on other planets. I then manipulated these tracks by applying numerical/quantitative data and statistical elements from neuroscience research into serious diseases. My aim was to ‘visualize’ in sound the characteristics of these diseases and their impact on human nature.

The analysis of sound involves methods for extracting, or automatically structuring, the diverse kinds of information carried by a signal, such as the fundamental frequency and the spectral evolution that determine the pitch and timbre of a perceived sound. The methods used are based on signal processing, statistical analysis, information theory, machine learning and recognition techniques, but also on knowledge of auditory perception and of the acoustics of sound-producing systems.
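As a minimal sketch of this kind of descriptor extraction (my own illustration, not the project's actual tool chain), the fundamental frequency and the spectral centroid of a signal can be estimated from its FFT:

```python
import numpy as np

def fundamental_and_centroid(signal, sample_rate):
    """Estimate the fundamental frequency (strongest FFT bin) and the
    spectral centroid (a rough proxy for perceived brightness)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    fundamental = freqs[np.argmax(spectrum)]                 # strongest partial
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)   # amplitude-weighted mean frequency
    return fundamental, centroid

# Usage: a one-second 440 Hz test tone
sr = 44100
t = np.arange(sr) / sr
f0, centroid = fundamental_and_centroid(np.sin(2 * np.pi * 440 * t), sr)
```

For a pure tone both descriptors sit at the tone's frequency; for a richer recording the centroid drifts upward as more high-frequency energy appears.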

Contemporary neuroscience suggests the existence of fundamental algorithms by which all sensory transduction is translated into an intrinsic, brain-specific code. One of the main goals of ‘Neuroplanets’ was to directly simulate these codes within the human audible range.

At the first stage:

I worked with FFT, spectrogram, sonogram and convolution analysis on recordings of:

– Winds, sandstorms and dust devils, which are small tornadoes caused by local weather patterns on Mars

– Ghostly planetary plasma waves from NASA

– Whistle of ultra-cold liquid helium-3 that changes volume relative to the North Pole and Earth’s rotation. Sound doesn’t travel in a vacuum; the researchers couldn’t hear its very low frequency, so they sped it up 40,000 times.

– Decametric noise storms and radio storms on Jupiter

[Image: NEURO_PIC_3]

At the second stage:

I used the analysis information to manipulate the tracks and bring them to an equivalent sonic level, following the analysis described above and its results. If, for instance, the analysis showed that the highest frequency of a ‘decametric noise storm on Jupiter’ was 12335 Hz, I manipulated the track to peak at exactly that frequency. The musical result and the compositional aspect are highly important to me, so I worked towards discovering which specific frequency, or other sound element, makes a piece sound delicate, solid and dynamic. If sonogram analysis showed intense activity in the left part of the waveform, I transferred this to the track. Additionally, I worked with time, simulating the timeline of the recordings’ analysis on the tracks to scale. When I realized, for instance, that the sound of a wind on Mars grows denser at 4’ 34”, I applied this timing to the track, either in its original or in a scaled form.

* These are just typical and simplified examples; the project and its concept are much more complex.
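A crude sketch of the frequency-matching idea above, assuming nothing about the software actually used: boost a narrow band around a target frequency (here the 12335 Hz figure mentioned in the text) so that it becomes the spectral peak of the track:

```python
import numpy as np

def peak_at(track, sample_rate, target_hz, width_hz=20.0, boost=4.0):
    """Make `target_hz` the spectral peak of `track` by boosting a narrow
    band around it in the frequency domain, then resynthesizing."""
    spectrum = np.fft.rfft(track)
    freqs = np.fft.rfftfreq(len(track), d=1.0 / sample_rate)
    band = np.abs(freqs - target_hz) < width_hz
    gain = np.ones_like(freqs)
    # Scale the band so its strongest bin ends up `boost` times the
    # current global maximum of the spectrum.
    gain[band] = boost * np.max(np.abs(spectrum)) / (np.max(np.abs(spectrum[band])) + 1e-12)
    return np.fft.irfft(spectrum * gain, n=len(track))

# Usage: force one second of noise to peak at 12335 Hz
rng = np.random.default_rng(0)
shaped = peak_at(rng.standard_normal(44100), 44100, 12335.0)
```

The band width and boost factor are arbitrary illustration values; a musical version of this step would shape the band far more carefully.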

At the third stage:

I used statistical elements from neuroscience to manipulate the sound pieces, following percentages and numerical data. Studies of the following conditions were analyzed and used:

– Functional anatomy of schizophrenic patients with auditory hallucinations

– Rightward and leftward bisection biases in spatial neglect

– Patterns of music agnosia associated with middle cerebral artery infarcts

– Neuroimaging with bipolar disorder and children with serious emotional disturbances
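As a purely hypothetical illustration of this stage (the percentage below is invented, not taken from any of the studies listed), a reported rightward bisection bias could be mapped onto the stereo balance of a track:

```python
import numpy as np

def apply_bias(stereo, right_bias):
    """Weight the (left, right) channels of a stereo buffer by a bias
    fraction, keeping the total gain constant."""
    weights = np.array([1.0 - right_bias, right_bias]) * 2.0
    return stereo * weights

# Usage: a made-up 62% rightward bias applied to a stereo buffer
stereo = np.ones((44100, 2))   # columns: (left, right)
biased = apply_bias(stereo, right_bias=0.62)
```

Any numerical figure from a study — a percentage, a lesion-volume ratio, an infarct timeline — could drive a sound parameter in the same mechanical way.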

[Image: NEURO_PIC_2]

Posted September 5, 2011