Wednesday, March 31, 2010

Teaching Children Subtractive Synthesis

As anyone who reads my blog knows, I am an avid fan of synthesizers. I suppose that when I was new to the field, I was more impressed by the further reaches of synthesis, such as additive synthesis, which held a certain fascination for me because I understood the mathematics: it made music fit into a neat mathematical realm where music, in some sense, became a giant equation.

Now, some years later, my belief about synthesis has changed, and I believe that music plays us more than we play music. I have a recent fascination with music therapy and with the doctors and others who write about how the brain processes music. I am also aware of my own abilities as a musician and composer, of how I got there, and of how music plays me: I am influenced by all types of music, from rock and Celtic to jazz and classical.

What I have realized over the years is how near music is to us. This latest blog is actually an attempt to encourage therapists to buy their families at least a rudimentary analog synthesizer. Now that I think about it, a Doepfer Dark Energy might be nice, or a MicroKorg:

http://www.doepfer.de/Dark_Energy_e.htm - Dark Energy

http://www.korg.com/Product.aspx?pd=128 - MicroKorg

The Dark Energy is better for teaching subtractive synthesis in depth, but the MicroKorg is better for teaching music. With the Dark Energy you would also need a MIDI controller of some sort.

Kids are wonderfully open to ideas. As we get older we develop significant filters, but childhood is a great time of discovery. Kids love to play Xbox and games like Guitar Hero, but I thought about it and asked: why not teach children analog synthesis, or even learn it together as a family? I also think that analog (and digital) synthesis has a lot to offer the music therapy world, but I am still working on convincing others to look outside their box (the filters I am speaking of).

But I do believe that music is very near to us; in fact, infants learn subtractive synthesis, and indeed music, from an early age. From 30 weeks a fetus can hear. And what does a fetus hear? The beating of the mother's heart at around 70 beats per minute. When the mother is at rest, the fetus hears an adagio tempo. Isn't it interesting that 40 bpm is marked "grave", a word that in Italian can also mean gravely ill? That certainly corresponds to the rate of a human heart, which would be near death at 40 bpm. I digress, but the child, before even leaving the womb, experiences an LFO: a low-frequency pulse.
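
Just to make the arithmetic concrete, here is a trivial sketch (in Python, purely my own illustration) of the tempo-to-LFO conversion:

```python
# A heart rate in beats per minute maps directly onto an LFO rate in Hz.
def bpm_to_hz(bpm: float) -> float:
    return bpm / 60.0

print(bpm_to_hz(70))  # ~1.17 Hz: the resting pulse a fetus hears
print(bpm_to_hz(40))  # ~0.67 Hz: a "grave" tempo
```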

At about 9 weeks after birth, the baby becomes aware of (and, I might add, delighted by) the world of sound around it. Is it any wonder that someone like the composer Pauline Oliveros would be interested in what she coined "deep listening", which is in a sense an attempt to return to our very early childhood and remember the wonder of the sounds we first heard?

First, the child coos and becomes aware of its vocal cords (let's call them the oscillator). Then the baby begins to use filters: baby phrases like "da da" and "ma ma" are simple exercises in filtering. Next, consonants are used (the white noise generator). The child also begins to form full words by shaping the sounds over time (the envelope).
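
For anyone who wants to see that whole chain in one place, here is a minimal sketch of a subtractive voice in Python with NumPy (my own illustration; the parameter values are arbitrary): one oscillator, one noise source, one low-pass filter, one envelope.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def voice(freq=220.0, dur=1.0, cutoff=1200.0, noise_mix=0.1):
    """A minimal subtractive voice: oscillator + noise -> filter -> envelope."""
    n = int(SR * dur)
    t = np.arange(n) / SR
    osc = 2.0 * (t * freq % 1.0) - 1.0      # sawtooth: the "vocal cords"
    noise = np.random.uniform(-1, 1, n)     # white noise: the "consonants"
    x = (1 - noise_mix) * osc + noise_mix * noise

    # One-pole low-pass filter: the "mouth" shaping the raw tone
    a = np.exp(-2.0 * np.pi * cutoff / SR)
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = (1 - a) * x[i] + a * y[i - 1]

    # Simple attack/decay envelope: shaping the sound into a "word"
    env = np.minimum(t / 0.02, 1.0) * np.exp(-3.0 * t)
    return y * env
```

Every classic subtractive patch, from a Minimoog bass to a baby's "ma ma", is some variation on those four blocks.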

OK, I could go on, but I think I have made my point. A child at a very early age learns subtractive synthesis. We don't remember how we learned language in those early formative stages, but we do learn to use the synthesizer that is the human voice. In fact, the child uses the same feedback techniques that musicians call muscle memory: learning to connect movements of the muscles with musical phrases. Children learn music (and language) much more easily at a young age because the filter cutoff in their brains is high. They are open to the many connections and discoveries that are part of the process of making music.

So, my hypothesis? That the human person, at any stage, understands subtractive synthesis, which interestingly enough remains the most common form of synthesis in synthesizers today.

So, to my music therapist friends with kids (or without): buy a synthesizer. You can get some cheap ones (well, for little more than a video game machine and a few games), and you can introduce your children and yourselves to a wonderful musical world of notes and sounds.

Friday, March 26, 2010

Why I am just not that into modulars

It's funny that for as long as I have contemplated getting a modular, I have never bought one. I have all the foogers with the exception of the MURFs (the MIDI MURF, my latest fooger purchase, is the only one I own; I don't see the need to buy the others). OK, sure, if I got myself a bunch of modules and a cabinet I could spend hours happily connecting modules, and sure, it would be fun, but the truth is that I am a musician and composer at heart. Some of my happiest moments as a musician are sitting down at an upright piano (not mine) and playing, or playing my guitars.

I have always agreed with Robert Moog that music is about the musician connecting with the instrument. I guess I am less into technically complicated instruments now and more into connecting with the instrument I am playing. For this reason, I see my studio not as a bunch of collected parts but as a whole. With the money I just spent on a Switchblade matrix router and a MOTU 828, I could have bought a nice modular, but the reason I did this (and the reason I bought an MP-201 pedal from Moog) is that I want to integrate my equipment. I want it to be easy to get to the sound rather than rejoicing over technical specs. I have changed in this regard.

So what does the Switchblade do? It lets me program all the complicated cable connections between synths and then just hit a foot pedal (I can do this with the MP-201) to change patches, the mix, crossfades, etc. I want a setup like that of a large pipe organ, with everything either in front of me or at my feet. I also like Live for the same reason: it's organic. It works with the musician rather than trying to fit the musician into a mixer paradigm of creating a musical work.
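
I don't know the Switchblade's internals, but conceptually a matrix router is just a table of source-to-destination connections saved per patch. Here is a toy sketch (the device names and the recall function are hypothetical, purely my own illustration, not the Switchblade's actual interface):

```python
# Each patch is simply a set of (source, destination) connections.
patches = {
    1: {("voyager_out", "fooger_filter_in"),
        ("fooger_filter_out", "mixer_ch1")},
    2: {("voyager_out", "mixer_ch1")},  # bypass the filter entirely
}

def recall(patch_number: int) -> None:
    """Swap the whole routing in one step, like a foot-pedal patch change."""
    for src, dst in sorted(patches[patch_number]):
        print(f"connect {src} -> {dst}")

recall(1)  # one pedal press re-patches everything at once
```

The point is that one pedal press replaces minutes of physically re-plugging cables, which is exactly the pipe-organ feel I am after.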

I also don't see any clear dividing line between composition, performance and recording. I get an idea and then I try to make it happen and ultimately create a recording. How I get there is part of the creative process.

Thursday, March 18, 2010

Dropping the ball

OK, I got on a Twitter rant this morning and I have to finish it, but to save my Twitter followers a Twitter storm, I will use the blog. Why is it that companies come up with great ideas and then drop the ball?

Example.

Korg OASYS

The whole idea of this flagship (Korg's word) synth was to create a platform that would remain state of the art through a supply of new synths in the future. Where are they, Korg? I love this idea, and frankly an expandable hardware synth is a great idea, but Korg clearly lost interest in this one.

Roland V-Synth

I think everyone has forgotten this, but when the V-Synth came out there was a promise of more V Cards beyond the D-50 and the vocal processor, right? Again, the idea was expandability. Then they just integrated these into the two-synths-in-one idea and dropped the ball.

Yamaha

Another company that has offered expansion cards, but only for part of its line. FM is still a useful form of synthesis, but Yamaha has not built on it.

VirSyn

Making an additive soft synth was a great idea. Now don't get me wrong, I have gone from being a fan of additive synthesis to a skeptic, but I don't think this need be a dead issue. VirSyn was on the cutting edge with Cube 2, then dropped the ball and now makes ho-hum effect plug-ins. Sad, but I guess that is what sells.

Native Instruments

I have become a bigger fan of this company over the years because, for the most part, with the exception of the B4 and the "Spectral Delay" (big mistake), Native Instruments continues to develop its soft synths.

I am sure there are others, but my point is: why come up with a great idea and then not develop it? Yet this pattern seems to happen again and again. I have provided examples here, but there are more.

Anyway, I just had to let that rant finish for anyone who wants to listen.

Thanks for reading.

Wednesday, March 10, 2010

Psychomanteums, Music and Music Therapy

I am currently reading Oliver Sacks's "Musicophilia", a fascinating book on music and the brain. It is interesting that this book discusses how the innate appeal of music to most people seems to defy the notion that all human traits can be traced back to an evolutionary purpose. Music, of course, can't help us survive, so it would seem to fall outside the Darwinian framework, which so often, in the realm of scientism, claims to be a universal explanation for all that is alive and, indeed, all that is human.

Recent studies on where music comes from in the brain also seem to refute this, in that music does not come from one single part of the brain and is, in fact, both a right-brained and a left-brained activity.

What we do know about music, if we speak outside the scientific realm, is that it seems to speak to our soul, to what is most human in us: not a biological collection of evolved functions, but what is human. It speaks to our hopes, our fears, our dreams, our anger, and perhaps at times our nightmares as well. Music in effect acts as a mirror on our soul. As I have said many times, we don't play music; it plays us.

So what interests me, and why I sometimes frequent music therapy web sites, is that they seem to be attuned to the healing aspects of music but also to its strong psychological effects, negative and positive. What I am interested in is whether there are universal, Jungian-type archetypes of sound. R. Murray Schafer speaks about this in "The Tuning of the World". Consider, for example, the power of the sound of the bell in many cultures. Is there anything universal about these sounds?

And if so, then where does this put the synthesizer? Before it, we were limited to fixed instrument sounds, but now the possibilities are greatly expanded. We can produce sounds that nobody has heard before.

So the synthesizer can, in a sense, act as a psychomanteum, eliciting emotional responses in us that were not possible before, or so it might seem. What I am interested in is trying to learn the hidden language of the mind so as to use the synthesizer as a tool to speak that very language.

Sunday, February 14, 2010

The Changing Face of Musical Controllers

I once suggested in a post on a board that current controllers for musical expression on synthesizers and other electronic instruments have been woefully lacking. I also suggested that musical controllers had been held back by the pitch bend and mod wheels. To my surprise, one shortsighted response was that these have been good enough for a long time and product developers should just stay the course. I recently posted about the important and sometimes negative effects of paradigms. There are many in music, and I have posted on some of them before. But the pitch bend and mod wheels seem to have dominated all keyboards.

A recent innovation that I really like is the Haken Continuum. It was a good product before, but expensive, which is why I passed over it despite my interest. But two recent changes, to the product line and to the software itself, have got me interested again. The change in the product line is the offering of a half-size controller, probably about the size of a Voyager keyboard. The price is now around $2,000 US, which brings it more into range for me. The other innovation, which makes me want one even more, is the upgrade to the firmware, which includes several physical models and apparently uses part of the Kyma engine technology.

Another aspect of the product that I really like is the ability to interface it with control voltages polyphonically. Combine this with a modular synthesizer or moogerfoogers and you have a very powerful combination. Some of the samples of this product driving the Moog Voyager are fantastic. Hearing the Voyager sounding like a violin with realistic vibrato is a delight.

What impresses me with the Continuum is that it seems to have the feel of a real instrument. All instrumental expression involves a kind of feedback loop with the brain. Let me give you a practical example. I play keyboard in part because I want access to the universe of hardware and software synths. However, apart from a brief two-year period as a child, I played guitar long before keyboards. Two of the tricks I learned on guitar are tapping notes with the right hand to create arpeggios and using my thumb to slightly mute a note and create a harmonic. I learned to do both by practicing over and over again, and soon I began to use them effectively without having to think about technique.

One day I was playing the piano and I started to play an arpeggio, which I did really rapidly. I could never do that before, and then I realized that my fingers had learned it on the guitar and did not know the difference between a guitar and a keyboard.

In both cases, I learned techniques by feeling and listening, and the feedback loop, repeated enough times, created a type of image in the brain (what I believe neurologists call mirror neurons). This is why I have an interest in neurology and music therapy as well as psychology; all come into play in music.

I also learned to use vibrato on the guitar, and I use it with most of my sustained notes. At some point I must have consciously thought about it, but now it's just part of an expressive feel that is programmed into my brain.

Now, on a keyboard, vibrato has been chained to the mod wheel (again, old paradigms). OK, this can sound decent, and I have to admit that the Articulative Phrase Synthesis of the Roland V-Synth is a step forward, but still, that neurological feedback loop is not there, because pushing a mod wheel just does not feel all that musical (at least to a guitar player).
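
To spell out what mod-wheel vibrato amounts to under the hood, here is a rough sketch (in Python; the rate and depth values are my own arbitrary choices): the wheel merely scales the depth of a fixed-rate LFO applied to pitch.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def vibrato_pitch(base_freq, wheel, rate=5.5, max_depth=0.5, dur=1.0):
    """Mod-wheel vibrato: wheel position (0..1) scales the depth (in
    semitones) of a fixed-rate sine LFO modulating the pitch."""
    t = np.arange(int(SR * dur)) / SR
    semitone_offset = wheel * max_depth * np.sin(2 * np.pi * rate * t)
    return base_freq * 2.0 ** (semitone_offset / 12.0)  # frequency curve
```

The wheel gives you one coarse dimension of control; a guitarist's finger modulates rate, depth, and onset all at once, and that is exactly the expressivity at stake here.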

Now, with a Continuum, you can create vibrato very naturally, and with the use of physical models as well, the effect is fantastic and makes the Continuum something more like a real physical instrument, but with the huge difference of having a universe of sounds available.

So it would certainly seem that the dry paradigm of the pitch bend and mod wheel is slowly being transformed by innovative products. I only hope that the trend continues.

Tuesday, February 9, 2010

Prosody and Music

Some who follow me on Twitter may wonder why I retweet posts from music therapists like Kimberly Moore and others who study the brain. The simple truth is that I am convinced that to speak about music is to speak about how our brain interprets the music it hears. In a sense, if a tree falls in the forest, it really doesn't make a sound. It will vibrate air molecules, but for sound to be heard, it must be perceived by our minds, and this is especially true of music. Music also has a great deal to do with the language processing of our brains. Not exclusively (studies have shown this), but in part.

Prosody is the study of the rhythm, stress and intonation of speech. If you have ever read Trevor Wishart's work:

http://www.trevorwishart.co.uk/

You will see that much of what he does uses concepts from linguistics (i.e., the study of language). In language, meaning is conveyed not only by the basic building blocks called "phonemes" but also by intonation, the inflection of speech, which, when you apply these principles to music and think of them in terms of synthesis, is really just the pitch bend wheel.

Certainly, genres like the blues have made extensive and specific use of note bending to create that characteristic blues feel. One example that convinced me of the importance of inflection was when I wanted to create music with a Celtic sound. I actually used a Middle Eastern instrument but bent the notes upward, a technique often used in Celtic music. Sure enough, one comment on that song was that it sounded Celtic. Yes, it was intended to, but what is surprising is that this had nothing to do with the notes, only with the way I bent the pitch.
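
For the curious, that upward scoop is easy to describe precisely. Here is a sketch (in Python; the bend time and depth are my own guesses at what I played, not measured values) of gliding into a note from below:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def bend_into_note(target_freq, semitones_below=1.0, bend_time=0.12, dur=1.0):
    """Scoop upward into a note: start below the target pitch and glide up,
    the kind of inflection common in Celtic playing."""
    t = np.arange(int(SR * dur)) / SR
    start = target_freq * 2.0 ** (-semitones_below / 12.0)
    glide = np.minimum(t / bend_time, 1.0)         # 0 -> 1 over bend_time
    return start * (target_freq / start) ** glide  # exponential pitch glide
```

The notes themselves stay the same; only the trajectory into each note changes, and that trajectory is what the brain reads as "Celtic".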

Getting back to the brain, this has a lot to do with mirror neurons. The brain mirrors what it thinks of as Celtic by hearing the pitch inflection. Perhaps this is because the brain is also hard-wired for this in terms of language.

This characteristic of how notes are perceived in terms of their intonation can be heard especially well in the AP-Synthesis of the Roland V-Synth, which can make sounds that are not all that close in timbre sound like another instrument by basically borrowing that instrument's pitch phrasing, a simple but powerful idea. If I am leaving something out about AP-Synthesis, I leave it to those who know more to comment here.

So that is all on this for now; I just wanted to blog about it while the idea is fresh in my mind.

Saturday, January 30, 2010

NAMM 2010 - Whatever happened to physical modelling?

Of all the more modern techniques of synthesis that I have seen, it's physical modelling that I feel holds the most promise for further development. Not physical modelling of analogue synthesizers, which I am not that crazy about, but physical modelling of instruments. Of all the synth makers out there, it's Yamaha that has been the greatest advocate of physical modelling, with Korg following with the OASYS (although no new physical models have come out for the OASYS). On the software side we have Applied Acoustics Systems with Tassman, String Studio and an analogue modeller. Native Instruments' Reaktor also has some physical modelling synthesizers.

I guess the reason you don't see a lot of physical modelling synthesizers is that they take a lot of work and complex mathematics. The benefit is a natural-sounding and natural-responding instrument that can be changed as easily as turning a knob, which in some ways is like having a whole warehouse full of instruments, from traditional sounds to more exotic ones.
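
The complexity has a surprisingly simple entry point, though. The classic Karplus-Strong algorithm models a plucked string with nothing more than a noise burst and a filtered delay line; here is a minimal sketch (in Python with NumPy, my own illustration):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def pluck(freq=440.0, dur=1.0, damping=0.996):
    """Karplus-Strong plucked string: a noise burst circulating through a
    delay line whose length sets the pitch."""
    n = int(SR / freq)                      # delay-line length sets the pitch
    line = np.random.uniform(-1, 1, n)      # the "pluck": a burst of noise
    out = np.empty(int(SR * dur))
    for i in range(len(out)):
        out[i] = line[i % n]
        # Averaging adjacent samples is a crude low-pass that models
        # the energy losses of a real string.
        line[i % n] = damping * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out
```

Real commercial models are vastly more sophisticated, of course, but even this toy responds like an instrument: change the damping and the "string" seems to change material.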

Of course, the actual feel of the instrument itself is lost. Controllers are getting better, but I think we are a long way from a controller as expressive as the real thing. Controllers like the Eigenharp, while a good start, don't impress me, as they are little more than MIDI triggering devices; the Eigenharp does not even support OSC. On the percussive side, the Korg Wavedrum seems to do a little better, and for violins, the K-Bow is also an improvement.

So at NAMM 2010 we find the CP series of keyboards. I like these, and if I had a lot of money and space I would get one. Clearly, there is some degree of physical modelling going on here, but how much is sampled and how much is modelled is not clear. Nice, but not really groundbreaking.

Korg also has the Wavedrum, which for percussion is clearly a step in the right direction, considering that the drum responds differently depending on whether you use brushes, sticks, mallets, etc.

But a quick look at the physical modelling offerings at winter NAMM 2010 clearly shows a reluctance on the part of developers to take physical modelling synths, soft or hard, beyond what was already out there last year or before.