Some who follow me on Twitter may wonder why I retweet posts from music therapists like Kimberly Moore and others who study the brain. The simple truth is that I am convinced that to speak about music is to speak about how our brains interpret what they hear. In a sense, if a tree falls in the forest it really doesn't make a sound: it vibrates air molecules, but for sound to be heard it must be perceived by a mind, and this is especially true of music. Music also has a great deal to do with the language processing of our brains; not exclusively (studies have shown this), but in part.
Prosody is the study of the rhythm, stress and intonation of speech. If you have ever read Trevor Wishart's work:
http://www.trevorwishart.co.uk/
you will see that much of what he does draws on concepts from linguistics. In language, meaning is conveyed not only by the basic building blocks called "phonemes" but also by the intonation and inflection of speech, which, when you think about it in terms of synthesis, is really just the pitch bend wheel applied to music.
Certainly genres like the blues have made extensive and very specific use of note bending to create the feel we associate with the blues. One example that convinced me of the importance of inflection was when I wanted to create music with a Celtic sound. I actually used a Middle Eastern instrument but bent the notes upward, a technique often used in Celtic music. Surprisingly, one comment on that song was that it sounded Celtic. Yes, it was intended to, but what is surprising is that this had nothing to do with the notes themselves and everything to do with the way I bent the pitch.
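For those who like to tinker, here is a minimal sketch of that kind of inflection in code, assuming only numpy; the frequencies, bend depth and bend time are made-up illustration values, not anything from the track I described.

```python
# Render a plain tone and the same tone with an upward bend into the target
# pitch, i.e. the kind of inflection discussed above. Illustration values only.
import numpy as np

SR = 44100  # sample rate in Hz

def tone(freq_hz, dur_s=1.0, bend_semitones=0.0, bend_time_s=0.15):
    """Sine tone whose pitch glides up by `bend_semitones` over the first
    `bend_time_s` seconds, then holds the target frequency."""
    t = np.arange(int(SR * dur_s)) / SR
    # pitch offset in semitones: starts below the target, rises to 0
    bend = np.where(t < bend_time_s,
                    -bend_semitones * (1.0 - t / bend_time_s),
                    0.0)
    freq = freq_hz * 2.0 ** (bend / 12.0)
    # integrate instantaneous frequency to get phase
    phase = 2.0 * np.pi * np.cumsum(freq) / SR
    return np.sin(phase)

straight = tone(440.0)                    # no inflection
bent = tone(440.0, bend_semitones=2.0)    # slides up a whole tone into the note
```

Play the two side by side and the "same" note takes on a very different character, which is really all I am claiming here.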
Getting back to the brain, this has a lot to do with mirror neurons. The brain mirrors what it thinks of as Celtic just from hearing the pitch. Perhaps this is because the brain is also hard-wired for this in terms of language.
This characteristic of how notes are perceived in terms of their intonation can be heard especially well in the AP-Synthesis of the Roland V-Synth, which can make sounds that are not all that close in timbre sound like another instrument by borrowing that instrument's pitch phrasing, a simple but powerful idea. If I am leaving something out about AP-Synthesis, I leave it to those who know more to comment here.
So that's all on this for now, but I just wanted to blog about it while the idea is fresh in my mind.
Tuesday, February 9, 2010
Saturday, January 30, 2010
NAMM 2010 - Whatever happened to physical modelling?
Of all the more modern techniques of synthesis that I have seen, it's physical modelling that I feel holds the most promise for further development. Not physical modelling of analogue synthesizers, which I am not that crazy about, but physical modelling of acoustic instruments. Of all the synth makers out there, it's Yamaha that has been the greatest advocate of physical modelling, with Korg following with the OASYS (although no new physical models have come out of the OASYS). On the software side we have Applied Acoustic Systems with Tassman and String Studio, plus an analogue modeller, and Native Instruments Reaktor also has some physical modelling synthesizers.
I guess the reason you don't see a lot of physical modelling synthesizers is that they take a lot of work and some complex mathematics. The benefit is a natural-sounding, naturally responding instrument that can be changed as easily as turning a knob, which in some ways is like having a whole warehouse full of instruments, from traditional sounds to more exotic ones.
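To make the "turning a knob" point concrete, here is a tiny sketch of the simplest physical model there is, the Karplus-Strong plucked string, assuming numpy. It is nowhere near what the OASYS or Tassman do, but the `damping` parameter really does behave like one knob that moves you through a whole family of string sounds.

```python
# Karplus-Strong: a delay line with a damped feedback loop models a plucked
# string. `damping` moves the sound from bright and plucky toward dull and muted.
import numpy as np

SR = 44100

def pluck(freq_hz, dur_s=1.0, damping=0.996):
    delay = int(SR / freq_hz)                 # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, delay)     # burst of noise = the "pluck"
    out = np.zeros(int(SR * dur_s))
    for i in range(len(out)):
        out[i] = buf[i % delay]
        # average adjacent samples and feed back: a crude lowpass plus loss
        nxt = 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
        buf[i % delay] = damping * nxt
    return out

note = pluck(220.0, damping=0.99)   # lower `damping` for a more muted string
```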
Of course, the actual feel of the instrument itself is lost. Controllers are getting better, but I think we are a long way from a controller that is as expressive as the real thing. Controllers like the Eigenharp, while a good start, don't impress me, as they are little more than MIDI triggering devices; the Eigenharp does not even support OSC. On the percussive side, the Korg Wavedrum seems to do a little better, and for violins the K-Bow is also an improvement.
So at NAMM 2010 we find the Yamaha CP series of keyboards. I like these, and if I had a lot of money and space I would get one. Clearly there is some degree of physical modelling going on here, but how much is sampled and how much is modelled is not clear. Nice, but not really groundbreaking.
Korg also has the Wavedrum, which for percussion is clearly a step in the right direction, considering that the drum responds differently depending on whether you use brushes, sticks, mallets, etc.
But a quick look at the physical modelling offerings at winter NAMM 2010 clearly shows a reluctance on the part of developers to push physical modelling synths, soft or hard, beyond what was already out there last year or the year before.
Friday, January 29, 2010
NAMM 2010 - Where's the Beef - Part I
I hate to be negative about things musical, but I have to say that I was expecting a bit more innovation out of NAMM 2010.
First, let's take a look back at past NAMMs and promises made and not kept:
The Roland V-Synth
This synth has caught my eye many times, and I must admit that at times I have thought about expanding my already cluttered collection of synths to include at least the rack-mount version. But let's look at NAMM. Roland released the VP-7. The VP-7 looks like a nice little box and it was somewhat attractive to me, but let's face it, Roland is repackaging its vocoder technology along with the vocal side of the V-Synth. And let's not forget that that side of the V-Synth was a card, and that more of those cards were supposed to come along and expand the V-Synth. Hmm, those seemed to be strangely missing from NAMM.
One of the reasons I liked the V-Synth was that it was expandable. That, the phrasing technology and the Time Trip pad were all nice innovations. Together with some older COSM technology they made this a nice package with a lot to offer, but the expandability was key, and that seems to have gone by the wayside.
Dave Smith
More repackaging. Following in the footsteps of other mini synths, the Mopho now has a keyboard. OK, nice to have if one is looking for a synth on a budget, but can you really call this new technology?
Korg
How the mighty flagships have fallen. OK, Korg introduced some new stuff, most notably a better Kaossilator (lest anyone forget, still old technology) and the Wavedrum (old technology returned). What was missing were new upgrades to the expandable (?) flagship synth, the OASYS. To think that I thought of investing huge dollars in this synth, hoping it would be a do-it-all instrument that would continue to expand. I thought better of it and got myself a real analogue synth (the Moog Voyager) and Moogerfoogers, and now have a system that is almost modular and real (no emulators for me). For FM I have the FM8 (it's digital technology anyway, so why buy a hardware synth for it), and I am hoping Applied Acoustic Systems might come out with some new physical modellers. Granted, at the moment the OASYS is better for this, but of course I have never played one.
More later. I have to go make the donuts, but Part II will follow.
Thursday, January 28, 2010
Prosody, Music, Pitch and Language
I know that I blog a lot about Tangerine Dream, but one of the reasons I listen to them so much is to pick up on their techniques. There are many I could mention, but the one that fascinates me is their use of pitch to create often complex phrasing of individual notes. One of the writers I have also learned a great deal from is Trevor Wishart, who speaks of the relationship between music and speech. Change in pitch, or intonation, is used a great deal in speech. In English and many Western languages it expresses additional information not necessarily contained in the words themselves. In tonal Eastern languages it is part of the meaning of individual words, which is often what makes it difficult for a Westerner to learn languages such as Mandarin.
Phrasing through variation in pitch is also significant in music itself. As a guitarist I will often bend notes and use vibrato. More recently, in the music of bands like Tangerine Dream, I have found that the fluid movement of pitch within a phrase can add much to an electronic composition. One of the unfortunate consequences of musical notation is that it has confined the phrasing of pitch within the conventions of musical style. This is not entirely true, since there are certainly elements within the notational system to tell the musician to phrase passages or individual notes a certain way, but it is a limited system.
Many 20th-century composers expanded on notational systems and invented their own. Karlheinz Stockhausen certainly comes to mind, but there are many others. I also think of Olivier Messiaen, who took the phrasing of birdsong and tried to express it as rapid runs of individual notes, in effect transcribing the songs of many birds.
However, I often wonder what benefit might be derived from actually recording the variation in pitch in a performance and using it to phrase other sounds. In some ways this is what the Roland V-Synth does, which I always thought was a nice idea that could be used more. Morton Subotnick also did this with what he called ghost tracks.
It also seems to me that using analogue synthesizers to explore the phrasing of pitch is largely unexplored territory. Using control voltages rather than MIDI CCs creates a far more robust environment, not subject to the limitations of finicky CPUs.
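Here is a rough sketch of that idea, assuming numpy; the pitch contour is faked with a slide plus some vibrato standing in for what a real pitch tracker would extract from a recording, so treat it as an illustration of the concept rather than anything the V-Synth actually does internally.

```python
# Take a recorded pitch trajectory from one performance and use it to phrase a
# different oscillator. `source_contour_hz` is a stand-in for a tracked contour.
import numpy as np

SR = 44100
CONTROL_RATE = 100   # pitch contour sampled 100 times per second
dur_s = 2.0

ct = np.arange(int(CONTROL_RATE * dur_s)) / CONTROL_RATE
# pretend contour: slide from 200 Hz to 220 Hz with a little vibrato on top
source_contour_hz = np.linspace(200.0, 220.0, ct.size) + 3.0 * np.sin(2 * np.pi * 5.0 * ct)

def rephrase(contour_hz, target_base_hz):
    """Impose the shape of `contour_hz` on a new oscillator at `target_base_hz`."""
    # express the contour as a ratio around its own mean, then re-centre it
    ratio = contour_hz / contour_hz.mean()
    freq_ctl = target_base_hz * ratio
    # upsample the control-rate contour to audio rate
    t_audio = np.arange(int(SR * dur_s)) / SR
    freq = np.interp(t_audio, ct, freq_ctl)
    phase = 2 * np.pi * np.cumsum(freq) / SR
    return np.sin(phase)

rephrased = rephrase(source_contour_hz, target_base_hz=440.0)
```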
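One concrete limitation worth seeing in numbers: a standard MIDI CC carries only 7 bits, so a smooth pitch sweep collapses into at most 128 steps, while a control voltage is effectively continuous. A tiny sketch, assuming numpy, with illustrative values only:

```python
# Compare an idealized continuous control sweep with the same sweep squeezed
# through a 7-bit MIDI CC.
import numpy as np

steps = 1000
sweep = np.linspace(0.0, 1.0, steps)          # idealized continuous CV, 0..1
cc = np.round(sweep * 127) / 127.0            # 7-bit quantized version

print("distinct CV values: ", len(np.unique(sweep)))         # ~1000
print("distinct CC values: ", len(np.unique(cc)))            # at most 128
print("worst-case step size:", np.max(np.abs(np.diff(cc))))  # audible zipper steps
```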
Anyway, just a few thoughts on this to perhaps spur further conversation.
Tuesday, January 26, 2010
Psychomanteums
I recently listened to Tangerine Dream's Zeit for about the tenth time. I like Tangerine Dream, but this album is unique, and I also believe it to be by far their best. For some reason, listening to it set off a strong stream of consciousness. The slowly evolving drones and sounds that seem to come out of some otherworldly sonic landscape made me think of the word psychomanteum. The word names a powerful archetype of sorts, celebrated in many cultures, in literature and in movies. It is the other side of the mirror, the flip side, and also perhaps a reflection of our subconscious brought out by the effective use of mirrors or, as I will suggest in perhaps a series of posts, by sound and music.
A brief introduction can be found on Wikipedia, but no doubt a quick Google search will yield a plethora of interesting sites on the subject.
http://en.wikipedia.org/wiki/Psychomanteum
One site I found features a drone, and I believe there is a strong connection here. Tibetan bowls, for example, are a kind of sonic psychomanteum, inducing meditative states. Many cultures use rhythms and drones to create a mirror image of the world we live in and, perhaps, a landscape for exploring the inner world.
The psychomanteum is found in places you might not expect, such as Tolkien's Middle-earth, where Galadriel's mirror reveals to Frodo what might occur should he fail in his mission. Even the Bible speaks of it:
"In front of the throne was something that resembled a sea of glass like crystal. " - Rev 4:6
Much is also said of it in Greek mythology and literature.
Modern psychology sometimes makes use of a mirror in a darkened room in much the same way.
Anyway, I thought the idea was worth exploring musically, to see the connection between the psychomanteum and music. More to come, so stay tuned...
Wednesday, January 13, 2010
False Expectations - The Problem with Surround Sound
This post is NOT plagiarized, but I must admit that another blog inspired me to write it. I can't even remember which one it was, but many thanks for inspiring this tirade against too many speakers.
Lately there has been an explosion of surround systems and production programs that produce surround sound recordings. Beyond that, there have been musical installations that use several speakers, and in fact this practice goes back to people like Alvin Lucier in the early days of electronic music. While interesting, ultimately the artist has to get his sound to the listener. So the relevant question, perhaps the most relevant of all, is: where is the listener?
Now perhaps some see their listeners in front of great surround systems, distracted by nothing, sitting down to a night of critical listening. I am a musician and composer, and I listen to most of my music on the stereo system that came with my car. To be honest, a great Bach fugue or even great Tangerine Dream sounds pretty good on any decent system. I don't use an MP3 player (yes, I know I am backward), but how many people listen to music while jogging or on the job somewhere? So sure, you can compose a work to be played on the most high-tech system in the world with speakers hanging from the rafters, but that is not what the listener has.
So sure, surround is interesting, but I will stick with stereo; it's worked for decades now, so why spoil a good thing?
A Brief History of Sound - Analogue Vs. Digital
A while back I ordered several vactrols from an electronics company. Many may not know what these are, but those familiar with Buchla synthesizers will probably recognize the name. Another name for them is optocoupler: simply put, a vactrol is an LED combined with a photocell, encased in a shell so that no outside light gets in. The practical reason for them is to isolate a higher-voltage circuit from a lower-voltage one. The musical reason is that vactrols sing; they tend to chirp when used to control filters and have a rather distinctive sound.
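A back-of-the-envelope way to hear why that is: the photocell lags the LED, so a hard gate comes out the other side as a smoothed, asymmetric control signal. The sketch below, assuming numpy, models that lag as a one-pole smoother with separate attack and decay times; the time constants are guesses for illustration, not measured vactrol behaviour.

```python
# A sharp gate driving a simulated vactrol becomes a lazy, asymmetric control
# signal; sweep a filter cutoff with it instead of the raw gate and it "sings".
import numpy as np

SR = 1000  # control-rate samples per second, enough for this illustration

def vactrol_lag(led_drive, attack_s=0.005, decay_s=0.100):
    """One-pole smoother with asymmetric rise/fall, applied to an LED drive signal."""
    a_up = np.exp(-1.0 / (SR * attack_s))
    a_dn = np.exp(-1.0 / (SR * decay_s))
    out = np.zeros_like(led_drive)
    y = 0.0
    for i, x in enumerate(led_drive):
        a = a_up if x > y else a_dn       # fast-ish rise, slow fall
        y = a * y + (1.0 - a) * x
        out[i] = y
    return out

gate = np.concatenate([np.ones(200), np.zeros(800)])   # 200 ms on, 800 ms off
cutoff_ctl = vactrol_lag(gate)   # smoothed curve to drive a filter cutoff
```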
I used to be fascinated by waveforms because I fell into what I call the Fourier trap and believed that all sound is represented by a Fourier series. If you don't know what that is, don't worry; it's a staple of additive synthesis, but it describes static waveforms. The truth is, sound is dynamic. When we hear an instrument, much of what we really identify it by is the attack transient, the first part of a note. Some might think of a vactrol as only a switch, LED on/LED off, but it's that period of less than a second in between that makes all the difference.
Stephen Hawking made physics popular with "A Brief History of Time"; in many ways, Curtis Roads' "Microsound" does this for sound. Through it we realize the importance of the attack transient. Analogue synthesizers lend themselves to shaping this transient because they control sound with voltages, and voltages respond essentially instantaneously. Simply put, analogue circuits respond quickly and can create all sorts of interesting transients. This is often done with pitch; think of how musicians use variations of pitch in the early transient of a note to add expression.
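To illustrate the "Fourier trap" point, here is a small sketch, assuming numpy, that renders the same three harmonics twice: once as a frozen Fourier-series waveform and once with the upper partials fading in over the first tens of milliseconds. The harmonic amplitudes and rise times are arbitrary illustration values; the point is only that the two signals differ exactly where the ear cares most, in the attack.

```python
# Same harmonic recipe, rendered static vs. with an evolving attack transient.
import numpy as np

SR = 44100
dur_s = 1.0
t = np.arange(int(SR * dur_s)) / SR
f0 = 220.0
amps = [1.0, 0.5, 0.3]   # fundamental plus two harmonics

# static Fourier-series version: the waveform never changes shape
static = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t) for k, a in enumerate(amps))

# dynamic version: higher harmonics fade in over ~50-150 ms, so the attack evolves
dynamic = np.zeros_like(t)
for k, a in enumerate(amps):
    rise = np.minimum(t / (0.05 * (k + 1)), 1.0)   # slower rise for higher partials
    dynamic += a * rise * np.sin(2 * np.pi * f0 * (k + 1) * t)
```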
Digital electronics sometimes have trouble with transients because they require CPU cycles where analogue circuits don't. Until we have much faster software, I think this is where analogue circuits have a distinct advantage, and when we are speaking of a brief history of sound, analogue is king.