Thursday, April 7, 2011

Short Review and Reflection on the Jupiter 80

It seems that what is old is new again, and Roland's new Jupiter 80 is no exception. Roland seems to be taking advantage of programming they have already done for the V-Synth and their new "SuperNatural" pianos (whatever that means). I sense that they have taken something akin to the V-Synth's "AP Synthesis" and combined it with the new "SuperNatural" modelling. It's hard to tell. This synth seems strong in two areas: layering (massive layering) and instrument sounds. OK, nice, but in a way the two are contradictions. If you want to do arranging, then sure, having a lot of layers is a plus, but then get one of the Vienna packages or another set of instrument samples and let loose. You might respond: "It's a performance instrument!" OK, sure, but then don't layer. I see the two as a contradiction. This is like having a symphony orchestra with every instrument trying to be the soloist. Perhaps I am wrong here, but just saying.

I do like the touch screen. Some have a problem with it, but having a Korg M3, I can tell you that I prefer the way Roland creates screens that look like a massive collection of knobs and sliders. By the way, that is what it would take, and why this instrument does not have them. The preset keys are nice for performance, and after all, that is what this instrument is about, except that if you really used all the layers it would drown every other member of the band with its wall-of-sound approach. Nice collection of ins and outs, including digital; a slight improvement over an M3. Effects? Yawn, clearly imported from other keyboards, nothing new here. Might I also point out that Korg M3 effects let you modulate many of the parameters, which can be very powerful. I don't see that here, but this is a cursory review. Corrections welcome. D-Beam: old Roland tech, yawn. Stereo recorder: why bother when lots of nice little portable units are available. Icing on the cake, lots of sugar.
OK, I don't mean to be so harsh here, and if someone dropped one of these in my studio for free I would have lots of fun with it, but I have come to the following conclusion: synthesis is about modulation and expression. I am thinking particularly about the Eigenharp. Not really an instrument per se, but much more than a MIDI controller. Find a more integrated way to combine a controller like that with an expressive model like AP Synthesis, or whatever is in the Jupiter 80, and it would pique my interest. That said, the new Korg Kronos is far more of a heavyweight (an OASYS for the poor). So what does the JP 80 add to the mix of lackluster offerings in the synth world? Nothing much, other than what is old is new again. Perhaps one day, something that is new will in fact truly be new, but until then. P.S. Roland: still waiting for you to come out with new expansion cards for the V-Synth. Or did you ever intend to do that? Sure, packaging old tech is a lot cheaper when you can sell it for the cost of a workstation.

Wednesday, February 2, 2011

The Hidden Agenda

As many of the music therapists who follow me know, I am always trying to encourage you to use synthesizers in your therapy. There are a few reasons why. First, I respect what music therapists do, and I feel that music can have as much healing power as medicine does, just in a different way. The second reason is that friends always share what they love. Before I was in high school I stumbled across a Morton Subotnick album, "Sidewinder". At that time I did not have the slightest idea what the difference between a Moog and a Buchla was, but years later I can talk about models and modules with barely a thought. I know the difference between a music box and a Minimoog, and many other sonic tidbits. I have knowledge that I want to share, not only with other electronic artists who understand all the technical details of what I am talking about, but with those who may be new to electronic music and synthesizers. I love to teach and share what I know. The third reason, and perhaps the most important, is that I really do believe that synthesis has a lot to bring to music therapy, for reasons that I will discuss. It is also my hope that my attempts are not annoying. I am not trying to be pushy (well, maybe a little), but it's only in the interest of trying to show some friends something new and perhaps helpful. Up until now you have been silent on my electronic music tweets, but things can change, right?

So why use synthesizers when there are drums, pianos and guitars (which, from what I have observed, are the tools of the music therapy trade)? Well, first, they are more flexible. I know that some music therapists use what I would call arranger keyboards, which are small versions of the old home organs. These can be nice, but there is a whole universe of more powerful synthesizers that can open up a sonic universe to the listener and musician.

I realize that the first limitation might be price. I can understand that, but a piano is not cheap, nor is a guitar (at least a good one). There are also software synthesizers that are sometimes only 1/10 the price of a hardware synthesizer. I have some hardware synths, but for a specific reason. For those who already own a laptop, a soft synth might be a better choice as a starter synth. I use soft synths as well. Other than a Kurzweil K2000, which I sold, my first soft synth was Native Instruments Absynth (an amazing synth, by the way, but perhaps not a good starter synth). I notice that MTs seem to be into laptops. A low-cost audio interface and what is called a DAW is the price of admission into the soft synth world. Some MTs also use GarageBand (a low-cost DAW - digital audio workstation - which comes with Macs for free). That will get you started, and before long you will be using Ableton Live :) Sort of an inside joke, but many electronic artists, including myself, use it for reasons that are way too complicated to explain in a short post. If you are really interested, some of my Twitter friends have great web sites with demos you might want to watch. Just ask and I will introduce you.

I would also suggest that there are a lot of free synths and effects out there. Absynth is not one of them, but a freeware synth can be a good introduction to synthesis. Start with subtractive if you are interested. Most synths, in fact, do some form of subtractive synthesis.
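For the curious, here is a minimal toy sketch in Python of what a subtractive patch does under the hood: start with a harmonically rich waveform, filter harmonics away, and shape the loudness with an envelope. The oscillator, filter and envelope below are my own simplified illustrations, not any particular synth's engine.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def saw(freq, dur):
    """Naive sawtooth oscillator: a harmonically rich source to subtract from."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * ((t * freq) % 1.0) - 1.0

def lowpass(signal, cutoff):
    """One-pole lowpass filter: rolls off harmonics above the cutoff."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)  # smoothing coefficient
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

def envelope(signal, attack=0.01, release=0.2):
    """Linear attack/release amplitude envelope."""
    env = np.ones(len(signal))
    a, r = int(SR * attack), int(SR * release)
    env[:a] = np.linspace(0.0, 1.0, a)
    env[-r:] = np.linspace(1.0, 0.0, r)
    return signal * env

# A 440 Hz saw, darkened by a 1 kHz lowpass, shaped by an envelope
note = envelope(lowpass(saw(440.0, 0.5), 1000.0))
```

Sweep the cutoff up or down and the tone brightens or darkens; that one parameter is most of the character of a subtractive patch, which is why it makes such a friendly starting point.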

I would also add as a caveat that mobility is an issue. I almost got a hernia moving an 88-key Korg M3 up a flight of stairs. It's a great keyboard and has piano-like action, but it is not mobile. For those who don't have roadies, size can be an issue with hardware synths, although there are some nice small ones that are a lot easier to deal with. There are also a wide variety of cheap controller keyboards out there. And when you're ready, Ableton controllers :) MTs should just let me know and I can recommend some of them.

Soft synths weigh nothing and install onto your laptop. Interfaces are small, and a low-watt amp is loud enough for most applications, so it's easy to go mobile. You can put it all in a backpack.

So you might still ask why? Simple: synthesizers are much more flexible than real instruments. They can make an incredibly wide variety of sounds, which may open up new avenues when working with a client. For example, I have Boomwhacker samples, so if I want to play Boomwhackers I can do that on my keyboard using a form of synthesis called sampling. There are also a wide variety of drum samples from different countries that will literally turn your keyboard into a drum kit.

There has also been an explosion of new controllers which could open up a musical universe to those with limited mobility, which I see as a major advantage. Body movement, and even brain waves, to mention just a few, can be used to control a synth. Products like Percussa AudioCubes provide multi-sensory feedback (see, I am learning your vocabulary). The Cubes glow and are easy to move and create music with.

I also think that various minimalistic music forms might help autistic patients. The repeating yet changing patterns of a sequencer might give them something to focus on.

So that is my agenda. I don't want it to be hidden, so forgive me if I am annoying at times trying to get you to cross the streams. My version of advocacy :)

I wish all my music therapist friends the best of success in your careers, and God bless your efforts to help people with the gift of music. If you are interested in following the rabbit into a vast landscape of new sounds, let me know and I can get you started. Where it goes from there is up to you.

P.S. - There is an Electro-Music festival in Huguenot, NY. A good time is had by all. It's lots of fun with lots of great people, and perhaps a way to follow that rabbit. A warning, however: once you get hooked on synths, well, there is no helping you after that, so take heed :) We EM types are already addicted, so it does not matter to us :)

Saturday, January 8, 2011

Why I am not anti MIDI

I am publishing this as a retraction of sorts. I want to clarify that I am not anti-MIDI, or even, to some extent, anti-quantization. However, with a lot of the new technology that is available, I fear the technology will dehumanize. The use of technology for electronic music has always been around, but many of the early works of electronic music, such as those of Karlheinz Stockhausen, were wonderful experiments in using technology to explore sound. They were not at all de-humanizing, but rather an exploration of how the human person encounters sounds and their organisation in music.

I am very much a technologist; I just fear that it becomes follow-the-leader, even if the leader is not all that talented. Ke$ha is a great example, in my mind, of the abuse of technology in the service of creating music that is little more than a form of glitter to sell an empty musical box to a subculture.

What do I like? Well, obviously from my last post the Wavedrum. But I also like some products that might surprise people like the Tenori-on. Why? Because it humanizes sequencing again by putting variations in sequences at the thumbs of the musician. It's both creative and playful.

I also like the Eigenharp although the expense takes it a bit outside my range. I see it as a very expressive electronic instrument. I like the Haken Continuum because it breaks away from the pitch bend/mod wheel domination. There are more products I could mention but I use these as solid examples.

What I try to do in my own music is go back to a more experimental time in electronic music, before the domination of the drum machine, the dance beat and the glitter, and find ways to use technology to create art that allows technology to be a tool of the human person, not the focus of the art. I even want an Octatrack, to use it in a way it is not intended for and perhaps break the de-humanizing trend.

Wednesday, December 15, 2010

A world without MIDI - The Korg Wavedrum

I recently purchased a Korg Wavedrum, which I love. Realize that I am not a drummer, but lately I have been looking for a way to add more rhythm to my songs. I also bought the Wavedrum because I consider it to be on the cutting edge of music right now precisely because it does not have MIDI (hushed sigh, people falling down from shock).


OK, you might think that I am either joking or crazy. MIDI is a great standard, right? Well, yes and no. I hope by the end of this article I can explain both how the Wavedrum works (although part of that is hidden in its proprietary chips) and why MIDI is not the be-all and end-all of electronic instruments, and why it makes at least some sense for the Wavedrum to be missing MIDI.

Anyone who plays a natural instrument, such as a guitar or violin, knows that there is a magical interaction between musician and instrument. It's a kind of feedback loop. Musical expression, at least for natural instruments, is a complex interaction between the musician and the instrument. The first part of the loop involves both the sense of touch and sight, although sight functions initially only to identify where notes are. A musician does not need sight to play an instrument, but having a sense of touch is essential.
One can even speak of an instrument being embedded in our memories. Our brains even remember the sense of touch, in what is often called muscle memory. Our muscles know where to go. A guitarist, such as myself, can barre a fret without looking to see where it is. Even the fine muscle movements for vibrato and harmonics can be learned so that they become fluid and part of the organic whole that forms the basis of musical expression.


One of the faults I see in MIDI is that music becomes quantized and loses expression. At first this quantization was time-based, but with the advent of autotune, even pitch is quantized. DAWs even intentionally quantize timing and pitch, snapping notes that fall off the grid back onto it. Any musician will tell you, however, that musical expression lies in that space in between. The bend of a note or the ever so slight variation in timing can make for magical moments.
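To make the point concrete, here is a toy sketch of what a DAW's timing quantization amounts to: every onset gets snapped to the nearest grid line, and the human "in-between" is discarded. The note list and grid size are hypothetical.

```python
def quantize(onsets, grid=0.25):
    """Snap note-onset times (in beats) to the nearest grid line."""
    return [round(t / grid) * grid for t in onsets]

# A human performance, slightly ahead of and behind the beat:
played = [0.02, 0.48, 1.07, 1.52]
print(quantize(played))  # [0.0, 0.5, 1.0, 1.5] -- the "feel" is gone
```

Everything expressive about those few-hundredths-of-a-beat deviations lives in exactly the part the `round` call throws away.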

Physical instruments also vary from most electronic instruments in that notes are not simply turned on or off (as in MIDI). In fact, MIDI in some ways handcuffed synthesizers by making the keyboard, pitch bend and mod wheel almost obligatory. There are alternative controllers, and Buchla has steered away from using a keyboard for its modulars, but for the most part this is true. For a physical system, the playing of a single note is a complex event. The note begins when energy is imparted to the system (or, in terms of physics, the system is excited). For example, a guitar string is first stretched with a pick and then, once released, begins a kind of complex dance (we call this the transient) until it comes to an equilibrium. It is in that transient that much of musical expression lies.
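For reference, this is how little information a MIDI note actually carries. A short sketch (the helper functions are mine; 0x90 and 0x80 are the standard note-on and note-off status bytes):

```python
NOTE_ON, NOTE_OFF = 0x90, 0x80  # MIDI channel voice status bytes

def note_on(channel, note, velocity):
    """The entire MIDI description of a note starting: three bytes."""
    return bytes([NOTE_ON | channel, note, velocity])

def note_off(channel, note):
    """...and of it ending: three more."""
    return bytes([NOTE_OFF | channel, note, 0])

# Middle C (note 60) on channel 0, struck at velocity 100:
print(note_on(0, 60, 100).hex())  # "903c64"
print(note_off(0, 60).hex())      # "803c00"
```

Everything that happens between those two messages, which is to say the transient, is invisible to MIDI unless extra controller data is layered on top.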
There has been a dominant belief in electronic music that the power of electronic instruments is in the waveform. I strongly disagree with this. In fact, studies show that a note cut off from its transient leaves the listener unclear as to what instrument it comes from. It is also here that musical expression lies, in all the subtle ways that first excitation of a system occurs and resolves itself into a more stable waveform.

So why is all this important? Simple. MIDI ins and outs allow a musician to recreate a performance, either through a built-in sequencer, a DAW on a laptop, or even a hardware sequencer. This is done by recording a MIDI data stream. Because of the nature of a physical instrument, one cannot re-create a performance on a physical instrument using MIDI. Sound can be sampled to create a snapshot in time, but not the myriad ways a musician is able to express themselves. For example, there are guitars that have MIDI outs or can use a MIDI interface, but this can't be used to recreate a guitar performance, only to drive a MIDI synth. The Roland V-Guitar system and Line 6 guitars take a different route, using piezo pickups to first pick up vibrations and then applying additional signal processing to create the final sound. This is what the Korg Wavedrum does as well.

Now, it's possible to put a MIDI out on a Wavedrum. Note-on messages would be easy to create, and even velocity. Drum head pressure could be mapped to pitch or aftertouch, but these could only be used to drive samples, not re-create the performance of someone using a Wavedrum.
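A hypothetical sketch of that mapping: head pressure, normalized to 0.0-1.0, becomes a MIDI channel-pressure (aftertouch) message. The mapping function is my own illustration; 0xD0 is the standard channel-pressure status byte.

```python
AFTERTOUCH = 0xD0  # MIDI channel-pressure status byte

def pressure_to_aftertouch(pressure, channel=0):
    """Map normalized head pressure (0.0-1.0) to a channel-pressure message."""
    value = max(0, min(127, int(pressure * 127)))  # clamp to the 7-bit MIDI range
    return bytes([AFTERTOUCH | channel, value])

print(pressure_to_aftertouch(0.5).hex())  # "d03f"
print(pressure_to_aftertouch(1.0).hex())  # "d07f"
```

Even so, a 7-bit value is a coarse shadow of what the piezo pickups and DSP actually respond to, which is exactly the point.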
What is the Wavedrum? It functions somewhat like a Moog Guitar might, with built-in electronics, but the Wavedrum's electronics are far more sophisticated DSP algorithms which process the signals from the piezo pickups (one for the head and two for the rim). These signals also trigger PCM samples, which are mixed with the processed signal. In the case of "double-sized" algorithms, which are more complex, the rim and head are processed together. The head also senses pressure, which changes the sound much like a real drum head, or creates more ambient sounds.

The Wavedrum sounds and responds like a real instrument. It creates that two-way feedback loop I talked about earlier, which is substantially lacking in MIDI-based electronic instruments. In many ways I would like to see at least some electronic instruments move in this direction. Sure, you can't duplicate the performance of a Wavedrum using MIDI, but you can't do it with a guitar either, and I don't see anyone giving up playing and recording guitars because they don't have MIDI outs.

What the Wavedrum has over a physical drum is that it's very, very flexible. There are 100 presets and 100 user programs. The drum is programmed through a complex set of parameters which varies for each algorithm and is based on a generalized model of certain classes of drums. The PCM samples are used to provide a more specific type of sound, corresponding to various drums and percussive instruments.
The Wavedrum is a hybrid of samples, real sound and processing, and I would love to see more instruments follow in this direction. Sure, moving outside the realm of triggered MIDI samples is scary to some who have grown up with them, but I think it's worth the bother.

Tuesday, December 14, 2010

All is not Gold

One of the aspects of music that I continue to learn each and every day is that not everything that can be done should be done. I just watched a video tonight of someone using his arm as a drum by tapping his fingers on it. I won't post it here, because I am trying to be kind, but my response to this video is: why? Ultimately the test for any music is in the hearing, right? OK, I admit that sometimes I am expressing certain concepts in my music, and it might help to know what those are, but even in these songs I ultimately want them to stand on their own. I guess my point is that I should not need a video to understand what I hear, and what I heard with the finger tapping sounded like a cheap $10 DIY drum machine.

Again, if I point the finger at myself, I admit that what I do is experimental. But I hope that I seek something of value musically. This is all subjective, but my point is that all music, and especially experimental music and instruments, requires a great deal of discernment and refinement. In other words, just because someone can do something, in many cases they would be better off not wasting their time if it's not likely to yield some musically useful results.

I will leave it at that until the next blog, when I talk about an experimental instrument that does work: the Korg Wavedrum.

Friday, December 10, 2010

Through the Auto Tune Looking Glass

Lately, I have been hearing a lot about auto tune. Rachelle Norman (a board certified music therapist) recently sent me an article in "Slate" by Jonah Weiner on Ke$ha's use of autotune:

http://www.slate.com/id/2276924/?from=rss

And I also listened to a discussion of autotune's and Melodyne's pros and cons in this recent Sonic State interview with Tara Busch, Maf Lewis and others. Sideline: some great new music from Tara as well (analogue/Moog goodness):

http://www.sonicstate.com/news/2010/12/09/podcast-sonic-talk-200-tara-busch-live/

I believe that Ke$ha's use of autotune is, like so many other dreadful applications of it, a gimmick. No doubt her hard edge and dance beat are also just formulas for effective marketing, but not necessarily good music. I often find it difficult to distinguish between what is commercial and what is jingle. Weiner talks about "ear worms" in his article. I suppose in many ways that writing jingles, or songs that attempt to use the same techniques as jingles or commercials, is a kind of art form, but it is not what I would call creative.

So let me get to the looking glass. Many forms of music use pitch bending as a very effective form of musical expression. Consider, for example, Celtic music, which often bends up to create a distinctive style, along with the scales that are used. I have used this technique in my music by simply bending the pitch wheel down before playing the note and bending up.

On the side of American musical art forms, blues not only uses pitch bending but also has a note specifically called the "blue note" that is especially appropriate to bend. Delta blues also makes use of the bottleneck slide, and many old-school country music bands use the steel slide guitar. This same slide guitar is also used effectively in rock (with a bit of distortion added) by David Gilmour in some Pink Floyd songs, by Led Zeppelin and, of course, by the Allman Brothers Band, to name just a few. There are many, many others. Also consider the use of vibrato for violin in classical music, and for guitar in many genres of music.

Pitch bend is also used almost subliminally by vocal artists from R&B to rock, but also more subtly by artists like Bob Dylan, who developed an enormously popular style partially because of his use of pitch bend in his voice.

The changing pitch of birdsong has been used to wonderful effect by composers like Olivier Messiaen, who spent an incredible amount of time carefully and artistically transcribing birdsong.

Pitch is also instrumental in human language, which is neurologically related to music in the brain. Many eastern languages, such as Mandarin, use pitch to change the meaning of a word, but just about any language uses changes in pitch to convey meaning.

My point is that the desire to quantize pitch seems contrary to what so many spend their lives perfecting in music, be it voice or an instrument. We put pitch bend wheels on synthesizers, and violins and some basses have no frets, precisely to avoid quantization. With the invention of drum machines, it also seems that everyone wants to quantize time. Sure, there is groove quantize, but isn't that just another form of quantization?

I understand the purpose of Melodyne, to make minor corrections to pitch and put the finishing touches on a mix, but the idea that a fully quantized voice is desirable escapes me when so much of the art of music thrives on playing outside the grid lines. Why so many want to autotune leaves me without a clue, unless it really is just a gimmick.

So, as I have said on Twitter, I hope autotune dies a quick death. For those who like it, don't worry: someone will find a new gimmick to sell. In the meantime, others will play their notes off the grid lines and through the looking glass.

Sunday, November 28, 2010

On Gestures, Kinect and Beyond

There has been a great deal of excitement lately about the Kinect, the motion-sensing accessory for the Xbox 360. It uses the motions of the human body, rather than a Wii controller or something like it, to control a virtual video game world. Many have naturally thought about its adaptation as a musical controller. I will hold off my applause for a while, but I wanted to express a few concerns.

One application would be a kind of 3D theremin. The idea would basically be to map parameters representing 3D space to synthesizer parameters. That's fine, and a marginal advancement over the theremin, but I would not really call it groundbreaking. An example of what I do find at least a little groundbreaking is the Eigenharp. It's still simply parameter mapping, but there is a very fine degree of control over the controllers on each pad, not to mention the 2D array. Many I have discussed this new technology with have likened it to being an instrument rather than a controller. I agree. Anything I have seen for Kinect places it more as a controller. That's OK, but it's not groundbreaking, IMHO.

So what would be? I believe that controllers will truly break ground when they move from controller to gestural controller. What do I mean by that? Simple. We all use gestures. We first use them when we learn to speak. Our own body has a very complex synthesizer built right in: a voice box that acts as an oscillator, a throat, mouth, tongue and lips that all act as filters, and muscles which control these as modulators. But rather than thinking about position in space (the current paradigm, be it Kinect, the Roland D-Beam, the theremin, etc.), all of these are defined by morphology. Confused? OK. Morphology is just a way of describing how something changes over time. This is why I am interested, fascinated, even mesmerized by developments in the study of the brain. The brain does not think in terms of coordinates in space. When someone, for example, extends their hand to us, we don't start to think, OK, what are the coordinates of their hand? No! We see gesture. The position of the hand, the open hand, the extension of the hand to the other person, a smile, the direction of our eyes: all of these gestures get processed by our brains, and our brains interpret them as a handshake.

Now here is the trick: moving beyond coordinates to gesture. Apple has done this a bit with their computers, and I even have a Sony Vaio that interprets two quick finger taps on the mouse pad as a mouse click. Now consider a conductor and how, without using any physical device other than a conductor's baton, he is able to communicate musical information to the orchestra.

This is my criticism of Kinect as a musical controller. I don't see it moving beyond an XYZ controller (yawn), because to interpret gesture takes a very quick computer and some very sophisticated programmers. Do I think we will get there? Sure, and Buchla has already done this with the Buchla Lightning, which is a rudimentary gestural controller, albeit at a high price. My problem with Buchla is that they want someone to invest a lot of money in a product without even having a manual or sufficient demos to look at first. The demos that are out there really don't explain the gestural interpretation engine, and frankly, some of them look more like the motions of an escaped mental patient than a musician.

Anyway, I have bright hopes for the future but is Kinect the answer? I don't think so but as I said, I will at least partially suspend my judgement.