Electronics - possibilities/limitations


Eirik

Hey!

I want to compose a piece for string quartet and electronics. The setup I have in mind is four mics, one on each string instrument, connected to a computer that processes the sound and sends it out through two loudspeakers, one to the left and one to the right of the quartet (stereo).

I want to use effects like excessive reverb (and the complete absence of reverb) and "chord generation" (the violinist plays one note, and out come four different notes, derived from the input note but transposed live).

The problem is that I don't have enough technical insight to know the detailed possibilities and limitations. I have some ideas in my head, but I'm not sure whether they can be realized with most of today's studio software, and I would love to hear about more techniques that can be used (e.g. sound looping; how can I combine that with other effects?).

Any tips?

And a "side question" - if the piece is performed in a very wet concert venue, I'd prefer the reverb to be damped as much as possible. Is this somehow possible by setting up soft-textured "walls" in particular places?


One of the most frequently used and versatile programs for live electronics is Max/MSP (or its free variant Pd). It's a sort of graphical programming language, capable of pretty much anything in live electronics, but it requires some study (and, in the case of Max/MSP, a couple of hundred dollars to buy it). I'm sure there are many other programs capable of the things you mentioned, but generally speaking, the easier such a program is to use without training, the less versatile and open it is.

(Another free and -very- powerful program for live electronics is SuperCollider, but this one is even harder to use: it has no graphical user interface, and its language takes some getting used to.)

Technically, all the things you mentioned shouldn't be too hard to program for anyone who has the required software and the knowledge of how to use it. The question is how much you really want to get into this, and how much time you have. If you want good results and don't have much time (or desire) to learn how to realize such things, I'd recommend looking around for someone who is experienced in such things and asking them to do it for you.

If you have time and don't mind the effort, you can of course get yourself one of these programs, work through tutorials and help files until you understand it, or see whether there's a class you can attend that teaches it. This will let you do your own things as precisely as you want, but it takes some time.

If what you want is limited to a few "effects", you can also go an even simpler route and use standard studio software (sequencers, or tools designed more for live use, such as Ableton Live or Logic Mainstage) with the appropriate plugins. Reverberation, harmonizers (e.g. making four notes out of one, like you mentioned), loops, etc. should all be possible this way without too much effort. You'll never have quite the same control over the details, however, as in a programming environment like Max/Pd/SuperCollider.

And you have to be extremely careful about overusing effects like this. Some electronic effects might sound cool the first time you hear them, but to someone who has heard a lot of electronic music, the typical stock plugins of a sequencer can sound terribly old and overused. And live electronics can be extremely annoying when used without much consideration, just for "cool effects". So if you go that route, with prebuilt plugins, I'd try to use the electronics very deliberately and sparingly, and not just rely on the plugins to "do cool stuff on their own".

Also, one note on your "chord generation" thingy: recording one note and transposing it to different pitches works by first analysing the sound (using a so-called Fast Fourier Transform) and then resynthesising it at a different pitch. The problem with Fourier transforms is that they involve a tradeoff between sound quality and delay time, so if you want all four voices to enter without delay, you have to accept mediocre sound quality. Additionally, transposing an instrument's pitch always changes its perceived timbre (because the "formants" no longer sit in the same place), and this gets more extreme the larger the transposition. Sadly, this effect is rather strong with string instruments: you can transpose a violin by a minor or major second fairly convincingly, but a fifth already sounds quite unnatural, and an octave is something else entirely.
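To make this concrete, here's a minimal SuperCollider sketch of such a four-voice harmonizer. The interval choices are just an example, and I'm using PitchShift, SuperCollider's built-in granular pitch shifter, whose windowSize argument exposes exactly this quality-versus-delay tradeoff:

```
// Sketch: four-voice harmonizer on the first input channel.
// The semitone intervals are only an example; 0 keeps the played pitch.
(
{
    var in = SoundIn.ar(0);
    var intervals = [-3, 0, 4, 7];
    var voices = intervals.collect { |st|
        // windowSize (0.2 s here) trades delay and smearing against quality
        PitchShift.ar(in, windowSize: 0.2, pitchRatio: st.midiratio)
    };
    Splay.ar(voices) * 0.5   // spread the voices across the stereo field
}.play;
)
```

Lowering windowSize reduces the delay but makes the grains audibly shorter and rougher, which is the same compromise an FFT-based resynthesis forces on you.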

I've used such harmonizer effects in a piece for viola, guitar and live electronics, and it was always quite hard to adjust the mechanism so it wouldn't sound too "electronic". And you always need to be aware that if you actually want it to sound like a string instrument, you must keep the transposition intervals relatively small.

(Of course you can also use large transposition intervals to get an entirely different timbre, if that's what you want. Transposing an instrument by an octave can have interesting results.)

You also have to be aware that even if you place the mics close to the instruments, you'll never get perfectly clean signals. You'll always have some cello sound in the violin mic, etc., which will be transposed (or otherwise changed) by your electronics along with the violin. The only ways to avoid this are to use electronic instruments, or to use the effect only when the other players rest. You could also use pickups instead of mics, but in my experience this produces rather poor sound quality, so I'd rather use mics. Depending on your music, though, it might not even matter much if some cello sound gets into the viola mic.

As for your side question: With a good program it should be no problem to make the reverberation characteristics of your electronics highly customizable, depending on where you play it.
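For instance, in SuperCollider the wet/dry balance could be exposed as synth arguments and adjusted per venue. A rough sketch, where FreeVerb is a built-in reverb UGen and the argument values are placeholders:

```
(
SynthDef(\venueVerb, { |mix = 0.3, room = 0.6, damp = 0.5|
    var in = SoundIn.ar([0, 1]);
    // mix = 0 is fully dry; raise it in dry venues, lower it in wet ones
    Out.ar(0, FreeVerb.ar(in, mix, room, damp));
}).add;
)

x = Synth(\venueVerb);
x.set(\mix, 0.1);  // e.g. for an already reverberant hall
```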


Thank you for your answers!

This has certainly helped me a lot, I think! I've already tested some of the harmonization concept using Logic Pro (we have a nice recording studio at my school), but that program isn't really meant for live performance. I tried playing a tone on my oboe, and the transposed tone was certainly different!

Here's one of my ideas:

http://img255.imageshack.us/my.php?image=openingdd9.jpg

I want the strings to play tremolo, and the black notes to be added artificially by the computer program. The last measure should be recorded, and then looped while the players stop playing, then the loop gradually fades out while the musicians do something else.

BTW, I hope I have the time and resources to learn the programming language - I'll try SuperCollider. I'm already familiar with some conventional languages, so maybe there's some similarity. I've even heard of the FFT - I'm considering getting a degree in mathematics while studying composition (I may start studying composition next year).

And about the sound leaking... the setup I need must have as little leakage as possible, which means pickups are the best solution. Anyone here who has experience with string pickups? :P


I want the strings to play tremolo, and the black notes to be added artificially by the computer program. The last measure should be recorded, and then looped while the players stop playing, then the loop gradually fades out while the musicians do something else.

That should all be quite possible. The main question is how you want to control the live electronics. Your most typical options are: have a separate person control the whole electronics (a rather safe and stable solution); give the performers some means to start the different electronic "effects" (foot pedals, for example); or build a fully automatic setup that either just does certain things after a certain time (which I wouldn't recommend in your case, since the electronics are so specifically tied into the score) or follows the performance on its own, taking cues such as the loudness, pitch and rhythm of the mic input and reacting to them (which is not easy to get running consistently without errors).
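The record-loop-fade idea itself might be sketched in SuperCollider roughly like this. The buffer length, fade time and mono input are placeholder assumptions; RecordBuf and PlayBuf are the standard buffer UGens:

```
// Assumes the server s is booted; the buffer holds one measure,
// here taken to be 2 seconds of mono audio.
b = Buffer.alloc(s, s.sampleRate * 2, 1);

// Record the live input into the buffer once, then free the recorder.
{ RecordBuf.ar(SoundIn.ar(0), b, loop: 0, doneAction: 2); Silent.ar }.play;

// Later: loop the buffer and fade it out over 10 seconds.
(
{
    PlayBuf.ar(1, b, loop: 1) * Line.kr(1, 0, 10, doneAction: 2)
}.play;
)
```

In a real piece you'd trigger the recording and the fade from whatever control setup you choose (a separate operator, a pedal, etc.) rather than evaluating lines by hand.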

As for this particular example: you might consider having the strings play senza vibrato, since it sounds somewhat unnatural when four voices have an exactly identical vibrato, which of course happens when you transpose a voice with vibrato to different pitches. Unless you don't mind it sounding a bit unnatural, or the vibrato is important to you to make the "string character" of the instruments clearer (which may get lost a bit in the transpositions).

I also see that you always placed the actually played note in the middle of the chords. I suppose that's to keep the transposition intervals as small as possible, which is generally a good idea. In my experience, however, notes that are electronically transposed downwards tend to be drowned out by the direct sound of the instrument more easily than notes transposed upwards - especially in a live situation, where you never have total balance between the loudness of the live instruments and the speakers. That's why in such cases I generally let the live instruments play the lowest note of the chord, to create a solid base, and build the FFT transpositions on top of it.

But you don't really have to decide such things right now anyway. You have to try things out for yourself and see what works, as every live-electronics setup is different, and what works in one case doesn't work in another. (Which also means that live electronics are always "risky", as there's so much that can go wrong in a performance: your setup might do something unexpected, you might have forgotten to bring enough cables, the room acoustics aren't as expected, you get feedback, your program crashes, the computer crashes, etc. But that makes it all so exciting too! :P)

As for SuperCollider: I have played around with it a bit and done some small things, but never a whole piece, and that was a while ago, so I've pretty much forgotten how to use it. In other words, I have no real practical experience using it for such things (I tend to use Max for live electronics and Csound for purely electronic synthesis). I guess if I had the time to really get into it again, I might actually prefer it to Max/MSP, as the visual approach of Max has its ambiguities and its sound quality isn't always optimal, whereas the sound I've heard SuperCollider produce has always been great. I think SuperCollider would be well suited to your task, if you don't mind the effort of getting into it. (But again, I don't have much personal experience with it, so no guarantees.)


Added risks to the performance - more excitement! :P

What I've planned is to have a separate person play the electronics part, and to write down some of the electronics instructions as an actual "part" in the score: a part without a five-line staff, with text and maybe some illustrations instead.

And on the acoustics thing... I may want to give some instruments a lot of reverb while others have a dry sound, so that the contrast makes the reverb sound "unnatural". But if the piece is performed in, say, an art gallery - which is where the string quartets here usually play - the natural reverb of the room might ruin this. Maybe it would help to aim the speakers very directly at the audience? That's why I asked whether one could somehow dampen the natural acoustics a little (it's probably not a solution to cover up the walls - and thus the paintings - with cloth :P)

And about the chords... I want the naturally played pitch to come through the loudspeakers as well, along with the transposed notes; I hope that will keep the balance right.


Just give everyone in the audience headphones and you won't have any problems with the room acoustics :P

But yeah, wet acoustics can be a problem. Placing the speakers closer to the audience may work, of course, but that can have the side effect of shrinking the "sweet spot" where the balance between the two speakers and the instrumentalists is optimal. If the audience is small enough and seated close together, it may still work, though.

What I meant about the balance between the performers and the speakers is more that the direct sound from the performers always has a certain minimum loudness - especially if, as you say, it's in an art gallery, which might mean a relatively small room with hard walls. And you might not always want the electronics so loud that one can no longer hear the players directly at all. In such a case you can get the problem that the direct sound masks the lower transposed notes coming through the speakers - and having the untransposed sound come through the speakers too won't help. But that's mainly a question of the room acoustics, and of your personal taste regarding the loudness of the live playing vs. the speakers.

Personally, if I -do- have live performers on "stage", I usually also want to hear a good amount of direct sound from them, even if they are -also- amplified through speakers, because it creates a much more direct listening experience than seeing performers on stage and hearing some music through speakers as two detached elements. But again, that's all a matter of personal approach and depends a bit on the music.

Writing an "electronic part" for a person to play along with the instrumentalists sounds like a very good idea. I did the same thing in the piece for viola/guitar/electronics I mentioned, where a sound engineering student and I controlled the electronics throughout the concert, and that worked very well.


In that piece I mentioned, we tried a pickup on the viola, but it didn't sound good, so we went back to mics, despite the leakage (which is not terribly strong if you use directional mics and place them close to the instruments - though of course that again depends on the room acoustics). Maybe there are better pickups than the one we tried, though.


Now I've got SuperCollider installed and up-and-going! :-)

But the problem is the latency... I've tried connecting a microphone to my computer and telling SuperCollider simply to return the input signal so I can hear it, but there's a latency of about 0.5 seconds! And that's without any processing whatsoever...


Hmm, weird. Did you try executing just a line like:

{ SoundIn.ar([0, 1]) }.play;

That works fine for me and returns everything my internal notebook mic records, without noticeable latency. Maybe you have to play around a bit with your localhost server settings. But again, I'm really inexperienced with SuperCollider, so you might get better help on a SuperCollider forum.
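One thing worth trying: latency that large often comes from the audio driver's buffer rather than from SuperCollider itself, and it can be lowered through ServerOptions before (re)booting the server. The values below are only examples, and not every audio driver honors them:

```
// Smaller hardware buffer means lower input/output latency
s.options.hardwareBufferSize = 128;
s.options.blockSize = 64;   // samples per control period (the default)
s.reboot;
```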

