
  • Hello.

  • You know, I've got headphones on, so we're going to be doing a code-it-yourself synthesizer video.

  • And today I thought it would be useful to have a look behind the scenes at what's actually happening. If you've seen any of the videos before (and frankly, to enjoy this video to its fullest, you probably should have seen the others), you'll know we're going to be looking at this mystery file called olcNoiseMaker.h, which is the file I've written that runs in the background to handle the sound hardware.

  • You may also notice a beat going on in the background, and that's because I've developed a sequencer, which now uses the synthesizer tools that we've already developed to produce this structured drum pattern.

  • It's not a very complicated addition to the synthesizer, so I'll be including it at the end of the video.

  • If you have been using my synthesizer, you'll know that we always include the olcNoiseMaker.h file at the top, and this includes the utilities to talk to the sound card and produce sounds.

  • When we start programming with the code-it-yourself synthesizer, the first thing we need to do is create an object of type olcNoiseMaker, and we talked about this in the first video.

  • We also said there were two magic numbers at the end.

  • I think it's time to understand what these two magic numbers are really for.
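
For reference, the top of one of our synthesizer programs looks roughly like this. This is a minimal sketch from memory; the exact constructor parameter order, the Enumerate helper and the SetUserFunction signature may differ slightly depending on which version of olcNoiseMaker.h you have (later versions pass a channel number to the user function as well).

    #include <cmath>
    #include "olcNoiseMaker.h"   // the mystery file this video is about

    // User-supplied function: returns an amplitude between -1.0 and +1.0 for time dTime
    double MakeNoise(double dTime)
    {
        return 0.5 * sin(440.0 * 2.0 * 3.14159 * dTime);   // a quiet 440 Hz tone
    }

    int main()
    {
        // Ask the header which sound output devices exist
        std::vector<std::wstring> devices = olcNoiseMaker<short>::Enumerate();

        // 44100 Hz, 1 channel, and the two "magic numbers" discussed below
        olcNoiseMaker<short> sound(devices[0], 44100, 1, 8, 256);

        // Register our function; the background thread calls it whenever it needs samples
        sound.SetUserFunction(MakeNoise);

        while (1) { /* keyboard handling, instruments, etc. */ }
        return 0;
    }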

  • Let's take a step back from the code and actually consider how our code-it-yourself sound synthesizer delivers sound to the speakers so we can hear it.

  • And I'm going to emphasize that this is an ideal real time scenario.

  • The synthesizer produces samples in digital form, here a 16-bit word, to the sound card, which is also known as a DAC, a digital-to-analog converter.

  • And it takes our digital binary word here and converts it into a single point of amplitude in the waveform, in analog.

  • Now, we saw all of this in video one; however, for all programmers, this contains two dreaded words: "real time" and "ideal".

  • On the Windows desktop operating system, there is no such thing as real time, so we always have to try and come up with hacks and workarounds to make it appear real time.

  • And when we're dealing with the real world, things can't be ideal.

  • Other things get in the way and stop us achieving what we want to achieve.

  • This is a more realistic look at the system.

  • As well as my synthesizer application, I'm also competing for resources with Windows, with Chrome and OBS, Visual Studio, and, knowing my luck, probably something mining Bitcoin for some international agency, which means all of these things are competing for resources on the CPU.

  • And if you're using just the regular Windows sound mapper, as we are with the One Lone Coder synthesizer, one of the things we can't escape from is that sound is real time, and for our synthesizer we're typically doing everything at 44,100 Hz.

  • This means we're sending this many samples to the sound card per second.

  • And this is a hard deadline.

  • If we don't match these timings, the sound will sound choppy and broken up, or it'll sound sped up like the Chipmunks.

  • So what approaches can we take?

  • Let's start off with a really naive approach.

  • Let's take a timer.

  • This timer is clocked to output at the frequency that we need, and the timer generates an interrupt, which interrupts Windows; Windows then goes away and collects all of the information it requires to generate a single sample to deliver to the sound driver.

  • Theoretically, there's nothing wrong with this approach.

  • We know the samples will be delivered in real time to the hardware.

  • Practically, though, it's disastrous: for Windows to gather all of the information it needs to produce the sample, it has to interrogate all of the processes which are using that particular sound interface, and we're instructing it to do this 44,100 times a second.

  • That means Windows needs to do 44,100 context switches, where it interrupts the currently running process, stores its state, restores the new process to where it was before, gathers the sound information, stores that process back, and on and on and on, and we're asking it to do this for all processes 44,100 times a second.

  • This is quite unreasonable, and the CPU time required to do these context switches is actually quite significant.

  • We can do a quick calculation to see how much time the CPU has to produce each sample.

  • In this case, it's approximately 0.02 milliseconds, so Windows has to manage all of these interruptions and data gathering within a 0.02-millisecond window.

  • Well, the most obvious thing to do is reduce the number of interrupts.

  • So what if we set our timer to something a little bit more manageable, say, 20 Hz?

  • Clearly, in this situation, we must now deliver more than one sample per interrupt in order to achieve our 44,100 Hz output sampling rate.

  • Doing a simple calculation, we can see we now need to produce 2,205 samples per interrupt, and this is approximately 50 milliseconds' worth of audio.
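
The arithmetic behind the figures above:

    1 sample / 44,100 Hz   ≈ 0.0227 ms of CPU time available per sample
    44,100 Hz / 20 Hz      = 2,205 samples per interrupt
    2,205 / 44,100 Hz      ≈ 50 ms of audio per packet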

  • Creating a packet of audio is just better all round.

  • As each process is switched in, it can go away and generate 50 milliseconds' worth of audio.

  • This will result in fewer RAM and cache misses and be more optimal regarding CPU resources; however, it has now introduced one important dynamic, and that's latency.

  • In this case, there will always be a 50 millisecond delay between the process, or the synthesizer in this case, outputting sound and us hearing it, and latency management is quite important to deal with.

  • In most, say, digital audio workstations, it's probably okay to have a bit of latency, but in a real-time situation, say where a player is playing the keyboard, latency becomes a big problem.

  • In fact, if it gets over 30 milliseconds, it's very difficult to play an instrument, where you press the key and you have to wait 30 milliseconds before you hear the sound.

  • Your brain just can't reconcile that, so we always try and aim for latency to be as low as possible.

  • It's worth thinking about embedded systems for this approach, though; digital instruments, say keyboards or guitar effects, probably do use this approach of having an interrupt at the sample frequency.

  • But that's okay in that situation.

  • There's nothing else to interrupt the process from doing what it's doing.

  • In the real world, we have to live with a bit of latency.

  • We could modify our drawing a little bit here.

  • Now we don't need a timer because the sound driver can directly tell us when it's done with the sound.

  • However, drivers don't usually like working with shared memory in this way, so it's no good just having one block of samples here, because the sound driver will be too busy sending that through the CPU into the DAC, and Windows will be wanting to fill it at the same time, so we need to have more than one block in this case.

  • What we would actually prefer is a queue of sample blocks.

  • And this is nice, because if we assume that the system is now in two parts again, like in our ideal system, this side is clocked fundamentally at the frequency that is required to output the sound.

  • Whereas this side can be very variable, and it's quite a common technique to use a queue or buffer like this to cross timing domains.

  • We'll have a little disclaimer that this layout that I'm showing on the screen is quite an abstraction.

  • But I believe it gets the point across quite elegantly, that when we're crossing time domains like this, we need to think about how we handle the data, and there are things that we have to be careful of.
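
As a rough illustration of that technique (not the actual olcNoiseMaker code, just a generic sketch), a queue guarded by a mutex and a condition variable lets a variable-rate producer feed a fixed-rate consumer:

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <vector>

    std::queue<std::vector<short>> qBlocks;   // queue of sample blocks
    std::mutex muxQueue;
    std::condition_variable cvBlockReady;

    // Producer side (variable timing): synthesizer pushes blocks whenever the CPU allows
    void PushBlock(std::vector<short> block)
    {
        std::unique_lock<std::mutex> lock(muxQueue);
        qBlocks.push(std::move(block));
        cvBlockReady.notify_one();
    }

    // Consumer side (fixed timing): driver-facing code drains blocks at the sample rate
    std::vector<short> PopBlock()
    {
        std::unique_lock<std::mutex> lock(muxQueue);
        cvBlockReady.wait(lock, [] { return !qBlocks.empty(); });  // sleep until data exists
        std::vector<short> block = std::move(qBlocks.front());
        qBlocks.pop();
        return block;
    }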

  • If we output too many blocks, we increase the latency, because each one of these blocks represents a fixed amount of time, and effectively we're looking into the future here.

  • So if I press a key on the keyboard, there'll be a delay of however many blocks are waiting in this queue.

  • On the other hand, if I don't have enough blocks in this queue, I'm starving the sound driver of sound to actually produce.

  • We can use the synthesizer to explore these effects.

  • Now I can reveal what these two magic numbers are for.

  • So the 256 is the number of samples in a block, and the 8 is how many blocks I'm going to make available to put in the queue.
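
Assuming every block in the queue is full, those two numbers put an upper bound on the latency:

    8 blocks × 256 samples = 2,048 samples queued
    2,048 / 44,100 Hz      ≈ 46 ms of audio already "in the future"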

  • If I go to our ideal real-time scenario, I can assume there is one block with one sample in it.

  • This will, of course, require 44,100 updates per second to produce real time sound.

  • Let's have a listen; headphones on.

  • Well, that was terrible.

  • And I apologize if you've just blown up your headphones.

  • What we heard there was just lots of clicks and pops, and that's because the sound card is significantly starved of data.

  • I'm going to start playing it again, and we can see here the latency is increasing.

  • Let's go to the other end of the spectrum.

  • I'm going to say each of my blocks contains 256 samples, but I'm going to have 1,000 of them.

  • Ah, the sound has just started.

  • So that delay was the latency, about five seconds' worth in this case.

  • If I press a key, the key has gone down, but nothing's happening yet.

  • So this arrangement makes it impossible to play any kind of live instrument.

  • And it's all about finding the correct balance between the latency and the performance.

  • I found that these two numbers worked well for most of our applications.

  • I'm calculating latency by looking at the wall time and the CPU time.

  • So for every sample that gets processed by the synthesizer, we use this dTime variable that you get in your MakeNoise function, so we know where we're up to in the sound.

  • That's according to the CPU, but we also have wall time, which is all around us; that's the natural order of the universe progressing through time.

  • In the ideal scenario, the two should be the same.

  • The CPU is always able to keep up with wall time; but when it can't, we end up with a discrepancy, and we call this latency, and we can demonstrate the CPU struggling as well.
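
In other words, the latency figure on screen comes from something along these lines (a sketch, not the exact member names used in the header):

    #include <chrono>

    // Sample ("CPU") time: advanced by a fixed step for every sample the synthesizer generates
    double dSampleTime = 0.0;
    const double dTimeStep = 1.0 / 44100.0;
    // ... inside the generation loop: dSampleTime += dTimeStep;

    // Wall time: the real world marching on regardless of how busy the CPU is
    static const auto tpStart = std::chrono::high_resolution_clock::now();
    double dWallTime = std::chrono::duration<double>(
        std::chrono::high_resolution_clock::now() - tpStart).count();

    // If sample generation falls behind the real world, the gap is the latency we display
    double dLatency = dWallTime - dSampleTime;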

  • So if I set this to debug mode and start it running, it's going to get a bit noisy and awkward again.

  • We can see that, under normal running, the latency is approximately zero.

  • It will fluctuate a little bit.

  • I'm just going to turn down the audio recording for a second.

  • Hopefully, that's not too loud. Now, I'm going to start saturating the CPU with things to do.

  • So I'm going to press lots of keys simultaneously.

  • Sorry about that.

  • You see, it was a dreadful mess; the CPU had more to do than the time it was allocated to do it in.

  • So we increased the latency.

  • And that was because we were also in debug mode.

  • If I switch this over to release mode and do exactly the same, you can see it handles it okay.

  • Let's take a brief look now at the olcNoiseMaker.h file code.

  • The olcNoiseMaker.h file uses the Windows waveOut API, which is actually quite a simple API to use.

  • It's got built-in functions to count the number of sound cards, for example, so my enumerate function does just that: it counts the number of sound devices and then goes through them one by one to get the name and pushes them into a vector.
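
The enumeration is built on two waveOut calls, roughly like this (a simplified sketch of what the Enumerate function does):

    #include <Windows.h>
    #include <string>
    #include <vector>
    #pragma comment(lib, "winmm.lib")   // the waveOut API lives in winmm

    std::vector<std::wstring> EnumerateOutputDevices()
    {
        std::vector<std::wstring> sDevices;
        UINT nDeviceCount = waveOutGetNumDevs();        // how many sound output devices?
        for (UINT n = 0; n < nDeviceCount; n++)
        {
            WAVEOUTCAPSW woc;
            if (waveOutGetDevCapsW(n, &woc, sizeof(WAVEOUTCAPSW)) == MMSYSERR_NOERROR)
                sDevices.push_back(woc.szPname);        // push the device name into a vector
        }
        return sDevices;
    }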

  • Once we've enumerated the devices, we can create the noisemaker background process.

  • And to do this, we fill out a WAVEFORMATEX structure with all of the relevant information about our synthesizer.

  • So it's got the sample rate and how many bits per sample.

  • If you remember, this class is a template class, so this is set to short, or int, or float, whatever data type you want.

  • How many channels we've got depends on whether we're using mono or stereo.

  • We also provide some information about how the memory that contains the sample data is going to be structured.

  • Once we're happy with how it's set up, we call the waveOutOpen function here, which takes the device ID number and the wave format structure.
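
Filling out the format and opening the device looks roughly like this. This is a sketch of what happens inside the class's creation code; waveOutProcWrap stands in for the static callback wrapper mentioned later, and nDeviceID is the chosen device index.

    WAVEFORMATEX waveFormat = {};
    waveFormat.wFormatTag = WAVE_FORMAT_PCM;              // plain uncompressed samples
    waveFormat.nSamplesPerSec = 44100;                    // our sample rate
    waveFormat.wBitsPerSample = sizeof(short) * 8;        // 16 bits, from the template type
    waveFormat.nChannels = 1;                             // mono here, 2 for stereo
    waveFormat.nBlockAlign = (waveFormat.wBitsPerSample / 8) * waveFormat.nChannels;
    waveFormat.nAvgBytesPerSec = waveFormat.nSamplesPerSec * waveFormat.nBlockAlign;
    waveFormat.cbSize = 0;

    HWAVEOUT hwDevice = nullptr;
    // Device ID, the format describing our samples, and a callback function (see below)
    if (waveOutOpen(&hwDevice, nDeviceID, &waveFormat,
                    (DWORD_PTR)waveOutProcWrap, (DWORD_PTR)this,
                    CALLBACK_FUNCTION) != MMSYSERR_NOERROR)
        return false;   // something went wrong opening the device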

  • Next we allocate some memory.

  • These are the blocks in our queue.

  • As far as I'm concerned, the queue is just a big, contiguous lump of allocated memory.

  • However, the sound card will appreciate it being delivered to it in chunks, as we've just seen in the slide.

  • So I just allocate the memory in one go here.

  • But then I allocate what are called wave headers, and these are the things that the waveOut API requires to know about these blocks of memory.

  • So each wave header contains the size of the block and a pointer to where the block is in our memory.

  • So in my big lump of memory, I'm just breaking it up here, using some simple pointer arithmetic.

  • Thank you, Windows: conveniently, for whatever reason, I have to cast it to type LPSTR.
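
The block memory and headers end up looking something like this (again a sketch with stand-in names, but the shape matches what the file does):

    // One contiguous lump of memory holding every block back to back
    unsigned int nBlockCount = 8;
    unsigned int nBlockSamples = 256;
    short* pBlockMemory = new short[nBlockCount * nBlockSamples]();

    // One WAVEHDR per block, so the waveOut API knows where each block lives and how big it is
    WAVEHDR* pWaveHeaders = new WAVEHDR[nBlockCount]();
    for (unsigned int n = 0; n < nBlockCount; n++)
    {
        pWaveHeaders[n].dwBufferLength = nBlockSamples * sizeof(short);
        // Simple pointer arithmetic carves the big lump into blocks; waveOut wants an LPSTR
        pWaveHeaders[n].lpData = (LPSTR)(pBlockMemory + n * nBlockSamples);
    }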

  • Anyway, the first thing that the olcNoiseMaker does when it's activated is create a thread, and it runs there in the background.

  • This is what makes it quite easy to work with in subsequent code, because we don't have to worry about what it's doing now.

  • It will automatically call our MakeNoise function as and when it requires data to fill the blocks up with.

  • So let's have a look at the main thread.

  • Fundamentally, it's a while loop, and in this while loop it waits for the sound driver, the back end, to say: right, please fill this block, you have a block free to fill.

  • But how does it know to do this?

  • Well, when you create the waveOut device, you create a function called the waveOutProc function.

  • You'll see this in all of the documentation.

  • It's always called the same thing, and this function is registered as a callback within the API.

  • So when the API says, "Well, I'm done with that block of data, please give me the next one", we can increase the counter for how many free blocks we've got left, and I use a condition variable here to notify my thread to say, "Well, you've got a block free now, please fill it with some data."
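
The callback itself is tiny; roughly this (the counter, mutex and condition variable names are illustrative, and in the real header a static wrapper forwards into the class):

    #include <Windows.h>
    #include <atomic>
    #include <condition_variable>
    #include <mutex>

    std::atomic<unsigned int> nBlockFree;       // how many blocks are free to be filled
    std::condition_variable cvBlockNotZero;     // used to wake the filling thread
    std::mutex muxBlockNotZero;

    // Called by the waveOut API every time it has finished with a block of data (WOM_DONE)
    void CALLBACK waveOutProc(HWAVEOUT hWaveOut, UINT uMsg, DWORD_PTR dwInstance,
                              DWORD_PTR dwParam1, DWORD_PTR dwParam2)
    {
        if (uMsg != WOM_DONE) return;           // ignore open/close notifications

        nBlockFree++;                           // one more block is free
        std::unique_lock<std::mutex> lock(muxBlockNotZero);
        cvBlockNotZero.notify_one();            // "you've got a block free now, please fill it"
    }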

  • So this unlocks and we carry on; the block is no longer free, because we're going to fill it, so my count of how many free blocks is decreased.

  • Then I need to prepare the block for processing, and all this really involves is setting the header to some initial state.

  • Then I need to fill the block with the relevant data; now, in my code, I have the MakeNoise function.

  • And so, for each sample within the block, in this loop, I call the MakeNoise function along with the current time, so everything can be synchronized.

  • And that's done here because, if you remember, in our code, when we create the sound machine, we then set a user function, which registers our MakeNoise function.

  • The MakeNoise function is expected to return a value between minus one and plus one, and we then scale that to the integer domain.

  • So even though the user experiences everything with floating-point numbers, the sound hardware actually expects an integer format for the sample.

  • So the noisemaker class attempts to deal with all of that behind the scenes so you can just work purely with mathematics.

  • You can work in the floating-point domain, or the real number domain, if you prefer.

  • In a stereo system, we need to do this for both channels, left and right.

  • Once the block is filled with data, we call waveOutPrepareHeader to tell the header that it is ready, that it is full of relevant data, and we write the block to the queue.

  • The API will handle all of that for us, and that's it.
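
Putting the pieces together, the main thread loop is along these lines. This is a condensed sketch reusing the names from the sketches above; bReady and fUserFunction stand in for the real member names.

    #include <climits>   // SHRT_MAX

    unsigned int nBlockCurrent = 0;
    double dGlobalTime = 0.0;
    const double dTimeStep = 1.0 / 44100.0;

    while (bReady)
    {
        // Wait until the callback tells us at least one block is free
        if (nBlockFree == 0)
        {
            std::unique_lock<std::mutex> lock(muxBlockNotZero);
            cvBlockNotZero.wait(lock);
        }
        nBlockFree--;                                        // we're about to fill this block

        // Reset the header to an initial state if the driver had previously prepared it
        if (pWaveHeaders[nBlockCurrent].dwFlags & WHDR_PREPARED)
            waveOutUnprepareHeader(hwDevice, &pWaveHeaders[nBlockCurrent], sizeof(WAVEHDR));

        // Fill the block: one call to the user's MakeNoise function per sample,
        // scaling the returned -1.0 .. +1.0 value into the 16-bit integer domain
        for (unsigned int n = 0; n < nBlockSamples; n++)
        {
            double dSample = fUserFunction(dGlobalTime);
            pBlockMemory[nBlockCurrent * nBlockSamples + n] = (short)(dSample * (double)SHRT_MAX);
            dGlobalTime += dTimeStep;
        }

        // Hand the filled block back to the waveOut queue
        waveOutPrepareHeader(hwDevice, &pWaveHeaders[nBlockCurrent], sizeof(WAVEHDR));
        waveOutWrite(hwDevice, &pWaveHeaders[nBlockCurrent], sizeof(WAVEHDR));
        nBlockCurrent = (nBlockCurrent + 1) % nBlockCount;   // move on to the next block in the ring
    }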

  • A similar but opposite process can be used for reading sound from a microphone, so we might explore that in a later video.

  • But the waveOut API is really quite simple, and my wrapper file here does make life a bit easier if you're more interested in the functions and formulae behind generating sound.

  • If you do study the source code, you'll see there are some interesting things around static pointers and callbacks.

  • And these are just little hacks and bodges around how Windows can register functions that are actually class members.

  • That said, I believe the code is simple enough for anybody to follow.

  • As the synthesizer has grown in sophistication, I've made a few changes to the very basics, but nothing that will stop your programs from working completely.

  • The first thing I've done is take the channel, which was used to signify which instrument we should use.

  • I've changed that now to a pointer to the instrument, so the note structure itself contains enough information to know which instrument it needs to play itself.

  • And so here we have the previous MakeNoise function, where we can see the channel was interrogated each time.

  • This has now changed to a much more elegant approach, I believe: the note now knows which instrument to use.

  • I think this is quite a nice approach now, because the MakeNoise function has effectively become static.

  • There isn't really any more user code that needs to go into this.

  • And as we're manipulating sounds, really all we're doing is manipulating the instruments.
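
So the shape of the note structure has gone from carrying a channel index to carrying an instrument pointer, something like this (simplified; the field names follow the style used in the series but may not match exactly):

    struct instrument_base;   // forward declaration: an instrument knows how to produce its own sound

    struct note
    {
        int id = 0;                           // which note (scale position, or percussion id)
        double on = 0.0;                      // time the note was activated
        double off = 0.0;                     // time the note was deactivated
        bool active = false;
        instrument_base* channel = nullptr;   // previously an int channel index, now the instrument itself
    };

    // The mixing code no longer interrogates a channel number; it just asks the note's instrument,
    // e.g. something like: double dSound = n.channel->sound(dTime, n, bNoteFinished);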

  • I thought it would be useful to have notes playing automatically in the background, as you actually heard in the introduction to this video.

  • So I created a sequencer.

  • The sequencer is so simple, I didn't think it warranted a video on its own.

  • And that's why I bundled all that waveOut stuff in at the start.

  • To explain how the sequencer works, it's probably best to have a look at how it's used first, and this is simply it.

  • So I create an object of type sequencer, and I specify the tempo, and by default it assumes that I'm working with a typical four-beat arrangement.

  • I'm hesitating to say the word "bar" here, but basically, if we specify a beat, we look at this string here.

  • We've got "X... X... X... X" and so on.

  • So the beats are the Xs in this case, and the sub-beats are the dots, and you get to play around with those later.

  • This can give you different sorts of time signatures and feels.

  • The sequencer will be given a set of instruments to play.

  • So in this case, I've added a kick drum, a snare drum and a high hat.

  • And I specify the pattern I want it to play, using just a string.

  • By default, this is set to four beats and four sub-beats.

  • So in total, there are 16 elements.

  • And for each sub-beat, there is a character that represents whether the instrument should be played or not.
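
In use, the setup looks something like this. This is a sketch of the usage described here; the class, member and instrument names (sequencer, AddInstrument, vecChannels, instKick and friends) are approximations of what appears on screen, not guaranteed to match the released code.

    // 90 beats per minute, 4 beats per bar, 4 sub-beats per beat => 16 characters per pattern
    sequencer seq(90.0, 4, 4);

    // Each channel is an instrument plus a 16-character beat string:
    // a capital X means "play the instrument on this sub-beat", a dot means "stay silent"
    seq.AddInstrument(&instKick);
    seq.AddInstrument(&instSnare);
    seq.AddInstrument(&instHiHat);

    seq.vecChannels[0].sBeat = "X...X...X...X...";   // kick on every beat
    seq.vecChannels[1].sBeat = "....X.......X...";   // snare on beats 2 and 4
    seq.vecChannels[2].sBeat = "X.X.X.X.X.X.X.X.";   // hi-hat on every other sub-beat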

  • If I just simplify it for the moment and take a listen, we can hear that the pattern reflects the arrangement of characters.

  • Let's say I saturate the array like this.

  • I think you got the gist of how this works.

  • So the sequencer can have multiple channels, or instruments in this case, and each channel will have its own beat string.

  • In our main loop, we need to update the sequencer with how much time has elapsed since it was last updated.

  • Now, this is important: that is wall time, not CPU time.

  • Wall time will provide a consistent-sounding beat; CPU time will vary depending on how busy the computer is, and you'll end up with a beat that isn't consistent.

  • So to get the elapsed time, I'm using some chrono library trickery.

  • I think the chrono library is one of the most cryptic libraries available in the standard library.

  • But you can see some of my other programming videos, which should be indicated on the card above, which use this quite extensively for computer games.
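
The chrono trickery boils down to something like this (a sketch of measuring elapsed wall time in the main loop; seq.Update and vecNotes are the assumed names used in the surrounding discussion):

    #include <chrono>

    auto tpClockOld = std::chrono::high_resolution_clock::now();

    while (true)   // main loop
    {
        auto tpClockNow = std::chrono::high_resolution_clock::now();
        std::chrono::duration<double> elapsed = tpClockNow - tpClockOld;   // wall time since last pass
        tpClockOld = tpClockNow;
        double dElapsedTime = elapsed.count();

        int nNewNotes = seq.Update(dElapsedTime);   // feed the sequencer wall time, not CPU time
        // ... add any newly triggered notes to vecNotes ...
    }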

  • When the sequencer is updated, it creates a vector internally of new notes to play, and it will return how many new notes there are to play.

  • Then the user simply takes that vector and adds it to the list of notes playing throughout the system.

  • You remember, vecNotes contains all of the notes; they could be keys, stuff from the sequencer, stuff from MIDI files, any of the other techniques that we've looked at in the past, but vecNotes is just a vector containing all of the active notes.

  • So for each update of the sequencer, we firstly want to clear the vector of notes, and we want to accumulate how much time has elapsed, because the time that has elapsed between updates may be fractions of a second, and we only want to do something when the accumulated time is greater than the beat time.

  • So this is how much time needs to pass per sub-beat.

  • It's important that we don't just set the accumulated time back to zero here, because we want to keep accumulating any residual time left over, or else things will get very messy very quickly.

  • Once we know we've gone over the beat time, we increase the current beat, and when the current beat goes over the total number of beats, we reset it back to zero.

  • So our sequencer becomes a loop.

  • When we know that the sequencer has activated a new beat, we want to check all of the channels to see if the character at that particular beat spot is a capital X; that indicates "play the instrument", in which case we create a new note and attach the channel's instrument to it.

  • We set the ID to 64, which in my percussion set just means play the note as normal, and we add that to the vector of notes that we're going to return.

  • And here we just return the size of that vector to indicate: yes, we've got some new notes.

  • Let's add them to the list of active notes playing in the synthesizer.
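
Internally, the update function is essentially this (a condensed sketch following the description above; member names such as dAccumulate, dBeatTime and vecChannels are illustrative):

    int sequencer::Update(double dElapsedTime)
    {
        vecNotes.clear();                        // clear the vector of new notes
        dAccumulate += dElapsedTime;             // accumulate wall time between calls

        while (dAccumulate >= dBeatTime)         // dBeatTime = (60 / tempo) / sub-beats per beat
        {
            dAccumulate -= dBeatTime;            // keep the residual, don't just reset to zero
            nCurrentBeat++;
            if (nCurrentBeat >= nTotalBeats)     // wrap around so the sequencer becomes a loop
                nCurrentBeat = 0;

            for (auto& ch : vecChannels)
            {
                if (ch.sBeat[nCurrentBeat] == 'X')   // capital X means "play the instrument"
                {
                    note n;
                    n.channel = ch.instrument;       // attach the channel's instrument to the note
                    n.active = true;
                    n.id = 64;                       // in the percussion set: play the note as normal
                    vecNotes.push_back(n);
                }
            }
        }
        return (int)vecNotes.size();             // how many new notes there are to play
    }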

  • For this sequencer, I created three new instruments: a kick drum, a snare drum and a hi-hat.

  • I've also added a lifetime variable to each instrument, and this forces the system to switch the note off once that lifetime has expired.

  • So previously, you pressed a key and the note would keep playing, and if you lifted the key off, it would then issue the note-off command.

  • Now the note is switched off by whichever comes first: either the note-off command is issued, or the max lifetime expires.

  • This was needed because the sequencer doesn't issue a note off, and you wouldn't want it to, because sometimes you might want the sounds to overlap slightly.

  • So my kick drum is mostly a low-frequency sine wave with a little bit of frequency modulation, just to give it that kind of sound, and a tiny fraction of noise.

  • The snare is a similar arrangement, although this time not quite as low a frequency and with a lot more noise, and the hi-hat is mostly noise with a slight square wave frequency thrown in there for good measure.
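
As a rough idea of those recipes (this is not the actual instrument class from the video, just a self-contained approximation with made-up coefficients), each drum is an envelope plus a mix of oscillators, with the lifetime forcing the note off:

    #include <cmath>
    #include <cstdlib>

    // A crude kick drum: a low-frequency sine with a falling pitch (a simple form of
    // frequency modulation) plus a tiny fraction of noise, all decaying quickly.
    double KickDrum(double dLife)   // dLife = seconds since the note was triggered
    {
        if (dLife > 1.5) return 0.0;                               // max lifetime: note forced off
        double dEnvelope = exp(-4.0 * dLife);                      // fast exponential decay
        double dFreq = 55.0 * exp(-8.0 * dLife) + 45.0;            // pitch sweeps downwards
        double dNoise = 2.0 * ((double)rand() / RAND_MAX) - 1.0;   // white noise in -1..+1
        return dEnvelope * (0.99 * sin(2.0 * 3.14159265 * dFreq * dLife) + 0.01 * dNoise);
    }

    // The snare would use a higher-frequency sine and much more noise; the hi-hat would be
    // mostly noise with a faint square wave mixed in, both with shorter lifetimes.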

  • So let's play with some of the characteristics of the sequencer.

  • So by default, I've got it set to 90 beats per minute.

  • Let's increase the tempo.

  • We can also change the time signature.

  • So I've already created a sequence here where, if we notice, the beat now comes every three sub-beats.

  • So I tell it that I've got four main beats, but only three sub-beats, and we'll just set that tempo back down.

  • We get a different musical characteristic. So this video has really been about the sequencer and the waveOut API being used in the olcNoiseMaker class file that I provided for all the synthesizer videos.

  • It's been a bit of an alternative video, I'm aware of that, but I did want to get these points covered at some point.

  • As usual, all of the code is available on GitHub.

  • Take it, hack it.

  • Do what you want with it.

  • If you've enjoyed this video, please give us a thumbs up.

  • It does help a lot.

  • Have a think about subscribing.

  • I'll see you next time. Take care.
