Hello. A short video this week showing a fast and simple approximation of the sine function.
If you're watching this video, I'm going to assume you've already searched Google for approximations of the sine function and are well aware of the several techniques that people use for different platforms.
So I'm going to leave all of the caveats and the use cases of this approach to the end of the video, and just get stuck in with the guts of it now.
Now, if you've watched any of my previous videos on sound synthesis, you'll know that we use the sine function a lot.
And it's usually used in this sort of format, where we take the volume and multiply it by the sine of two pi, times the frequency of the sound that we want to play, times time.
This is great when you have a processing architecture that has the facilities to do these functions.
However, my plan is to migrate some of the synthesis code to an embedded systems platform that doesn't have in-silicon sine computation.
It also has a reasonably low clock frequency, so I need something that's fast and simple, and because it's audio synthesis, an approximation to sine is sufficient.
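To make that formula concrete, here's a minimal Python sketch of one sample of a sine tone (the names `volume`, `frequency` and `t` are mine, not from the video):

```python
import math

def sine_sample(volume, frequency, t):
    # amplitude times sin(2 * pi * frequency * time), as used in the synth
    return volume * math.sin(2.0 * math.pi * frequency * t)

# e.g. a 440 Hz tone at half volume; at t = 0 the sample is silent
print(sine_sample(0.5, 440.0, 0.0))  # 0.0
```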
So I'm going to start just by defining the scope of the sine function which I'm trying to approximate.
The assumptions I'm going to make are that the frequency is always greater than or equal to zero, and the time is always greater than or equal to zero.
So the final form of our sine function is going to be simply y equals sine of x, where x must be greater than or equal to zero, and of course y, our output, is going to be from minus one to plus one.
For the uninitiated, this is a sine wave.
And here we've got it crossing the axis at pi, two pi, three pi.
In fact, it'll do that forever and ever, on and on, and this is its periodicity, and our approximation will also need to handle this periodicity, and that can be quite complex.
Now, assuming we're using floating point, one way to handle the periodicity would be to use something along the lines of a floating point modulus function, which would take our x input, and we could modulo that by two pi, which of course means every value over two pi wraps around to zero, so we can basically use the remainder as the index into our approximation function later on.
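In code, that modulus approach might be sketched like this (a minimal Python illustration of my own, using `math.fmod`, which mirrors the C library function):

```python
import math

TWO_PI = 2.0 * math.pi

def wrap_fmod(x):
    # reduce x into [0, 2*pi) so it can index the approximation
    return math.fmod(x, TWO_PI)

print(wrap_fmod(TWO_PI + 1.0))  # wraps back to 1.0
```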
However, this function is complex, requiring a floating point divide, and it may not be supported on all platforms, and I think there's a better way to do this. What I'm going to do instead is normalize the input to between zero and one,
of course, by dividing it by two pi. In this case, I want to keep it to no divides, because they're usually quite complicated to do, so I'm going to multiply it by a constant, one over two pi.
So this means now two pi is equal to one, zero is equal to zero, and pi is equal to one half. And instead of requiring us to do some arbitrary modulus by some awkward number like this,
we can simply hack it using a floating point to integer conversion, so we can take our normalized value and subtract from it the integer value instead. This will always give us a remainder between zero and one, and just in case your language doesn't support this,
it's the equivalent of using, say, the floor function.
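Put together, the divide-free normalization might look like this in Python (my own sketch; only a multiply and a float-to-int truncation, which agrees with floor because x is assumed non-negative):

```python
import math

INV_TWO_PI = 1.0 / (2.0 * math.pi)  # precomputed constant, no runtime divide

def normalize(x):
    # scale so that one full cycle maps to [0, 1)
    t = x * INV_TWO_PI
    # subtract the integer part; for non-negative t this is floor(t)
    return t - int(t)
```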
If I graph what this looks like, we'll see it remains a sine function, but it looks like a high frequency one here.
After zooming in on it, we can see the original sine wave in the background,
and the new y equals sine of two pi x here, where the input x is scaled by one over two pi.
We can see it crosses now at one, and a half, and so forth along.
So now we've got this easy way of calculating the remainder without using the floating point modulus, I'm going to show two approximations.
I'm going to start off with a quadratic approximation to sine.
To calculate this, I know that my roots for at least one of the parabolas are zero and zero point five.
So let's just start with a very basic quadratic calculation with a root at zero and a root at 0.5.
First thing I can see is it's upside down.
So I put a minus sign in there just to invert it the right way up, and so we can see it crosses now at the same places, but it needs some scaling.
I'm just going to take a guess at the scaling, and we'll say 10, but it's not quite enough.
So let's try 20. Looks a bit too much. Let's try somewhere in the middle: 15, 16.
Well, that one looks good, so let's just quickly zoom in, and we see it's absolutely perfect; it gives us a value up to a maximum of one. That's handled the positive side of the sine wave.
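Incidentally, that guessed factor of 16 is exact: the parabola -k * x * (x - 0.5) peaks at x = 0.25, so we need k * 0.25 * 0.25 = 1, which gives k = 16. A quick sketch to confirm:

```python
def positive_half(x):
    # -16 * x * (x - 0.5): roots at 0 and 0.5, peak at x = 0.25
    return -16.0 * x * (x - 0.5)

print(positive_half(0.25))  # 1.0, the peak
```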
Let's now look at a quadratic equation for the negative side. So we know that we've got a scaling factor already, and we know that we don't need it to be minus 16.
This time, plain 16 will do, because the parabola
is the right way up.
But our roots have changed, so now it's 0.5 and one.
So using two parabolas, we can roughly approximate the sine, as you can see.
It is an approximation;
it doesn't perfectly follow the purple sine wave in the background, but you know what?
It's probably good enough for audio synthesis, and we can decide which equation to use based on the input value of x.
If it's greater than 0.5,
we use our positive parabola,
and if it's less than 0.5, we use our negative parabola.
Let's just have a look at the numbers here.
So if we just copy over our first half, which was minus 16, x take zero, and x take 0.5, the roots.
And if we just tidy this up a little bit, we've got minus 16, x take zero,
x take 0.5, and the constant disappears,
the, er,
the take zero, that goes away.
And we fill that out now to be minus 16 x squared plus eight x.
And programmers get very excited when they see sixteens and eights and fours and twos, because it usually means there's some scope for some bit-field manipulation to further optimize the mathematics involved.
Now, we're doing this all in floating point.
But if I did want to convert this to fixed point, of course, multiplying by 16 is the equivalent of shifting
by four bits, and multiplying by eight is the equivalent of shifting by three bits.
So there's potential for optimization here.
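As a purely illustrative sketch of that idea (a 16.16 fixed-point format of my own choosing, not the video's code), the multiplies by 16 and 8 become left shifts:

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in 16.16 fixed point

def fx_mul(a, b):
    # fixed-point multiply: full integer product, then rescale
    return (a * b) >> FRAC_BITS

def first_half_fixed(x):
    # -16*x^2 + 8*x, with the 16 and 8 turned into shifts:
    # multiply by 8 is << 3, multiply by 16 is << 4
    return (x << 3) - (fx_mul(x, x) << 4)

print(first_half_fixed(ONE >> 2) == ONE)  # peak of 1.0 at x = 0.25
```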
Let's also consider the other half of our parabola, which was 16, x take 0.5, and x take one.
If we expand this out, again we've now got x squared, take 0.5 x, take one x, plus a half.
Now, I'm not going to group up the 0.5 x and the one x, for exactly the same reason, because if we draw this out now, by multiplying everything by 16, we get our 16 x squared,
take eight x, take 16 x, plus eight. And again, we remain with coefficients which are multiples of two.
Here I've drawn out the pseudo-code for the approximate sine function.
I'm going to assume that we're using 32-bit floats for all of this.
And here we can see that I, first of all, scale the input to normalize it between zero and one.
I then do this nice way of doing a modulus, just by taking the integer proportion away from the input.
It's then just a question of making a decision about which parabola to use, whether we want the first half or the second half.
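The pseudo-code just described might translate into Python along these lines (my reconstruction, not the author's exact code; it relies on the earlier assumption that x is non-negative):

```python
import math

INV_TWO_PI = 1.0 / (2.0 * math.pi)

def approx_sin(x):
    # normalize: one full cycle maps to [0, 1)
    t = x * INV_TWO_PI
    t = t - int(t)  # cheap modulus via float-to-int truncation (x >= 0)
    if t < 0.5:
        # first half: parabola with roots at 0 and 0.5, peak of +1
        return -16.0 * t * (t - 0.5)
    else:
        # second half: parabola with roots at 0.5 and 1, trough of -1
        return 16.0 * (t - 0.5) * (t - 1.0)
```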
So here I've overridden the sine function in my synthesizer with the approximate sine function
you've just seen me type out,
and I'm going to play it, and we'll have a listen.
So this is just a pure sine wave.
So this is what a sine wave sounds like in this system.
Now, if I remove the sine and replace it with our approximation... it's not a bad sine wave.
It does actually sound like a little bit of a square wave.
And if we look at the graphs, we can see why: our approximation,
the black line, is always larger than the actual sine wave itself.
It bulges out, so it's approaching a square wave very slightly.
That said, I think it's quite a good approximation.
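That bulge can be quantified: sampling the difference over one normalized cycle shows a worst-case error of roughly 0.056. A sketch of that measurement, reusing the two parabolas:

```python
import math

def approx_sin_norm(t):
    # quadratic approximation on the normalized domain t in [0, 1)
    if t < 0.5:
        return -16.0 * t * (t - 0.5)
    return 16.0 * (t - 0.5) * (t - 1.0)

# worst-case deviation from the true sine over one cycle
max_err = max(abs(approx_sin_norm(i / 1000.0) - math.sin(2.0 * math.pi * i / 1000.0))
              for i in range(1000))
print(round(max_err, 3))  # roughly 0.056
```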
Now, I'm sure there'll be some programmers out there that really don't like having a branch condition in the middle of their continuous function approximation.
One way to avoid this would be to use an approximation that has three roots instead.
So, a cubic. And I suppose it would look something like this: we would have y equals x minus zero, because we know that's one of our roots;
we know the next root is x minus 0.5, which would be equivalent to pi in our normalized space; and x minus one.
Let's take this over to the graphing platform and have a look.
So our first root is at zero.
Our second root is at 0.5, and our third root is at one.
We can see the roots are fine, but again we've got a scaling issue.
So let's just have a quick guess.
I'm sure there's maths purists out there that might be able to work this out.
15 is not quite enough.
20 is not quite enough.
21 is a bit too much.
So it's somewhere in between. In fact, I know it turns out that 20.785 actually gives us a precise one for the output.
Using exactly the same periodicity calculation that we used for the quadratic,
we can use the same for the cubic,
and we can see that this time the peak happens before the sine wave's peak, and the trough happens after.
And so, by definition, this approximation is not so much a sine wave,
but a very slight sawtooth wave, and so we must listen to it.
So here, in exactly the same function, you can see I've just typed in the code
exactly as it was.
I've not tried to reduce it to look for any interesting coefficients that we might be able to optimize further.
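My reconstruction of that cubic version in Python (again, not the author's exact code; 20.785 is approximately 1 divided by the maximum of t(t - 0.5)(t - 1) on the first half-cycle, which is why it lands the peak on 1):

```python
import math

INV_TWO_PI = 1.0 / (2.0 * math.pi)

def approx_sin_cubic(x):
    # normalize to [0, 1), then a single branch-free cubic with
    # roots at 0, 0.5 and 1, scaled so the peak reaches about 1
    t = x * INV_TWO_PI
    t = t - int(t)
    return 20.785 * t * (t - 0.5) * (t - 1.0)
```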
Let's have a listen.
That sounds absolutely lovely.
There's just something really nice about that tone.
It sounds a little bit garbled, but that will be down to the YouTube audio compression, so I recommend that you get the code from GitHub.
Follow the link below and try it out for yourself.
At the start of this video I mentioned that there are always caveats when approximating functions.
For really large values of x, the floating point precision will start to work against you, which may produce some very interesting audio artifacts, perhaps a gradual degradation of the audio quality over time.
It might sound really cool, but it's always worth thinking about the use cases for your approximation.
On a modern desktop computer, you're probably better off just using sine directly; there'll be dedicated silicon on the CPU for processing sine and cos and tan and all the others. Popular methods in times gone by used lookup tables that did the approximation for you.
However, on modern architectures, these are probably more likely just to pollute your cache, and won't be faster than just calling
sine directly. On embedded systems,
you've always got the trade-off, because if there is a floating point processor on there and it doesn't have sine, then you're going to need an approximation such as what I've shown today.
A lot of embedded systems do have floating point support for multiplies and addition, but don't have the more advanced trigonometric functions. But it's a trade-off:
you have to question the virtue of using floating point on such a platform at all, when a fixed point approximation may do instead.
I hope this video has shown you just how easy it is to approximate functions.
If you liked it, give it a thumbs up, subscribe, and check out the OneLoneCoder.com blog.
See you next time.
Fast and Simple Approximation of Sine Function

林宜悉 published on March 28, 2020