  • So last year, my colleagues and I did something pretty exciting.

  • Using only the browser, a little bit of JavaScript, and something called the Web Audio API, we were able to decipher brainwaves.

  • Now, that sounds a little bit sensational, but it's mostly true.

  • So today I'd like to tell you the story of our journey through the lows and highs of Web Audio, in our quest to decipher brainwaves in the browser.

  • So this is a story of discovery, about finding out what the Web can offer.

  • It's a story of exploration, about pushing the boundaries of Web technologies to benefit people, to make a difference. And last, well, it's a story that has a really cool demo at the end.

  • So, uh, stay tuned.

  • But before I can tell you that story, I have to tell you this one.

  • So my name's Braden Moore.

  • I'm a full-stack developer at an Australian medical technology startup called Seer. Seer provides Australia's largest epilepsy management and diagnostic service.

  • In fact, the tagline of our corporate website is "Epilepsy diagnosis made easy." Now, I didn't know anything about epilepsy before I started working at Seer, and I guess that's true for a lot of people.

  • So let me give you some of the facts.

  • So epilepsy is a chronic condition that affects over six million people worldwide.

  • Now, the symptoms of epilepsy can vary from person to person, but can range from loss of awareness and loss of bodily control to full-body seizures.

  • Now, these events can vary in frequency, sometimes happening multiple times a day, other times once a month or, even more rarely, once a year.

  • But the results can be devastating regardless, and there's no known cure for all types of epilepsy.

  • But with a diagnosis, some of the symptoms can be treated. So the way that epilepsy is diagnosed is by reviewing a patient's brainwaves in what's called an EEG, or electroencephalogram.

  • So each of these traces represents the electrical signals from certain parts of the patient's brain, captured using electrodes on the head.

  • So our clinical team uses EEGs just like these to help provide a diagnosis. The service that we offer at Seer looks like this: a patient comes to our clinic to get connected with these electrodes to monitor their brainwaves over the next seven days.

  • We send them home, and they can live their life in the comfort of their own home while being monitored.

  • Then, after that time, they come back to our clinic to get disconnected.

  • And then we spend a few weeks looking through that data and looking for something that might give us a diagnosis.

  • So we've got seven days' worth of data.

  • What is it that the scientists are looking for?

  • Well, look, sometimes they're looking for the obvious.

  • They're looking for something like these minute-long seizures that are pretty easy to capture on tape.

  • But if these are very rare events, happening once a month or even once a year, it's not guaranteed that we'll capture one during the seven days they're being monitored. So more often they're looking for something a bit smaller.

  • They're looking for something like a needle in the haystack.

  • Well, in epilepsy there's a characteristic signal, which is called a spike wave.

  • So these are little spikes and little waves, um, that are around half a second long each. And so, while a patient might only have a seizure once a month, they might have dozens of these little spike waves every day.

  • And so it's these needles in a haystack that our scientists are looking for. In fact, you're looking at a whole run of little spike waves right there.

  • Okay, that's great.

  • But what does a Web developer have to do with it?

  • I mentioned before that our clinical team gets this data and then they start reviewing it.

  • But it's not the full story.

  • Like every second company these days, we have our own cloud platform, and it's a platform for the secure management, storage, and review of medical data.

  • So one of the powerful things about this platform is that it allows our clinical team to upload this EEG brainwave data and then review it anywhere, at any time, all through a browser.

  • One of the major technical challenges of building this platform is the ability to view all this data on the platform.

  • So to tackle this problem, we've built something that's very creatively called the Viewer, to view the data, hence the name. And it looks something like this.

  • In fact, it looks like this. This is a screenshot I took just a few minutes ago.

  • And so it's this data view that our clinical team uses to review these seven days' worth of data, to label events, and to eventually deliver a diagnosis.

  • One of the big challenges that we've faced with this platform is getting neurophysiologists, as they're called, to move from traditional desktop-based software onto a cloud platform.

  • Now, uh, neurophysiologists often flick through EEG data very quickly. They're going four pages a second.

  • So flick, flick, flick. They're looking very quickly through this, because if you want to get through lots of data, you don't want to be spending a lot of time on each page.

  • The way they're able to accomplish that is because they're trained to look for patterns.

  • They're looking for certain things, like those spike waves.

  • Remember, that's a half-second event on a ten-second page, and you're looking at four pages per second.

  • So you're looking for a needle in a haystack.

  • What happens when a needle doesn't look like a needle?

  • What happens when the EEG data you're trained to look for doesn't look quite right?

  • Well, in cases like those, that could lead to misdiagnosis and awful patient outcomes.

  • So this is our problem.

  • How can we take EEG review from the desktop to the browser?

  • So there's one particular aspect of displaying this EEG that's very important, but also a little bit tricky, and that's called signal filtering.

  • So what's this problem of signal filtering that we're going to explore today?

  • Let's illustrate the problem. On the left is EEG data that has signal filters applied.

  • That's great.

  • On the right is that same data with the filters turned off.

  • So clearly there's a very big difference between these two.

  • So why is this?

  • You know, the electrodes were still there. It's still, you know, recording the brain here.

  • Well, like most electrical signals, when you're recording something, you're not just recording that brain activity.

  • You're also recording a bunch of extra noise and artifacts. So some of this noise can come from, say, high-frequency muscle contractions; it can come from, like, DC drift; or it can come from, like, mains electricity noise.

  • So all of these things aren't brainwaves, but they still show up in this EEG.

  • So what is a filter, after all?

  • Well, no, it's not that one.

  • Let me explain what a filter is by talking a bit about music.

  • Well, I like music.

  • So this is an audio spectrograph.

  • It's a visual representation of all the frequencies of a piece of audio, like your favorite song, at a given point in time.

  • So it has, like, low frequencies, like the bass and drums; mid frequencies, like a spoken voice; and maybe higher frequencies, like wind chimes or pan flutes or other super common instruments that you hear.

  • So, just like audio, your brainwave is also composed of a range of different frequencies, all happening at the same time.

  • So think about music.

  • Imagine you're producing a song, but you've got this kind of bass noise you don't really want in there, or there's too much pan flute.

  • Well, one way to get rid of that is by filtering out the lower frequencies or filtering out the higher frequencies.

  • And so for EEG, there are kind of three main filters we'd like to apply.

  • You can have a high-pass filter, which lets everything higher than the filter frequency pass through.

  • There's a notch filter, which notches or cuts out a thin range of frequencies, and a low-pass filter, which is the opposite of your high-pass filter: everything lower than the filter frequency passes through.

  • You can also combine them.

  • You can have multiple of these filters running at once, and that's exactly what we'd like to do.
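
To make that concrete, here's a minimal sketch of what such a chain could look like in the Web Audio API, using BiquadFilterNode (a simpler cousin of the IIR filters discussed later in the talk). The cutoff frequencies are illustrative only, not Seer's actual settings.

```javascript
// A minimal sketch: three filters combined by connecting them in series.
// The cutoff frequencies below are illustrative only.
const ctx = new AudioContext();

const highpass = ctx.createBiquadFilter();
highpass.type = 'highpass';
highpass.frequency.value = 0.5;  // pass everything above 0.5 Hz (cuts slow DC drift)

const notch = ctx.createBiquadFilter();
notch.type = 'notch';
notch.frequency.value = 50;      // cut a thin band around 50 Hz (mains noise)

const lowpass = ctx.createBiquadFilter();
lowpass.type = 'lowpass';
lowpass.frequency.value = 70;    // pass everything below 70 Hz (cuts muscle artifact)

// Multiple filters running at once: just chain the nodes together.
highpass.connect(notch);
notch.connect(lowpass);
lowpass.connect(ctx.destination);
```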

  • So okay, how can we go about implementing filters in JavaScript, in the browser, in the cloud?

  • Well, look, first: could we not have a filter at all? You know, the best code is the code you don't have to write. Uh, but remember a couple of slides back, with filtering, without filtering?

  • That's kind of night and day right there.

  • So that's a bit of a no go.

  • I mean, I would've been happy with it, but... So, could we write our own filter implementation?

  • Well, it's kind of a maybe, because of stuff like this, and this, and all this stuff. So I trained as a particle physicist, not an electrical engineer, so I don't know too much about filtering, but I've been told that there are a lot of subtleties involved in implementing a filter.

  • And I'm sure you'd agree that JavaScript probably isn't the best language to handle, you know, complex mathematical processes.

  • So we'll put that on the back burner.

  • Maybe.

  • Could we use a package?

  • You know, we're JavaScript developers, after all. That's what we're trained to do.

  • So how about this one?

  • This is promising: DSP Kit.

  • It's a digital signal processing library.

  • It seems like it can process these signals. But it says it's not intended to be used; it's just good to play with.

  • How about this one: DSP.js? Now, actually, this one is, I think, the most starred signal processing library in JavaScript.

  • So you'd think that's probably the way to go, but it hasn't had a meaningful update in nine years.

  • Uh, I mean, I wasn't even coding nine years ago, so I think a fair bit has happened on the Web since then. So maybe, maybe that's not the way either.

  • So that's three options exhausted.

  • What are we left with?

  • Well, if you read the title of this talk, you probably know what's coming next.

  • So I won't leave you in suspense.

  • So, introducing (that was the very quiet drum roll you could just hear then): the Web Audio API.

  • So this is a Web API that's, uh, designed for processing audio.

  • It's standardized.

  • It's well maintained, has decent browser support, and has an active group of contributors and users.

  • So if you're looking for something that works now, Web Audio is probably the way to go.

  • Well, when I was first learning to use Web Audio, the Mozilla Developer Network, or MDN, docs were invaluable, and I actually just learned at this conference that many of these docs were written by LiveJS's very own Ruth.

  • So a big shout-out to Ruth for helping me learn this.

  • But let me take you through some of the basic concepts here.

  • So how do you use the Web Audio API? How do I actually use it to process this data?

  • Well, first you create an audio context.

  • This is kind of the canvas where everything happens. Inside this audio context, you create a bunch of nodes, one of which is your input node.

  • So this might be an oscillator, or this might be, like, you're feeding a microphone in there.

  • Next, you can make a bunch of effects nodes, so you can do things like reverb or compression or panning, and many more.

  • All things that, you know, go great with audio.

  • Maybe don't put reverb on the EEG. Just... maybe.

  • And then you've got to specify the destination.

  • So where is this getting output to?

  • Conventionally, that's going to be your speakers.

  • All right, so after you do that, you string all those things together, you know, one to the next, then you process the entire thing, and there you go: processed audio.
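
As a rough sketch of that input, effects, destination pattern (the gain node here is just a stand-in for whatever effect you'd actually use):

```javascript
// Sketch of the basic Web Audio graph: input -> effects -> destination.
const ctx = new AudioContext();

// Input node: an oscillator here, though it could be a microphone stream.
const source = ctx.createOscillator();

// An effects node: a simple gain (volume) node standing in for
// reverb, compression, panning, and friends.
const effect = ctx.createGain();
effect.gain.value = 0.5;

// String them all together, then let it run.
source.connect(effect);
effect.connect(ctx.destination);  // conventionally, your speakers
source.start();
```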

  • Okay, that's good.

  • But I might have jumped the gun a little bit here.

  • Uh, what does an audio API have to do with processing EEG data?

  • Like, why did I jump from, you know, packages in JavaScript to this?

  • Well, it all comes down to one of the effects nodes. So the effects nodes can do really cool things to your data, such as reverb, compression, panning.

  • They can also do filtering.

  • So we had that audio spectrograph where those filters were applied.

  • Turns out, filtering is a thing that musicians and music engineers actually do, so it makes sense that they'd have really well-implemented filters in this API.

  • So if we could somehow leverage these filters and use them for EEG filtering, then we'd be set.

  • Okay, so in particular, we want one, uh, one method that's really useful for us, called the createIIRFilter method.

  • Now, a quick electrical engineering lesson: IIR stands for infinite impulse response. That's all you really need to know about that. But it's the filter you can generate with this.

  • This createIIRFilter method creates an IIR filter, and it's super configurable, so you can make it high-pass, low-pass, notch, whatever you want in there.
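
As a sketch of what calling it looks like: createIIRFilter takes arrays of feedforward and feedback coefficients, and working out the right coefficients for a given cutoff is exactly the engineering subtlety mentioned earlier. The coefficients below implement a simple first-order low-pass with an arbitrary smoothing factor, purely for illustration.

```javascript
// Sketch: createIIRFilter takes feedforward and feedback coefficient
// arrays. These implement a first-order low-pass,
//   y[n] = a * x[n] + (1 - a) * y[n - 1],
// with an arbitrary smoothing factor a, purely for illustration.
const ctx = new AudioContext();
const a = 0.1;
const iirFilter = ctx.createIIRFilter(
  [a],            // feedforward coefficients (applied to the input)
  [1, -(1 - a)]   // feedback coefficients (applied to past outputs)
);
```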

  • So here's the plan.

  • First, we need to make some way of getting Web Audio to read our EEG data.

  • Once it's there, we can process the data: we can apply the filters. And finally, we output this back as our filtered brainwaves, displayed on the screen.

  • Cool, that's all well and good.

  • But how exactly do you read EEG with an audio API?

  • I'm glad you asked that question, because we're about to find out.

  • So here are the six steps that you need to decipher your own brainwaves in the browser with the Web Audio API. Let's go.

  • All right.

  • Entering VS Code mode now.

  • So first we need to set up an audio context. Remember, this is the canvas where everything happens.

  • Next, you need to, you know, create an audio context. That's as simple as calling its constructor.

  • You pass in the number of channels, the number of samples per channel, and your sample rate, and there you go.
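
Here's a hedged sketch of that constructor call; the channel count, length, and sample rate are made up for illustration. One caveat: browsers only guarantee audio-range sample rates, roughly 8,000 to 96,000 Hz, so a genuinely low EEG sample rate may need to be treated at a nominal supported rate.

```javascript
// Sketch: constructing the context. All three values are illustrative.
const numberOfChannels = 21;              // e.g. one channel per electrode
const sampleRate = 8000;                  // a nominal, browser-supported rate
const lengthInSamples = 60 * sampleRate;  // one minute of samples per channel

const offlineCtx = new OfflineAudioContext(
  numberOfChannels,
  lengthInSamples,
  sampleRate
);
```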

  • Now, this is an OfflineAudioContext, and that brings us to the two audio contexts: in the Web Audio API, you can use the OfflineAudioContext or something called the AudioContext. Makes sense.

  • So the thing about the AudioContext is that this is the one you're usually going to use if you're processing audio.

  • If you're making, like, a multitrack mixer or you want to apply some reverb on the Web, that's great.

  • The problem with it, though, is that it plays back the audio more or less in real time.

  • So it's good if you're actually, you know, applying your effects in real time on music.

  • But if you've got seven days' worth of EEG data to look through, you really don't want to be waiting seven days for that to process.

  • So that's maybe not the way to go.

  • There's another type of audio context you can use, called the OfflineAudioContext. It's quite similar to the regular audio context, except, and this is taken directly from MDN:

  • "In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer."

  • Now that sounds brilliant.

  • So basically: the AudioContext is slow, the OfflineAudioContext is super fast, and fast is what we're trying to do. So that's why we're going with it.
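
Continuing the offlineCtx sketch from above: the "as fast as it can" part is exposed as startRendering(), which resolves with the rendered AudioBuffer.

```javascript
// Sketch, continuing from the offlineCtx above: render the whole graph
// as fast as possible and get the result back as an AudioBuffer.
offlineCtx.startRendering().then((renderedBuffer) => {
  // renderedBuffer now holds the processed samples, ready to display.
  console.log(`Rendered ${renderedBuffer.length} samples per channel`);
});
```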

  • So we've made this audio context.

  • Next, we have to set up our source node. Pretty straightforward: create a buffer source. And note that these methods all come off the audio context; you can't actually create nodes in Web Audio without them being attached to a context.
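
In code, that step is a one-liner off the context from the earlier sketch:

```javascript
// Sketch: source nodes are created from (and belong to) the context.
const source = offlineCtx.createBufferSource();
```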

  • Next up, we need to create and fill this audio buffer.

  • So first up, we're going to create a buffer that's got the number of channels, samples, and sample rate we want. Then we're going to loop over our channels, and for each channel, I'm going to get the channel buffer from the buffer I just created.

  • Remember, this is like an empty audio buffer.

  • So I get the channel buffer for that channel, I iterate over all the samples, and then right here, with that channel buffer, I'm going to fill it with our EEG data, which is in the very creatively named dataArray.
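
A sketch of those two steps, reusing the illustrative numberOfChannels, lengthInSamples, and sampleRate from the constructor sketch, and assuming a hypothetical dataArray holding one array of floats per channel:

```javascript
// Sketch: create an empty AudioBuffer, then copy the EEG samples in.
// `dataArray` is a hypothetical array of Float32Arrays, one per channel.
const audioBuffer = offlineCtx.createBuffer(
  numberOfChannels,
  lengthInSamples,
  sampleRate
);

for (let channel = 0; channel < numberOfChannels; channel++) {
  // Get the (initially empty) buffer for this channel...
  const channelBuffer = audioBuffer.getChannelData(channel);
  // ...and fill it, sample by sample, with our EEG data.
  for (let i = 0; i < lengthInSamples; i++) {
    channelBuffer[i] = dataArray[channel][i];
  }
}
```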

  • So our EEG data is just an array of arrays of floats in this dataArray. And something really exciting is happening here.

  • So right there on the left, that's an audio buffer.

  • That's audio.

  • That's stuff you can process through the Web Audio API. On the right is your EEG data, and in the middle is an equals sign.

  • So what you're doing is you're moving your EEG data into an audio buffer.

  • Which means right now you can apply filters and reverb and compression, all this cool stuff you really shouldn't do on EEG, but you can. Just like that.

  • That's really cool.

  • Next, you just assign, you know, the audio buffer to your source, and there you go: you've got your input. You've fed your EEG data into the Web Audio API.
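
And that final assignment, continuing the sketch:

```javascript
// Sketch: hand the filled buffer to the source node. From here the
// source can be connected to the IIR filters and rendered offline.
source.buffer = audioBuffer;
```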