
Subtitles

  • I'm gonna hit this button here, and then.

  • I think, um, I don't know what this is.

  • Not something I've done before, so let's see in a second, we should see this.

  • This appear here?

  • Yeah, that looks promising.

  • Okay.

  • All right.

  • All right.

  • Let me just ask now if, uh, people in the live chat can hear me, and then I'm gonna close this.

  • Uh, yeah.

  • I think this is gonna go on forever.

  • Uh, all right.

  • All right.

  • Hi.

  • Hi.

  • Hi.

  • Hi.

  • Hi.

  • Hi.

  • Hi.

  • Hi.

  • Hi.

  • I just want someone to say the sound works.

  • Okay, Okay.

  • So I'm gonna close this.

  • So from this moment on, my focus is now on you, the people who are physically here in this room. I just wanted to turn that on.

  • And I don't know if anybody has that live stream, like, in the background on their machine.

  • And if there's some really important question, or tons of people saying, like, it stopped,

  • you could let me know, but I'm gonna not pay attention, because this is a rare and unique opportunity to be physically with people in real life.

  • So, uh, so hello.

  • Welcome.

  • I guess it's a few minutes early, but I might as well start.

  • My name is Dan Shiffman.

  • Thank you so much to GROW Paris and Le Tank for having me here.

  • This is a real honor, to get to be here in Paris, do a workshop, and meet people who I have corresponded with online or maybe met before.

  • Yes.

  • So I'm thrilled to be here. Most of you are probably familiar with me somehow, 'cause that's probably how you found out about this.

  • But just as a matter of quick introduction: what I do actually full time is teach at a program in New York called ITP.

  • I'll just pull up the website real quick.

  • Um, this is a two-year graduate program at Tisch School of the Arts.

  • Actually, that's why I'm here in France: NYU has a study abroad program, NYU Paris, and I'm visiting that for most of this week.

  • And so, in addition to this, I have this YouTube channel where I make coding tutorials, which is where most people find me.

  • Oh, they can't see my face.

  • Oh, because if I go like this All right.

  • So sorry to the live viewers.

  • This is just the webcam.

  • I'm gonna be moving around a lot, but I'll give you a moment of it looking at you; otherwise it's gonna be hit or miss. And so the other thing that I work on, which I think is important to mention and is related to what I'm gonna show today, is that I help, I don't know what the right term is,

  • administer, run, manage, participate in something called the Processing Foundation.

  • It's a United States charity, a nonprofit that maintains a bunch of open source software toolkits for the arts: Processing, p5.js.

  • You don't have a p5 sticker? I didn't bring any Processing stickers.

  • I didn't have any, unfortunately.

  • But if you want to see one, Simon here has one.

  • So one of the things the Processing Foundation

  • does is a lot of community and education initiatives. One project in particular that I want to mention, well, I just did:

  • it's called p5.

  • Something that I started working on a little over a year ago is a project called ml5, and this is an open source library, and the "five" is an homage.

  • Did I use a French word?

  • I did.

  • I do have, like, American high school level French, which is very bad.

  • We'll see how much courage I have to attempt any French. But so the "five" is an homage to p5.

  • But this project is funded actually through a grant from Google, which comes from the team that created something called TensorFlow.js.

  • So I'm just giving you that as background for what I want to do in the workshop today.

  • What I want to attempt to do in the workshop today is build a simple image classification engine in JavaScript, in the browser, using the p5 web editor.

  • So let me think here for a second.

  • All of you are welcome to be here, no matter what your background and skill level.

  • I imagine that for some of you this might be kind of beginner, and some of you might have just started programming.

  • There might be some stuff that's very new and confusing, so I'll try to manage that as best I can.

  • But you can certainly ask questions, and maybe also, as a small community here, we can help each other, for those of you who have a lot of experience with this.

  • So if you're new to programming or to JavaScript, then I would suggest using the p5 web editor; I'll mention a particular link to go to in a second.

  • But if you're experienced with web development and JavaScript, you could be following along and coding through whatever editor, system,

  • or build environment you so desire, but you'll need to have the things that you'll need.

  • If you're in the p5 web editor, it's all packaged there for you.

  • If you're not, you're going to need to have the p5 libraries, and you can get those through the CDN link, which is here.

  • So you just go to the p5 web editor and open up the index.html.

  • So at some point, if anybody is, like, really stuck and can't find p5, let me know; I'm definitely gonna, like, pause and take breaks.

  • So my goal is to spend about an hour, until one o'clock,

  • kind of talking about

  • how image classification works in ml5, and a particular algorithm called KNN, which stands for k-nearest neighbor, and to try to build up something.

  • And then hopefully, if that goes well, we can just hang out here for the second

  • hour, and people can try to make something with it.

  • Maybe towards the end, we can share a couple things. So this will start very presentational,

  • but I'm happy for this to

  • move towards being more just hanging out and working on stuff.

  • All right.

  • Um okay, so let me just let me just show an important link here.

  • So right here: github.com, slash shiffman, slash Tank Workshop.

  • This is where I have, at least so far, put all the links to the things that are relevant, and I will go over this in more detail in a second.

  • But this is probably the most important link for you, "Web editor template." Because the thing that I want to mention is that here I am in the web editor; over on the top left up there,

  • if I click this little arrow, you'll see in here it says ml5.min.js. So I have built a version of the ml5 library just for this workshop, with some new features that don't exist in the published version of ml5.

  • So if you were to be using the published version of ml5, the stuff that I'm gonna try to code today won't work.

  • So that's it.

  • You'll start here, and then the first thing you can do is just do "save" or "duplicate."

  • I think so.

  • If you want to use the p5 web editor and you haven't before, I would take the time right now just to go to the editor

  • and sign up for an account, and then you can go to, uh, this link and click on "Web editor template."

  • But I'm gonna be showing you a whole bunch of demos and things before I actually start writing code.

  • But I just wanted to mention it in case people are trying to figure out how to get set up.

  • Yeah, Yes.

  • So I usually do.

  • This is a good question that

  • Simon has asked.

  • So here in settings, there are a few different views.

  • And usually I use this high contrast mode.

  • I didn't use it because, um, it just wasn't set to it, actually, on my laptop.

  • But also, I think I'm gonna have a bunch of text that, with this gray background, might end up looking a little weird.

  • So let's put it back here to this.

  • Um Okay.

  • Um all right.

  • Anybody have any, like, just sort of general questions or logistical questions in terms of, like, how to be set up before we start.

  • Oh, and then there's the WiFi over here. I won't broadcast the secret WiFi password, but people here can see it.

  • Pass it along.

  • What could possibly go wrong with me saying what the WiFi password is?

  • It seems like I shouldn't. Okay.

  • All right.

  • So I think what I'm gonna start with actually is this website called Teachable Machine.

  • How many of you have seen this?

  • A couple people.

  • So Teachable Machine is a project that was made by Google Creative Lab, in collaboration with a bunch of different research teams at Google.

  • This came out quite a while ago, before the official JavaScript TensorFlow was launched.

  • And to me, this is a really amazing demonstration of what has only recently become possible to do in a web browser.

  • So I'm gonna skip the tutorial and just run this to show it to you.

  • So the idea of the Teachable Machine, so just, you know, spoiler alert:

  • my goal is to actually basically build a version of this in the p5 web editor.

  • It will have none of the bells and whistles that you're seeing here in terms of design

  • and full interaction, but it will have

  • the functionality: it will allow you to create an interactive system in the browser that might be different than the way you thought about it previously.

  • So just to show you the way this works: I think I practiced this with my example earlier.

  • So let's see.

  • So what I'm gonna do here is I'm gonna basically teach this machine to learn three different, what are often referred to as classes, or you could even call them labels:

  • um, three kinds of images.

  • So one image is just gonna be me.

  • That will be green.

  • I'm holding this down.

  • So it's just learning, like, if I just move around here, this is giving it lots of examples of me in the web browser.

  • Okay.

  • Sorry.

  • Not in the web browser, me, Just me in the camera.

  • Now I'm gonna step away, and I'm gonna train it just with nothing.

  • I think you just, yes.

  • Yes.

  • So if you watch the tutorial, it will walk you through this.

  • And so the last thing I'm gonna train it with is with me holding this book that I found here called Blonde Story.

  • So, uh, okay, so now that I've done this, I've now trained this system, and you can see, while I'm sitting here,

  • its confidence, and actually only when I'm looking straight ahead, its confidence is 99% that that's me, the green category.

  • The green label. If I walk away, we should see that it becomes confident in purple.

  • If I sit back down, we're back to green.

  • And now if I hold the book, it's orange, and as Simon is pointing out,

  • this is also showing you, like, it's gonna display a different GIF based on which category you're in.

  • Also, sound: it's playing sounds based on the category.

  • So So this is the idea.

  • This is the idea of of how how this sort of like process of image classifications works, that you're teaching the computer, saying, Here's a lot of examples of images of cats.

  • Here's a lot of examples and image of the dogs.

  • The computer, in theory, is going to learn from that, and then you could present it with new images, and it will make a appropriate guest.

  • So but there's something that's that I'm not telling you, which is super important.

  • So this is working only because it is building on top of something that's already been trained on millions and millions of images.

  • So this is a process known as transfer learning.

  • Do you have any questions? I'm gonna switch over to some slides that are gonna kind of go underneath the hood of this a little bit more. But before we move away from here, okay, let me just close this window and save this. One thing that I want to mention is Yining Shi,

  • who is a colleague at NYU ITP.

  • She taught a course this past semester called Machine Learning for the Web.

  • And it's basically a seven-week course that goes through some basics of machine learning, different kinds of algorithms and models.

  • And so basically what I'm attempting to do in this workshop is what's here in week two.

  • But there are lots of other features of the ml5 library and the TensorFlow.js library

  • that might be things you wanna look into later on.

  • And I forgot to include a link to this.

  • Let me make sure my volume is off. Oops.

  • Okay, so I just also wanted to mention, uh So this particular playlist also goes through, and I realized people who are maybe watching this live.

  • I'm stepping away because I could swivel this.

  • So this particular playlist is actually, in some ways, somewhat of a prerequisite.

  • So I kind of glossed over some of the stuff that is in some of these videos.

  • But certainly, if you want to, after today, go back and dive more deeply into something,

  • this is something that you could look at. And the thing that's different, the thing

  • I want to point out: what I'm doing in this workshop today is almost exactly what's in videos four and five.

  • But I'm going to use a different algorithm than the one in video number five, called KNN.

  • And this is a recent feature of the ml5 library that Yining Shi, who taught that class, added to the library itself.

  • So we go through, maybe I'll come back and try to distinguish those a bit more.

  • But I just wanted to give you this context.

  • Okay, so I'm gonna go to Yining's presentation and use some of that to kind of talk through how this actually works.

  • So, where's "present"?

  • Okay, um, and thank you to Yining for allowing me to use her amazing presentation.

  • So the ml5 library comes with something called a pre-trained model.

  • A pre trained model is a machine learning model that somebody has already spent the time to train to recognize images.

  • And the particular model that ml5 comes with is called MobileNet.

  • So MobileNet.

  • You can see, this is what MobileNet does.

  • It basically looks at an image, and then it's like, oh, it's a robin, with a confidence of 99%.

  • So, weirdly, MobileNet, so MobileNet

  • is kind of amazing in that it's this machine learning model that you can just immediately have access to.

  • And we can actually, I'm gonna go to ml5js.org, I'm gonna go to examples,

  • and I'm gonna quickly click on, um, video classification.

  • So this is an example now of MobileNet trying to classify the images in this video, and you can see, let's see if it can get this: sunscreen.

  • "Sunblocker," it almost said.

  • I think it said "bookstore" at one point. Let's see: bottle of water, maybe; glass; water bottle; water bottle.

  • And we're looking at the sunglasses, bowtie.

  • Yeah, so this is the thing:

  • Um, machine learning models are only as good as the data set with which they were trained.

  • All right, so maybe a truck.

  • You think that the race it will work?

  • Well, well, yeah, I think it's a weekly.

  • So, uh,

  • let me go back to this presentation here.

  • So, um, and I find this so interesting: MobileNet

  • was trained on a particular image data set called ImageNet.

  • And basically, this is an image data set that was made just for researchers to experiment with, not necessarily for practical, real-world use.

  • And it has an incredible amount of plant elements in it.

  • It has, like, a whole bunch of different sport things.

  • You can see it.

  • It has 61,003 pictures of dogs.

  • So MobileNet,

  • which was trained on this database,

  • and I'm gonna go, um, here real quick to show you something,

  • uh, it only knows about these 1000 things.

  • So you can see, like, this is what's so crazy.

  • A lot of machine learning,

  • it feels like magic.

  • Oh, it's like this really smart, amazing system that can recognize anything, and in fact it's really good.

  • This particular model can recognize a variety of species of lizards and reptiles, apparently, but it can't recognize a person.

  • So so that's one thing that's important.

  • While it's useful and fun to play with, and it's a nice demonstration of the idea of image classification, it's not necessarily going to be really useful for you to use with your own project, unless what you happen to want is a project about rare bird species; then maybe the MobileNet model might by accident be very useful to you.

  • But what it actually does One of the things.

  • Oh, yes.

  • So this is the list of the classes.

  • So there is something.

  • However, you probably don't realistically have access to a database of 50 million images.

  • So let's say what you want to do is recognize, like the example that Yining has made, and then I'm gonna do something with this: a rock paper scissors game, which I learned today is "pierre-feuille-ciseaux."

  • Oh yeah, I would have said a different word, by the way, but apparently so.

  • Yeah, so rock, paper, scissors.

  • So if you want to train a machine learning model to recognize you making the rock gesture or the paper gesture or the scissors gesture, MobileNet's not gonna do that, because it's not able to.

  • It doesn't know anything about images of people's hand gestures.

  • But there is a process called transfer learning, which is defined here as: use the knowledge gained while solving one problem,

  • applying it

  • to a different but related problem.

  • And what the math behind MobileNet actually does, when it looks at a cat image, is create a graph of probabilities, and you can see these are all 1000 things it knows about.

  • And it ended up here with this very high probability that it is, like, this one,

  • the class ID 285, Egyptian cat.

  • So this is the end result of the machine learning.

  • And this, actually, by the way, comes from this really nice Observable notebook, that I'll click on in a moment, by Nikhil Thorat,

  • who's one of the creators on the development team of TensorFlow.js.

  • So this is what the machine learning model actually outputs: an array of probabilities.

  • But, again, in this workshop

  • I'm gonna kind of hand-wave the aspects of how neural networks work.

  • But one way to describe a neural network is:

  • if the image is the input, and the very last thing that comes out is a list of probabilities, which I'm showing you right here, there are actually a whole bunch of steps in between.

  • And one of those steps, which is the last step right before the probabilities, is known as the "logits" activation.

  • So this is basically the un-normalized predictions vector.

  • It sounds like a very fancy word, but basically this is almost like a numeric rating for how likely the particular image is any one of these categories, and you can see these high spikes.

  • But what

  • this gets turned into in the MobileNet model is something much simpler, 'cause all you want is, kind of, which classification is the answer?

  • Which one is the most likely?

  • So softmax

  • is a mathematical function that basically squashes everything into probabilities and emphasizes the highest ones.
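
To make that concrete, here is a minimal sketch of the softmax function described above. The logit scores are made up for illustration; they are not real MobileNet output.

```javascript
// A minimal softmax: turns a vector of raw "logit" scores into
// probabilities that sum to 1.
function softmax(logits) {
  const maxLogit = Math.max(...logits); // subtract the max for numerical stability
  const exps = logits.map((x) => Math.exp(x - maxLogit));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const probs = softmax([2.0, 1.0, 0.1]); // made-up scores for three classes
console.log(probs); // the highest logit gets the highest probability
```

The same idea applies to MobileNet's 1000 logits: the big spikes become the big probabilities, and everything else is squashed toward zero.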

  • But this array is actually like the numeric essence of that image.

  • This is what's known as a feature vector.

  • In other words, the image, which is maybe 250,

  • I don't know the dimensions,

  • say it's 256 by 256 pixels,

  • has, that's how many numbers?

  • Um, I should know this math, but I'm in front of a live audience, so that's about 65,000 pixels.

  • And if they're RGB, suddenly we have, like, 196,000 numbers.

  • So, 196,000 numbers.

  • Okay, what did I do wrong?

  • By the way, this is, like, actually what happens:

  • each value is usually one of 256.

  • You're right, there.

  • There's infinitely more possibilities.

  • That's, that's right.

  • So there's 256,

  • Simon's right,

  • there's 256 possible grayscale values, but yeah, RGB, well, RGB, right, but okay.

  • There's 256 times 256 spots, pixels, and each,

  • each spot has 256 to the third power

  • possible RGB values.

  • Yes, but the point,

  • whatever the actual math is, which I'm getting slightly wrong, but Simon is hearing it correctly:

  • this is a lot less information.

  • This is like 1000 numbers.
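
Checking the live arithmetic from above: the raw pixel numbers in a 256-by-256 RGB image versus the 1000-number feature vector.

```javascript
// Size comparison: raw pixels vs. feature vector.
const width = 256;
const height = 256;
const channels = 3; // red, green, blue
const pixels = width * height;        // 65,536 pixels (the "about 65,000" above)
const rawNumbers = pixels * channels; // 196,608 numbers (the "like 196,000" above)
const featureLength = 1000;           // the feature vector's length

console.log(pixels, rawNumbers);
console.log(rawNumbers / featureLength); // roughly 197x less information
```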

  • And so what

  • the neural network actually learns to do, even though we use it for this end result of saying, oh, it's a cat,

  • is boil that giant data,

  • all these pixels, down to its essence.

  • That's yes.

  • So this particular array is known as the features.

  • So what we can then do, um, there are some more examples here, so let me look at some of these. Um, okay.

  • So let me, let's look at some of these, Um okay, So what we can then do is we can make use of the fact that mobile net is very good at taking any image and making it into an array of numbers.

  • And then what can we do with that?

  • So this is the reason why I was also using this term, "vector," all right, over here.

  • So, uh, usually when I think of a vector I think of, like, an arrow.

  • And this is a vector in two-dimensional space:

  • yes, it has an x

  • and y component.

  • But as Simon's saying, you've also got vectors in three-dimensional space,

  • with x, y, z. The truth of the matter is, mathematically speaking,

  • even though our brains aren't really able to visualize past three dimensions in an easy way, this list of numbers is basically a vector in 1000-dimensional space.

  • So another way, yes, exactly:

  • a plain list of numbers.

  • So if I sketch on this whiteboard over here, and I guess I will make some small effort to turn this way:

  • You know, this point here, is it more similar to this point or this point?

  • You might intuitively say this point, because it's a closer distance, so we can see that. And we could also say, and this is often, I think, the example I give in

  • some of the videos, is if we think of RGB color.

  • If we filled this room with red, green, and blue colors, with, like, this axis as the amount of red, this axis as the amount of green, and then this axis

  • as the amount of blue,

  • Similar colors would be near each other.

  • So looking at how data appears near to other data in space is a measurement of similarity.

  • So if I were to basically look, these two are probably pretty near each other in 1000-dimensional space, right?
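
The "nearness" being described is just Euclidean distance, and the same formula works in 2 dimensions, in the 3 dimensions of the RGB-room example, or in all 1000 dimensions of a feature vector. A minimal sketch:

```javascript
// Euclidean distance between two points in n-dimensional space.
function distance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d; // squared difference per dimension
  }
  return Math.sqrt(sum);
}

// Colors as points in 3D RGB space: similar colors are near each other.
const red = [255, 0, 0];
const darkRed = [200, 0, 0];
const blue = [0, 0, 255];
console.log(distance(red, darkRed)); // 55
console.log(distance(red, blue));    // much larger
```

Swap the 3-number colors for two 1000-number feature vectors and nothing in the function changes; that is what makes "similarity as distance" work at any dimension.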

  • And this is what we can do.

  • So the process of transfer learning is basically: use MobileNet to get the 1000 numbers from an image, and then the algorithm that I want to show you is this k-nearest,

  • k-nearest neighbor, which basically is a way of saying, okay, which category

  • is something a member of, based on its proximity to other entities in that same space?

  • So let's go all the way back to that Teachable Machine just for a second. Now, um, if I go all the way back to here, what I'm basically saying is: here's a whole lot of examples of me in 1000-dimensional space.

  • Here's a whole bunch of examples of nobody in 1000 dimensional space.

  • And now here's a new image: is it closer to images that it already knows about with me in the picture, or with me not in the picture?

  • And this new image is closer to all of those.

  • Now, if I,

  • if I wanted to, like, try to compare every single pixel color, I might be able to sort of get something like this to work,

  • but it would run super slow, and it wouldn't be very accurate.

  • But the fact that MobileNet already knows so much about what the meaningful parts of images are, and turns them into this 1000-dimensional array, means we can basically take that,

  • forget about the probabilities,

  • get rid of that last layer with the ImageNet labels, and just use the features of an image to try to match any new image with previous images.

  • Does that make any sense?

  • Anybody have questions?

  • Yeah.

  • Yeah, right.

  • So sorry.

  • This technique.

  • Yes, So sorry.

  • So I kind of, I'm jumping around here quite a bit, certainly, but this technique:

  • the MobileNet model is just a digital file with a lot of, like, numbers in it.

  • And that's a thing that you could use in Python or in the browser. And the concept of transfer

  • learning can be applied in any programming language, with a lot of different machine learning libraries, or from scratch, if you have a few years to write every bit of the algorithm yourself.

  • But what I'm going to show you, what's the reason why this is exciting, at least to me?

  • It's the fact that this is a way to get an image classification system working in real time in the browser, which, prior to the JavaScript TensorFlow version and a lot of these models like MobileNet being made JavaScript compatible, would be something you would need more powerful computers or lots more training time to be able to do.

  • Okay, so let's see what?

  • Um, so let me show you actually a really nice example of how this can be applied.

  • So this is one of the TensorFlow.js examples.

  • Um and I'm just gonna click on it here, so let's see.

  • This is, uh, yes.

  • Allow using camera.

  • Okay.

  • So I'm gonna make this a little bit bigger.

  • This is basically a project that someone made, apologies that I don't know the name, one of the official TensorFlow.js examples, where what I can do now is I can say, okay, um, let's say: Pac-Man has four controls.

  • So what I could do is basically create a physical controller from myself

  • to control the game of Pac-Man.

  • If I train a set of images for each of right, left, up, and down.

  • So, uh, I'm gonna just say that me just looking here regular is, is this, so I'm gonna click here.

  • This is, this is for moving up.

  • Moving to the right

  • is, uh, holding this book.

  • Exit the left is gonna be impossible, Remember?

  • And I'll just leave for down.

  • So now, in theory, I'm gonna hit this. Now, something that's really important:

  • I don't know,

  • you might have noticed this, but when I was showing Teachable Machine, there was no button that said "train."

  • I just added the images and then it started guessing.

  • But here in this system, there's a button that says "train."

  • So there are actually two different ways of approaching this.

  • Maybe more than two,

  • but in terms of the examples in the ml5 library, and what you'll find in TensorFlow.js,

  • there are two distinct ways of approaching

  • this problem, and this is using a slightly different one.

  • This is using the method that's actually in the video tutorials that I made.

  • When I show this other method, I'll pause for a minute or two to try to distinguish

  • those more clearly.

  • But for now, I'm just gonna mention those are two different things.

  • So I'm gonna train.

  • So it's training.

  • "Loss," by the way, is a keyword that you'll see in machine learning quite a bit.

  • It's also called cost, and the fact that it says zero is probably not actually true.

  • It's probably 0.000, a bunch of zeros, before it gets to some actual numbers.

  • But it's the amount of error.

  • So it's been training.

  • And how does it know the amount of error?

  • Well, it's trained itself with all these images, knowing which ones are right, left, up, and down, and then it's almost like: even though I know the answers, I can also try to guess the answers.

  • So it looks at those images again, trying to guess what the answers are, and sees if it guessed them correctly.

  • If it guessed every single one

  • 100% correctly, its error would be zero.

  • Um, okay, so now I'm gonna click play.

  • We're gonna see if I can play this.

  • Where?

  • What?

  • I want to go to the left first.

  • Maybe it hasn't started yet.

  • Left.

  • Okay, up, Up.

  • Okay.

  • What if I want to go right, then down?

  • You got the idea?

  • I'm not sure this is a left.

  • So the point of this, and another project that I will mention, which is in this presentation towards the end, is, uh, this is PoseNet,

  • that's different, I'm gonna talk about that later. This is by an ITP researcher named Alejandro, called Pong ML.

  • I won't play this video right now, but this is a similar idea, where he trained a teachable machine on certain gestures to move

  • Pong paddles up and down.

  • So hopefully your brains are filling with ideas beyond just, oh, I could train this to be like a weird gestural controller for a simple 2D game; but this is a nice way of demonstrating the creative possibilities of what you could do with this kind of transfer learning technique.

  • So there's the other thing

  • I think is interesting to mention here: you'll notice that, like, I just trained this right now.

  • So in theory, a couple things: one is I could save all of these training images and the resulting model and then load it again.

  • Like, if I refresh this page, all that work is gone.

  • But in what I'm gonna show you, once we get to the examples,

  • you can actually save it and load it again later.

  • But even though you can do that, which is more typical of machine learning systems, what's exciting about this is you could imagine building, like, interactive museum-exhibit kinds of kiosks, where people are just continually training it all

  • day or all night or whatever.

  • However. Okay, so let me see, what did I miss here?

  • So let's see what else is in, um, Yining's presentation that I wanted to show. Yes, I'm gonna talk about PoseNet, but I want to be conscientious about the time here, so let me look here and see if there's anything important that I missed.

  • So this is more of the code stuff that I'll come back to.

  • These are some of those examples, okay?

  • And actually, let me just briefly show you, um, this is Nikhil's

  • Observable. Observable is a, uh, system for creating kind of like interactive JavaScript notebooks.

  • One way of describing it: it's built by a bunch of people, but I think started by Mike

  • Bostock, who is the creator of a JavaScript library called D3. And so in Nikhil's notebook,

  • this is how you would basically create a teachable machine demo with TensorFlow.js, and I'm hopefully gonna show you an easier way of doing it with ml5.

  • But the reason why I want to bring this up here is it's also a nice demonstration of how k-nearest neighbor works, in two dimensions here, and you can see it's trying to guess: is my mouse part of the blue,

  • I guess that's kind of blue-purple, blue group, or red group?

  • And basically the reason why Theobald Graham is called K nearest neighbors classifies it according to a voting system.

  • Oops, I just stepped on this. Everything's fine; it classifies according to a voting system.

  • So in other words, for this new point, it's looking at its K nearest neighbors, K being three. Two of its nearest neighbors are red; only one of its nearest neighbors is blue. Therefore, it is more likely to be of the red category: more likely a cat than a dog, so to speak. But you can see what kind of algorithm this is.

  • First I could add this, which shows you where that decision boundary actually is: any point in space that's on this side is going to be classified as part of this red category. And then you can also see how this works if we change K.

  • So you can see here, it's got 1, 2, 3, 4, 5, 6; it's polling the six points that are closest to it and seeing how many of them are red.

  • So this is a classic machine learning algorithm, K-nearest neighbors, that can be computed very, very quickly, which makes it powerful.
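The voting idea described above can be sketched in a few lines of plain JavaScript. This is a minimal illustration of the algorithm itself, not the ml5 or Observable implementation; the point coordinates and labels are made up:

```javascript
// Minimal K-nearest-neighbors vote in 2D: classify a point by the
// majority label among its k closest labeled examples.
function knnClassify(examples, point, k) {
  // Sort the labeled examples by squared Euclidean distance to the point.
  const byDistance = [...examples].sort((a, b) => {
    const dA = (a.x - point.x) ** 2 + (a.y - point.y) ** 2;
    const dB = (b.x - point.x) ** 2 + (b.y - point.y) ** 2;
    return dA - dB;
  });
  // Tally the labels of the k nearest neighbors.
  const votes = {};
  for (const neighbor of byDistance.slice(0, k)) {
    votes[neighbor.label] = (votes[neighbor.label] || 0) + 1;
  }
  // Return the label with the most votes.
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}

// Two red points and one blue point near the query: with k = 3, red wins.
const points = [
  { x: 1, y: 1, label: 'red' },
  { x: 2, y: 1, label: 'red' },
  { x: 1, y: 2, label: 'blue' },
  { x: 9, y: 9, label: 'blue' },
];
console.log(knnClassify(points, { x: 1.5, y: 1.2 }, 3)); // 'red'
```

Changing `k` changes how many neighbors get a vote, which is exactly what the slider in the demo is doing.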

  • And actually, look at this, notice you could change this to three classes. So now we can see how it's categorizing based on three different classes.

  • Okay, so a couple things.

  • So let me quickly close a bunch of these windows that I don't necessarily need open anymore, because we're going to start writing the code for it.

  • You can follow along, but before I do that Oh, here we go.

  • Um, let me... whoa, boy, there's already lots of links here.

  • Thank you, Alka, who is watching the live stream; I gave write permissions on this repo, and they added it. Done, fantastic.

  • Okay, um, so what I want to show you are these two examples, real quick.

  • So if you go to this one that says KNN image example, this is basically a fully working, um, KNN image classification example in the p5 web editor.

  • I'm gonna go back to a blank sketch and put the pieces together step by step, because I think that will be most helpful for following along.

  • But if at any point you just wanna go and use this, this is the sort of working version of it. Certainly it's not as sophisticated as Teachable Machine, but it has a few more bells and whistles.

  • So I'm just gonna mention that that's there.

  • But the other thing that's interesting about the KNN algorithm is, even though I've been talking about how we get those logits from MobileNet, that vector of 1000 numbers,

  • there's lots of other kinds of data that come as an array of numbers.

  • So the ml5 KNN classifier object can take any arbitrary input.

  • So, for example, another example that I want to show you, which I don't think we'll have time to code in full.

  • But I will show it to you.

  • is this PoseNet example.

  • So what this example is doing is using a machine learning model called PoseNet, which basically makes a guess as to where your body is. This is really designed to be used with more of a full-body view; let me see if I can make that happen.

  • Well, that's close enough, but you can see what PoseNet does: it looks at any 2D image with a person in it. If I turn to the side, it tries to make its best guess, but it's most accurate with a well-lit full body in view. So what it's doing is basically making a big list of x and y values.

  • Here's the x,y value for the left hand, the right elbow, the right shoulder.

  • It kind of has this skeleton that it knows about. So, what this is doing, let me go back to it, it actually will work even as is:

  • I can basically make two categories here. Let me say that these are a bunch of examples of me with my head to the right, and here's a bunch of examples of me with my head to the left. And now you should see 100% B, 100% A, and... goodbye, screen.

  • What just happened there? Why did that just decide to completely die? But I'll do it again.

  • Left. I think I did it the other way before. So you can see, even though the images are there, the KNN classifier is not trying to categorize the image input at all.

  • It's trying to categorize the results of the PoseNet model, which are x,y positions of the body.

  • And so you could imagine using this with an application for choreography or, um, gesture, in a more robust way than the image classification example.
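The same nearest-neighbor idea works here because the "features" are just a flat array of x,y body positions instead of image logits. A minimal sketch of that idea in plain JavaScript, where the keypoint numbers and the labels 'head-left' / 'head-right' are made-up stand-ins, not real PoseNet output or the ml5 API:

```javascript
// Euclidean distance between two equal-length feature vectors.
function distance(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// 1-nearest-neighbor: return the label of the closest stored example.
function nearestLabel(examples, features) {
  let best = null;
  for (const ex of examples) {
    const d = distance(ex.features, features);
    if (best === null || d < best.d) best = { d, label: ex.label };
  }
  return best.label;
}

// Pretend keypoints: [noseX, noseY, leftWristX, leftWristY].
// A real PoseNet feature vector would have many more x,y pairs.
const poses = [
  { features: [100, 50, 60, 120], label: 'head-left' },
  { features: [200, 50, 240, 120], label: 'head-right' },
];
console.log(nearestLabel(poses, [110, 55, 70, 118])); // 'head-left'
```

The classifier never sees pixels at all; it only compares lists of numbers, which is why the same KNN object can handle images, poses, or any other vector input.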

  • I don't know why this decides to just, like, die after a little bit.

  • That's, I think, the live workshop disease; "live demo-itis," I think it's called. I don't know why it decided to just stop working.

  • Okay, so that's the sort of, like, story of all of the pieces.

  • Yes, really?

  • So I'm keeping an eye on the time.

  • Thank you.

  • So what I want to do now is I want to start a bit from scratch.

  • And so if you want to follow along, you would go here to this link, web editor templates, and you should see that it says le tank template by Coding Train.

  • And then you have to be logged in; I'm already logged in as Coding Train.

  • But what you would want to do is then save, or I'm gonna do duplicate, and then what you would see is the tank template copy by your username.

  • So I'm gonna give everybody a minute to see if they can get to the point where they have a copy of this.

  • And if you wanna work offline, you could also just do File, Download; it'll download all the libraries and everything, and you could use a different text editor.

  • So I'm just gonna give everybody a minute or two to see if you can get set up with the web editor at this point.

  • Normally, I would start a workshop with just like, Oh, just start a blank sketch.

  • But the reason why I can't do that here is you need a copy of this, because it has a special version of the ml5 library as part of it.

  • So you can check in this left-hand column to make sure you see ml5.js. Anybody get stuck, just raise your hand.

  • And we'll come help you.

  • Oh, yeah.

  • Okay.

  • So thank you.

  • Is the live audience still here? I'm tempted to check. I'm just gonna check real quick on the live audience.

  • Do I have any?

  • Oh, I see People are tweeting about this workshop.

  • That's great.

  • Um uh, where?

  • Let's go to the livestream. Whoa, there's, like, a lot of people watching this. That's crazy.

  • Okay, Okay.

  • Okay.

  • So: the tank template, simply copy it.

  • Okay.

  • Okay.

  • I'm going to move on in a minute, like 30 seconds to a minute.

  • Anybody stuck and want to ask?

  • Okay, if you're too shy, poke the person next to you. We're all nice, helpful people here.

  • I could see a sea of smiling, generous faces.

  • Okay, okay. So I don't know how far we'll get; I mean, we certainly have a fair amount of time.

  • I want to leave some time for people to just try to play around with it a bit on their own.

  • So I'll try to get as far with this as I can in the next half an hour-ish, and then we can always fall back on this premade example, and you can just give that a try.

  • So, you can follow along. The first thing that I want to add is just the live video. So I'm gonna create a variable called video, and then I'm gonna say video = createCapture(VIDEO).

  • I think that's what it is. Let me run this.

  • So if you've never used the p5 web editor before: this is the text editor where you can write JavaScript, and you can stop and restart your program with these stop and play buttons.

  • You can also click this auto-refresh, and it'll rerun the sketch as you're typing, but I'm gonna keep that off right now.

  • There.

  • Yes.

  • Yes.

  • Now you see me next to me, probably.

  • Um okay, so now we have the video.

  • Now, one thing I want to do is, I actually just want to make this smaller. This is not really important for the machine learning aspect, but I just want to take this video and draw it on the canvas.

  • So that's this drawing canvas, which p5 by default is set up to work with. And so I am going to say image(video, 0, 0), and I'm gonna change the video's size to 320 by 240. So now you can see the video is in two places.

  • So three times.

  • So now I'm gonna say video.hide().

  • And so now I just have the video.

  • So this is just a few steps, a few bits of code, to get the video drawing in the canvas.

  • Okay, can you all see this font size?

  • Okay.

  • Okay.

  • So I'm just gonna try to pause, but I'm gonna move fairly quickly through this stuff.

  • But just wave your arms, flail your arms at me, to slow down.

  • Okay, so now the next piece that we need is this thing called the ml5 feature extractor.

  • So I'm gonna make a variable called featureExtractor, and say featureExtractor = ml5.featureExtractor(...). So the feature extractor, and it isn't really in the diagram, is the ml5 class that knows how to get this array of logits,

  • that 1000-dimensional list of numbers, from an image. And it knows how to do that based on a pre-existing model; that pre-existing model is MobileNet.

  • So in theory, we could apply the same technique to other models, but ml5 at the moment really only supports MobileNet, maybe one or two other models.

  • And then I'm also going to give it a callback, modelReady. This is the kind of thing that, by the way, requires an internet connection.

  • I mean, it already requires an internet connection to be using the p5 web editor, but even if you weren't using the p5 web editor, it's gotta load the MobileNet model from the cloud.

  • It's possible to download that and have it locally, but that's not a thing it does by default.

  • So I'm gonna say console.log('model ready').

  • So now if I run this again, we can see down here in the console, at the bottom, it now says "model ready." So I've created the video, and I have made a feature extractor with the video, which is the source of our images.

  • And by the way, you could do this with JPEG or PNG image files; you don't have to do this with live video. That's just the way I'm demonstrating it, in sort of a live, interactive sense.

  • Okay, so now what I'm gonna do is add a function called mousePressed.

  • Probably I'm gonna need some buttons at some point, but this will be fine right now, and I'm just gonna show you something.

  • There is a method, featureExtractor.infer(). I think this is all I need to write.

  • So what this function does, and I might have this wrong, I'd have to look it up, what this function does is say: give me that list of 1000 numbers from an image.

  • That image is the video.

  • And I'm gonna put that in a variable called inputs, and then I'm just going to print inputs. I want to make sure this works, and then I'll explain again what's going on.

  • So this is me just testing.

  • The model is loaded.

  • What I want to do is, when I press the mouse, I should see here in the console a list of 1000 numbers.

  • If that happens, then things are working.

  • Here we go.

  • I don't see all the numbers, but I see something there. So this is very important: you cannot move on if you don't have this step working.

  • So you should be able to see the video, create the featureExtractor object, and then infer. By the way, I'm not 100% sure infer is the best name for this function.

  • So if anybody has any ideas about that, join the ml5 open source project; it's on GitHub, file an issue if you have an idea for the name of infer. But the idea is to do inference, to infer the essence of the image, to take that and give me my list of 1000 numbers.

  • So the weird thing is, though: how come I didn't just say, like, console.log(inputs)?

  • So this is, like, you know, if this were a longer course about machine learning, we would have done a lot of steps leading up to this point.

  • But I'm just going to kind of give you the quick details here if I click.

  • Look at this.

  • Whoa, look at this: isDisposedInternal, shape, dtype, size.

  • So what is this thing, actually? Now, I meant to have both of these print out. Let me just get this back. What I'm describing to you is that the logits is just, like, a plain array of numbers.

  • Yeah, "logits," that was the correct pronunciation. But I'm describing inputs as... I don't know why I called them inputs; that doesn't make any sense. I'm gonna attribute that to jet lag; your brain is only working at part capacity. So this array of numbers is not a plain JavaScript array of numbers.

  • It's a special kind of object called a tensor, and by the way, that's why TensorFlow is called TensorFlow.

  • The idea of a tensor: it's a fancy word for an array of numbers, but of any dimension, so it could be a multi-dimensional array, a two-dimensional matrix, a three-dimensional array.

  • "Tensor" is this generic term, and so the library TensorFlow is named for that, because that's the sort of core building block of machine learning systems, right?

  • Even though we think of, like, oh, in creative machine learning these problems are generating poetry or classifying images, those are just the sort of human dressing we put all around the system.

  • But the inputs and outputs of a machine learning system are always numbers: tensors.

  • So what the feature extractor does is give you this object called a tensor.

  • Now, if I wanted to look at it as a plain array, I could say values = logits.dataSync(). dataSync() is a function that's part of TensorFlow.js that will actually pull the values out of the tensor and put them in a regular array.

  • Now, if I click here, we can see... there we go.

  • We got an array that has 1000 numbers in it, so I could look at it this way if I want.
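To make the tensor idea concrete in plain JavaScript terms: a tensor is just numbers plus a shape, and flattening it into a 1D array is roughly what dataSync() gives you. This toy example uses a tiny 2x3 matrix instead of the real 1000-number logits:

```javascript
// A "tensor" is numbers with a shape. A 2x3 matrix is a rank-2 tensor;
// pulling its values into a flat 1D array is roughly what dataSync()
// does for a TensorFlow.js tensor (minus the GPU synchronization).
const matrix = [
  [1, 2, 3],
  [4, 5, 6],
];

const shape = [matrix.length, matrix[0].length]; // [2, 3]
const values = matrix.flat(); // the plain array: [1, 2, 3, 4, 5, 6]

console.log(shape, values);
```

The real logits tensor has shape [1000] (or [1, 1000]), so its dataSync() result is the flat list of 1000 numbers we just printed.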

  • But I actually that's just for demonstration purposes.

  • I want it as a tensor, because the next thing that I'm going to do is take that tensor and add it to the KNN classifier, right?

  • So what I need now is one more variable called knn. I'm going to say knn = ml5-dot... I'm totally guessing here... classifier? I should look at the documentation or my other example.

  • I should look at the documentation or my other example.

  • Let's just look at this example.

  • ml5.KNNClassifier(); got that right. Make a KNN classifier.

  • And then what I can do is say knn.addExample(): these particular logits are a cat.

  • Logits. Too legit to quit. Okay, sorry.

  • Okay, so, I don't know if I got it 100% right, but let's see if this works.
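For reference, the live-coded steps above might be assembled into a sketch like this. It's a hedged sketch, not the exact workshop code: it assumes p5.js and the workshop-era ml5.js API (ml5.featureExtractor, infer, ml5.KNNClassifier, addExample) are loaded in the page, and the label 'A' is a placeholder. It runs in a browser, not standalone; nothing here executes until p5 calls setup():

```javascript
// Sketch assembled from the live-coded steps: webcam video, MobileNet
// feature extractor, and a KNN classifier that stores logit tensors.
let video;
let featureExtractor;
let knn;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO); // live webcam feed
  video.size(320, 240);
  video.hide(); // hide the default DOM element; we draw it ourselves
  // Loads MobileNet from the cloud; modelReady fires when it's done.
  featureExtractor = ml5.featureExtractor('MobileNet', modelReady);
  knn = ml5.KNNClassifier();
}

function modelReady() {
  console.log('model ready');
}

function draw() {
  image(video, 0, 0); // draw the current webcam frame onto the canvas
}

function mousePressed() {
  // Get the MobileNet logits (a tensor) for the current frame and add
  // them as a training example under the placeholder label 'A'.
  const logits = featureExtractor.infer(video);
  knn.addExample(logits, 'A');
}
```

In the full example, separate buttons call addExample with different labels, and knn.classify() is called in a loop to get live predictions.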

  • Let's see...
