

  • All right.

  • Hello, world.

  • This is CS50 Live.

  • My name is Colton Ogden.

  • Joining me today again is my CS50 buddy, Nick Wong. What are we talking about today?

  • So today we're continuing a little bit of our subject from last stream.

  • We're taking a step back for, like, a moment from images, and we're hopping into neural networks, kind of the biggest buzzword of ML at the moment.

  • Going a little more low-level than images, we're gonna use a little bit easier data set. When I built kind of a sample version of this last weekend, it took, I think, like an hour to run or so. Not great.

  • So we're going to use a very simple data set and figure out just what a neural network is.

  • You know, what is a neuron?

  • Because they're not quite the same as the ones in our heads or in the rest of our bodies, and it's kind of complex.

  • There are a lot of weird terms like backprop and gradient descent and things like that.

  • None of these things do I know.

  • Yeah, I've known them for quite some time before the stream tonight. That's a lot of jargon, but the intent tonight is to elucidate what it all means.

  • And in case anybody's not familiar, CS50 Live is a show hosted here at Harvard University; CS50 is Harvard's introduction to computer science, taught by David Malan, who may make an appearance in the chat.

  • Um, and this is kind of an offshoot of that, where we build things from scratch and talk about concepts.

  • Nick is a regular here on the stream.

  • We've done not only programming things but also operating system related things.

  • Kali Linux, we talked about that at some point, and your first thing was actually machine learning related.

  • It was a k-means classifier, at a very high level really.

  • We used some APIs.

  • We just kind of skimmed over a lot of the math.

  • Now we're going to dive a little bit more into the math.

  • Then we're going to actually end up motivating the high level approach again.

  • But this time we're going to explain each of the kind of lower level concepts that we talked about.

  • Actually, I think in that stream, yeah, the first stream, we used TensorFlow, Google's even higher-level library. Today we're gonna be using just numpy, just matrices, with the goal eventually of continuing, because this is kind of like a new sub-series.

  • Yeah, we're doing a sort of sub-series on building a neural network, talking about machine learning, with the eventual goal of being able to generate images as part of our streams, which we talked about last week.

  • Yep.

  • And we're going towards that.

  • I think that's for next week.

  • That's where we're going to start hitting you with these, like generative neural networks.

  • It's kind of like, we have variational autoencoders.

  • We have kind of like distributional networks.

  • And then we have GANs, which are generative adversarial networks, and those are like state of the art.

  • At the moment, they're super cool.

  • They're the things that generate the really cool landscapes.

  • They produce all these like very sharp images.

  • So even, like, deepfakes are kind of a similar concept, where you can't necessarily tell that it's not a real person, like presidential videos where they're, like, dancing a little wacky.

  • One thing that I think we'll probably explore, and we might even start with it next week because it's a little easier to build from scratch, is style transfer networks.

  • So let's say I give you, like, a van Gogh painting, and I then hand you a modern-day picture of something, right?

  • I transfer that style onto that, and it's beautiful.

  • I mean, they create these, like, incredible works of art, and it's pretty easy to build.

  • Once we have kind of like our basic understanding of, like, what's a CNN?

  • What is convolution?

  • There's a lot of steps along the way.

  • It's kind of overwhelming, but I think it's cool. It is a lot.

  • So if you're ever confused, luckily it's all on video.

  • So if anybody wants to, you can watch on YouTube our first part, where we built the k-means classifier, because we're gonna eventually need to classify images, presumably to be able to generate them, comparing them against those clusters.

  • Exactly.

  • Um, but you can check that out; that's on YouTube.

  • That should be up by the time of this recording.

  • Actually true, Connie says Hi.

  • Oh, "nice job on the podcast." Thank you very much. We are releasing a podcast very soon, probably today.

  • We just filmed it.

  • It's on YouTube and all the major podcast distribution channels, including Spotify.

  • But that's to be formally announced, probably on Facebook and elsewhere, so definitely check that out.

  • Sweet.

  • Why?

  • Sarah says nice music.

  • Thank you very much.

  • Um, cocoa.

  • Thanks so much, everybody, for tuning in.

  • I say, let's just, we have a lot to cover. Why don't we just dive in?

  • Yeah, right.

  • Let me make the studio magical.

  • People can actually see what I'm doing now.

  • We changed our camera lens.

  • It's a lot wider.

  • So?

  • So, boom.

  • There we go.

  • See?

  • My favorite classic cmatrix screensaver.

  • Cool.

  • So we're gonna hop out of that, and we have an empty file to start programming.

  • Kind of.

  • Yeah, it looks like that, I guess.

  • You know, no one's really seen those.

  • Yeah, this Sure.

  • Yeah, I guess I got to do this, but it's a little harder to see.

  • Yeah.

  • Yeah.

  • So we're starting out with, uh, just kind of a raw file, literally nothing in it, zero lines.

  • This is kind of to keep ourselves on track.

  • I do have, like, a reference file in case I can't remember some of the math off the top of my head.

  • Yeah, there's a lot, a lot, a lot of very, very down-in-the-nitty-gritty details.

  • So we're gonna start with kind of like my favorite import, numpy, uh, as np.

  • My IDE's like, oh, you want to abbreviate that? It's a little overeager.

  • I'll start with numpy.

  • And for those that aren't familiar, numpy is a pretty awesome library.

  • It lets us do all sorts of beautiful things mathematically.

  • It also has just kind of inbuilt matrix operations and matrices that make all sorts of math really convenient.
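
For instance, a minimal sketch, not from the stream, of the kind of matrix conveniences numpy gives you (the values here are just made-up examples):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])   # a 2x2 matrix
    v = np.array([5, 6])             # a length-2 vector

    print(A @ v)       # matrix-vector product -> [17 39]
    print(A.T)         # transpose
    print(A * 2 + 1)   # elementwise arithmetic broadcasts over the whole matrix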

  • We're going to have a simple X set, which is going to be a numpy array of, literally, we'll just put [0, 0], [0, 1], and so on.

  • And for those that are kind of curious as to what we're doing right now and why we're doing it, that's a great curiosity to have.

  • Essentially, we're just creating kind of like a toy data set.

  • The reason it's really just kind of like a toy thing is that the function we are testing is deterministic.

  • I can, given an input, tell you immediately what the output is; it's human-computable.

  • It's not particularly interesting, but it motivates using a neural net because its function is not linear, and it's not separable in any way by a linear function.

  • Now, if you're familiar with, like basis functions, you can actually transform this into a basis in which it is linear.

  • But we're going to pretend like you can't do that, in order to force us to use a neural net (there's a quick sketch of that basis trick just after the parity discussion below).

  • So maybe if you're watching very closely, you'll notice that this is the exclusive-or function: given exclusively one 1 as its input, it will return to you a 1; anything else returns zero.

  • So you'll notice that two zeros, that's zero, and two ones, that's also a zero.

  • And this can be generalized to kind of like the parity of an input.

  • So if I give it, like, a hundred zeros and then a 1, it will also return 1. There are variants on exclusive-or, kind of like applications of it, that also carry these properties.
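
Picking up the basis-function aside from a moment ago: a minimal sketch, assumed rather than taken from the stream, showing that adding the product feature x1*x2 makes XOR expressible as a purely linear function of the features:

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

    # Augment the inputs with the product x1*x2 as an extra basis feature
    phi = np.column_stack([X, X[:, 0] * X[:, 1]])

    # In this basis, XOR is linear: xor = x1 + x2 - 2*(x1*x2)
    w = np.array([1, 1, -2])
    print(phi @ w)  # -> [0 1 1 0], exactly the XOR outputs

For the stream, though, we pretend this trick is off the table so that a neural net is actually needed.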

  • But we're gonna start with the simplest possible data set.
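
A minimal sketch of what that toy data set might look like in numpy (the variable names are assumptions, not necessarily what gets typed on stream):

    import numpy as np

    # Inputs: every combination of two bits
    X = np.array([[0, 0],
                  [0, 1],
                  [1, 0],
                  [1, 1]])

    # Labels: XOR of the two bits (1 only when exactly one input is 1)
    y = np.array([0, 1, 1, 0])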

  • You'll notice we're ignoring, like, batching.

  • We're not going to split this into training and validation and testing.

  • We're not gonna do any of that.

  • There's no unlabeled data.

  • This is very much just, like, a toy set.

  • We want to figure out how to build a neural net on top of that.

  • And speaking of neural nets, that's kind of like this term that gets thrown around all the time.

  • It's not necessarily intuitive like what it is.

  • It's a network of some sort.

  • But even that's not super intuitively defined; network meaning things can talk to each other.

  • And there is some path by which they do that.

  • Now, that path sounds kind of simple at this high level of overview.

  • But as we go into it, you'll notice that there's a lot of math involved.

  • And then the neural part is where we'll start, which is, you know, what's a neuron? It's modeled after the idea of, you know, our heads.

  • In our heads we have, like, billions of neurons that all communicate with each other, and they do something that was, for a long time, thought to be very simple.

  • You gave them some sort of stimulus, and they either fired an action potential or didn't; those are the biological neurons.

  • Now that's not quite true.

  • There actually is kind of a hidden complexity to neurons, where they actually have, kind of like, sub-dendrites that poke out, and those can also fire mini potentials.

  • And then certain accumulations of these potentials can then fire the overall neuron's action potential.

  • What that means, and what's interesting, is now we have a way to do nonlinear action potentials; we can do all sorts of wild things in our heads, and it partially also explains why our brains are so complex, or at least gives an intuition for why.

  • Now, the neurons that we're building are a lot simpler.

  • They're modeled on the same concept, but it's not quite, uh, analogous.

  • So someone just posted in our chat: "my brain is a neural network, it only contains zeros, no ones."

  • I'm sure that's not true, but it's a very funny joke.

  • So our neural network, and neural networks in general, consist of neurons, and each neuron is essentially one weight and one bias.

  • Now, to say it that way sounds a little funny.

  • And if you're familiar with this at all, hearing that is a little whack, but that's kind of the intuition: I have some weight, which is just a number that says, hey, any time I'm given an input, I'm gonna apply this number to that input, and I have some bias.

  • So I know somehow that I am off linearly from whatever the actual answer is.

  • If you're familiar with, like, y = mx + b, the bias would be like the b term, and then m might be your weight.

  • Now that's not quite how this works.

  • It's all done through matrix math, so things get generalized a little bit more broadly.

  • But that's kind of the intuition: every time a piece of our input feature is given to a neuron, the neuron that gets it says, I'm gonna multiply you by a weight, add a bias, and pass you on to the next layer.
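
A minimal sketch, with assumed example values, of that intuition for a single neuron: multiply the input by a weight, add a bias, then squash the result with an activation function:

    import numpy as np

    def sigmoid(z):
        # Squashes any real number into (0, 1); a common activation choice
        return 1.0 / (1.0 + np.exp(-z))

    w = np.array([0.5, -0.3])   # one weight per input feature (made-up values)
    b = 0.1                     # the bias term, like the b in y = mx + b

    x = np.array([1.0, 0.0])    # one example input from the XOR data set
    print(sigmoid(w @ x + b))   # the neuron's output, passed on to the next layer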

  • We're gonna build a single layer neural network, which some of you may go.

  • That's silly.

  • And it is, to a degree; maybe, like, 40 years ago it was not so silly, but now it is kind of silly.

  • We have GPUs.

  • We can calculate like hundreds of neurons in one layer at a time and things like that.

  • And this will not restrict us as far as the number of neurons we have.

  • But it will restrict us in the number of layers we have.

  • And the reason for that is there are a couple of algorithms that I want to, like, very much explain very clearly.
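
To make that concrete, here is a minimal sketch, with assumed shapes, names, and random initial weights (not the code from the stream), of a single-hidden-layer forward pass over the XOR inputs, where every neuron in the layer is computed at once via matrix math:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # XOR inputs

    rng = np.random.default_rng(0)
    n_hidden = 4                                # neurons in the single hidden layer
    W1 = rng.standard_normal((2, n_hidden))     # one weight per (input feature, neuron)
    b1 = np.zeros(n_hidden)                     # one bias per neuron
    W2 = rng.standard_normal((n_hidden, 1))     # hidden layer -> single output neuron
    b2 = np.zeros(1)

    hidden = sigmoid(X @ W1 + b1)               # all four examples through all neurons at once
    output = sigmoid(hidden @ W2 + b2)
    print(output.ravel())                       # untrained, so these values are meaningless for now

Training those weights, via gradient descent and backprop, is exactly what the rest of the stream builds up to, so this sketch stops at the forward pass.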