
  • Hi I'm Jake.

  • I have a question for you.

  • How human are you.com?

  • Well, if you're not sure, you can find out, because this website will quiz you on how

  • robotic you really are.

  • I didn't score very high, but we can find out why because the site gives some more info

  • on how computers learn how to mimic human behavior.

  • For example, most programs designed to act like us will deliberately include errors of

  • spelling or grammar to seem more person-like.

  • Some programs, like the predictive text feature on smartphones, use data gathered from users

  • and thousands of sentences to “guess” what you might type next.
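As a rough sketch of how that kind of next-word prediction can work (the sample sentences and the `predict` helper here are invented for illustration, not the actual smartphone feature):

```python
import random
from collections import defaultdict

# Count which word follows which in some sample sentences, then suggest
# the most common follower -- a bare-bones predictive-text idea.
sentences = [
    "i am on my way home",
    "i am running late",
    "on my way to work",
]

followers = defaultdict(lambda: defaultdict(int))
for sentence in sentences:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

def predict(word):
    """Return the most frequent word seen after `word`, or None."""
    options = followers.get(word)
    if not options:
        return None
    return max(options, key=options.get)

print(predict("my"))  # "way" follows "my" in every sample sentence
```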

  • This type of idiomatic sentence construction is also at the heart of r/subredditsimulator,

  • a site on Reddit that creates headlines that sound real-ish.

  • All of the comments on the articles are created by bots as well, which leads to some pretty

  • interesting conversations.

  • This is used on an even greater scale by Botnik, a website that has collected text from various

  • sources, genres and even particular people or shows, and allows you to assist a computer

  • in writing a quote or even a whole episode of shows like Scrubs, Seinfeld, and more.

  • You can also generate music festival lineups and robo tweets.

  • It's a really cool example of machine learning, but allow me to give you some more examples

  • with even more DONGs, things you can Do Online Now, Guys.

  • All these systems are based on a probabilistic model called Markov chains.

  • Markov chains describe random processes in which the probability of each outcome depends on the outcome that came before it.

  • This website gives a really good visualization for how they work.

  • Each chain is made up of multiple “states,” which have a certain chance of either

  • moving on to a new state, or looping back onto itself, starting the process over.

  • For example, let's say each iteration of the chain is the probability of a rainy versus

  • a sunny day.

  • A sunny day is more likely to be followed by another sunny day and rain follows rain,

  • so the probability of the sunny “state” looping back on itself is higher than moving

  • to the rainy “state.”
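Here's a minimal sketch of that two-state chain in Python; the 0.8 and 0.6 looping probabilities are made up, chosen only to match the idea that sun tends to follow sun and rain tends to follow rain:

```python
import random

# A two-state Markov chain: each state is more likely to loop back
# onto itself than to switch to the other state.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"rainy": 0.6, "sunny": 0.4},
}

def step(state):
    """Pick the next state according to the current state's probabilities."""
    states = list(transitions[state])
    weights = list(transitions[state].values())
    return random.choices(states, weights=weights)[0]

# Simulate a week of weather starting from a sunny day.
state = "sunny"
history = [state]
for _ in range(7):
    state = step(state)
    history.append(state)
print(" -> ".join(history))
```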

  • This exact kind of chain is used in structural simulations, such as testing the strength of dams.

  • It's also a nifty tool for “random” name generation, which it's not, because

  • if it were totally random it would sound like this: fprqodpmq.

  • So how do name generators work?

  • Markov chains have the answer.

  • In the English language, the chance of Q being followed by U is pretty much 100%, but the

  • chance of U being followed by I isn't any more likely than it being followed by A or any other

  • vowel.

  • So you end up with words that are technically pronounceable.
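A toy letter-level generator along those lines might look like this; the training names and the `^`/`$` start-and-end markers are my own illustrative choices, not how any particular name generator is built:

```python
import random
from collections import defaultdict

# Record which letter follows which in a handful of training names.
# "^" marks the start of a name and "$" marks the end.
names = ["maria", "marcus", "martin", "melina", "mona", "nina", "nora"]

chain = defaultdict(list)
for name in names:
    padded = "^" + name + "$"
    for a, b in zip(padded, padded[1:]):
        chain[a].append(b)

def generate():
    """Walk the chain letter by letter until the end marker is reached."""
    letter, out = "^", []
    while True:
        letter = random.choice(chain[letter])
        if letter == "$":
            return "".join(out)
        out.append(letter)

random.seed(1)
print(generate())
```

Because every letter only ever leads to letters that actually followed it in the training names, the output is pronounceable-ish even though no real name is stored anywhere.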

  • Depending on the way the system was “trained,” it may even create readable sentences of real

  • words, even if combined they're total nonsense.

  • A great example of this is found in the album Bot Prownies by DADABOTS.

  • The titles were created using a Markov chain, so they're basically random noise, but

  • read like actual song titles, even if, again, they're just a little wrong.

  • Which leads us to another type of machine learning: neural networks and procedural generation.

  • The thing is, this album was not created by musicians, or even humans, but was instead

  • created by a deep-learning algorithm.

  • This algorithm was fed hours of black metal and math rock albums, and was programmed to

  • try to guess what would happen next as it was listening.

  • If it got it right it would strengthen the connection to that particular waveform, and

  • would repeat those guesses hundreds of times until it started to sound more and more like

  • a real piece of music.

  • It's a little wonky, but still sounds like it could have been played by actual humans,

  • even if they sound like the Shaggs.

  • To quote CJ Carr, one of the programmers of the algorithm, “Early in its training, the

  • kinds of sounds it produces are very noisy and grotesque and textural, but as it improves

  • its training, you start hearing elements of the original music it was trained on come

  • through more and more.”

  • Here's the original music it was trained on

  • and here's the computer's approximation of it.

  • Pretty cool huh?

  • Well, not as cool as a three-year-old Vsauce1 video.

  • A while back Michael mentioned a computer program that learned to play old video games

  • distressingly well, going so far as to pause a game of Tetris right at the last minute

  • so it would never lose.

  • That's pretty neat, but how about a computer program that makes video games?

  • Games By Angelina does just that.

  • Although it's still in its early stages, Angelina is being fed data on hundreds of

  • games and topics and uses imagery and its connotations to create its own settings

  • and gameplay.

  • Although it sometimes doesn't work very well, it occasionally has moments of simple

  • genius, like when, given the prompt to make a horror game, it placed blood-red walls and

  • creepy statues all around the environment. Good job Angelina!

  • But using seemingly random generation for video games is nothing new.

  • I'm sure you've experienced it before.

  • The entire “roguelike” genre of video games, in which levels are randomly designed and

  • never repeat, is based on procedural systems similar to Markov chains and machine learning.

  • Although the first game to use this structure was the perfectly named “Rogue,” the first commercially

  • successful version was called Beneath Apple Manor, which you can play right now!

  • This game and others like it start with a “seed” that informs the general pattern

  • that the dungeon will follow, and then starts with a single tile.

  • Each adjacent tile is added according to a chain of probability that increases the chances

  • of various blocks while always allowing the player to get to the end of the level.
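A toy version of that tile-by-tile process might look like this; the tile types, weights, and `generate_row` helper are all invented for illustration, and the point is only that the same seed always reproduces the same pattern:

```python
import random

# Each new tile's type is chosen with weights that depend on the
# previous tile -- a tiny chain of probability, one row deep.
weights = {
    "floor":    {"floor": 6, "wall": 2, "treasure": 1, "monster": 1},
    "wall":     {"floor": 5, "wall": 5, "treasure": 0, "monster": 0},
    "treasure": {"floor": 8, "wall": 1, "treasure": 0, "monster": 1},
    "monster":  {"floor": 7, "wall": 2, "treasure": 1, "monster": 0},
}

def generate_row(seed, length=10):
    rng = random.Random(seed)  # the "seed" fixes the whole pattern
    tiles = ["floor"]          # start with a single tile
    while len(tiles) < length:
        options = weights[tiles[-1]]
        tiles.append(rng.choices(list(options),
                                 weights=list(options.values()))[0])
    return tiles

print(generate_row(42))
assert generate_row(42) == generate_row(42)  # same seed, same dungeon
```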

  • When you're done fighting slime monsters and looting crypts, head over to Brilliant.org/doonline/

  • to sign up for free to learn more about Markov Chains and machine learning.

  • Brilliant was nice enough to sponsor this episode and their mission aligns really well with Vsauce's.

  • The first 36 people that follow the link will get 20% off the annual premium subscription.

  • So I would highly recommend checking it out

  • In the lesson for Markov Chains I have to figure out where a tourist will be at the

  • end of a three-day trip in Thailand, using probabilities from actual tourist data.
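For a sense of what that calculation looks like, here's a made-up three-city version: the transition probabilities below are invented, not Brilliant's actual tourist data, and we just apply them once per day for three days:

```python
# Probability of moving from each city (row) to each city (column).
cities = ["Bangkok", "Chiang Mai", "Phuket"]
P = [
    [0.5, 0.3, 0.2],  # from Bangkok
    [0.4, 0.4, 0.2],  # from Chiang Mai
    [0.3, 0.3, 0.4],  # from Phuket
]

dist = [1.0, 0.0, 0.0]  # day 0: the tourist starts in Bangkok
for _ in range(3):      # apply the transition matrix once per day
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

for city, p in zip(cities, dist):
    print(f"{city}: {p:.3f}")
```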

  • We did it. Yay!

  • Links to all the DONGs can be found in the description below

  • And if you want more DONGs there's a playlist right here, filled with a gaggle of DONGs.

  • All for your enjoyment.

  • right over there

  • I find it interesting that we're talking about computers and machine learning and all these kinds of things

  • I'm actually talking to a computer right now

  • I mean I'm talking to you, but through a camera which is then going to be through a computer

  • And that's you there.

  • So we have two computers in between us

  • so I wonder do either of us exist at all? Or are we just machines?

  • I don't know! Find out in the next DONG episode!

  • Have a wonderful life and as always thanks for watching ok bye.

