
  • (upbeat music)

  • - There has been a lot of news about ChatGPT lately

  • like people using ChatGPT to write essays,

  • ChatGPT hitting a hundred million users,

  • Google launching Bard to compete against ChatGPT

  • and Microsoft integrating ChatGPT

  • into all their products, and also the viral sensation

  • of CatGPT where it can answer all of your queries,

  • but as a cat, meow, meow, meow, meow, meow, meow.

  • ChatGPT, if you don't know already, is a chatbot

  • by OpenAI that you can ask many things.

  • For example, explaining complex topics

  • like explain why I'm a disappointment to my parents

  • or ask it more technical questions like,

  • how do I inherit more money than my brother from my parents?

  • A lot of people are using it to write essays, draft emails,

  • and even write code.

  • So I tried it myself, of course, as a YouTuber obviously,

  • my first question to it was, who is Joma Tech?

  • And it answered...

  • Are you fucking--

  • You know, ChatGPT has a lot of limitations,

  • like here we ask it to name colors

  • that don't have the letter E in them,

  • and this is what it gave us.

  • Orang, yllow, red, that's clearly wrong.

  • In all seriousness,

  • this is to demonstrate how ChatGPT works.

  • It's a pre-trained large language model,

  • meaning it was trained on text data

  • from the internet until the end of 2021.

  • So it won't know anything

  • about things that happened recently.

  • It doesn't have access to the internet.

  • It'll only predict the answer based

  • on what it has consumed already,

  • and the way it answers your question is

  • by predicting each word that comes next.

  • For example, if you ask GPT who Bard is,

  • it's not going to know.
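
To make that next-word prediction idea concrete, here's a minimal sketch using the small open GPT-2 model from Hugging Face's transformers library as a stand-in (ChatGPT's own weights aren't public, so this shows the same mechanism, not the same model; the prompt is just an example):

```python
# Next-token prediction with GPT-2 as a stand-in for ChatGPT's mechanism.
# The model scores every token in its vocabulary; generation is just
# repeating that one-step prediction in a loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on the moon was"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# One step: logits for the next token, given everything so far.
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]
next_id = int(logits.argmax())      # greedy: pick the most likely token
print(tokenizer.decode(next_id))

# Full generation = that step in a loop, appending each predicted token.
output = model.generate(input_ids, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output[0]))
```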

  • You might ask, Joma, didn't your channel launch in 2017,

  • and wasn't ChatGPT trained on internet data until 2021,

  • yet it doesn't know who you are?

  • Yeah, so there's actually a technical reason

  • and fuck you.

  • Recently ChatGPT hit a hundred million users.

  • It launched November 30th, 2022,

  • and this article came out February 3rd, 2023.

  • So it took two months to hit a hundred million users.

  • Who are these users and what are they doing with ChatGPT?

  • Well, it's pretty obvious, they're cheating with it.

  • Everybody's cheating, so much so that

  • some school districts have banned access to ChatGPT.

  • If it can write essays, then it can pass exams.

  • ChatGPT was able to pass exams from law school,

  • business school, and medical school.

  • Three prestigious industries.

  • Now, this is why I went into coding

  • because I always thought that law school,

  • business school, and medical school,

  • it was too much about memorization

  • and you're bound to get replaced,

  • it just wasn't intellectual enough, you know?

  • All right, well,

  • I guess engineering is getting replaced, too.

  • ChatGPT passes Google coding interview,

  • which is known to be hard, but I guess not.

  • But note that it is for an L3 engineer,

  • which means it's entry level. For those not in tech,

  • there's no L1 or L2, it starts at L3,

  • but this does raise questions about how ChatGPT

  • will change engineering jobs,

  • and we're already seeing the change

  • as Amazon employees are already using ChatGPT

  • for coding, though immediately after,

  • Amazon told them to stop, warning them not

  • to share confidential information with ChatGPT.

  • What's happening is they're feeding ChatGPT

  • internal documents, which are confidential,

  • but OpenAI stores all that data.

  • You know, it reminds me of when I used to intern

  • at Microsoft and they didn't let us use Google

  • for searches because they thought Google might spy on us.

  • I was like, relax, I'm an intern.

  • I'm not working on anything important.

  • In fact, I actually wasn't working at all.

  • You know, I was playing Overwatch all day,

  • but yeah, anyways, they forced us to use Bing for searches.

  • One thing that's being underreported

  • in mainstream media is the success of GitHub Copilot.

  • It's probably the most useful

  • and most well executed AI product currently out there.

  • Have I used it?

  • No, I haven't coded in forever.

  • Now, here's how it works.

  • The moment you write your code,

  • it's like autocomplete on steroids, like in this example,

  • it helps you write the whole drawScatterplot function

  • and it knows how to use the D3 library correctly.

  • Another example here, you can write a comment

  • explaining what you want your function to do

  • and it'll write the code for you.

  • Sometimes even the name

  • of the function will give it enough information

  • to write the rest of the code for you.
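
For a sense of what that comment-to-code flow looks like, here's a hypothetical example: you type the comment and the signature, and Copilot proposes a body like the one below. The function and completion here are illustrative, not a recorded Copilot response:

```python
# You write the comment and signature; Copilot suggests the body.
# (Illustrative only: this is the kind of completion it produces,
# not a recorded one.)

# Return only the even numbers from the given list, preserving order.
def filter_even(numbers: list[int]) -> list[int]:
    return [n for n in numbers if n % 2 == 0]

print(filter_even([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```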

  • It's very powerful

  • because it's able to take your whole code base as context

  • and make more accurate predictions that way.

  • For example, if you're building a trading bot

  • and you write the function get_tech_stock_prices,

  • it'll suggest, hey, I know you're going

  • through a rough time,

  • but building a trading bot is not going

  • to fix your insecurities and maybe you should just accept

  • that you'll be a disappointment for the rest of your life.

  • Okay.

  • How did all of this happen?

  • Why is AI so good suddenly?

  • The answer is the transformer model

  • which caused a paradigm shift

  • in how we build large language models (LLMs).

  • By the way, this diagram means nothing to me.

  • It makes me look smart, so that's why I put it on there.

  • Before transformers,

  • the best natural language processing systems used RNNs,

  • and then LSTMs,

  • but then Google Brain published a paper

  • in 2017 called "Attention is All You Need"

  • which is also my life's motto because I'm a narcissist.

  • The paper proposes a simple neural network model

  • they call the transformer, which is based

  • on the self-attention mechanism

  • which I don't fully understand, so I'll pretend

  • like I don't have time to explain it

  • but I also know that it allows for more parallelization

  • which means you can throw more hardware,

  • more GPUs to make your training go faster

  • and that's when things got crazy.
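
For the curious, here's a tiny NumPy sketch of the scaled dot-product self-attention at the heart of that paper, softmax(QK^T / sqrt(d_k)) V. The point is that every position attends to every other position in one matrix multiply, which is exactly what lets you throw more GPUs at it. Sizes and weights below are toy values, not GPT's real ones:

```python
# A minimal NumPy sketch of scaled dot-product self-attention from
# "Attention Is All You Need": softmax(Q K^T / sqrt(d_k)) V.
# Every position attends to every other in one matrix multiply, which
# parallelizes well on GPUs (unlike an RNN stepping token by token).
import numpy as np

def self_attention(x: np.ndarray, w_q, w_k, w_v) -> np.ndarray:
    q, k, v = x @ w_q, x @ w_k, x @ w_v        # project to queries/keys/values
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)            # all pairwise similarities at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                          # each output mixes all positions

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                         # toy sizes for illustration
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)   # (4, 8)
```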

  • They kept adding more data and also added more parameters

  • and the model just got better.

  • So what did we do?

  • We made bigger models with more parameters

  • and shoved a shit ton of data into them.

  • Sorry, I'm trying my best here to make the model bigger.

  • All right, fuck it.

  • Anyway, that gave us ready-to-use

  • pre-trained transformer models like Google's BERT

  • and OpenAI's GPT, the generative pre-trained transformer.

  • They crawled the whole web to get text data

  • from Wikipedia and Reddit.

  • This graph shows you how many parameters each model has.

  • So as you can see, we've been increasing the number

  • of parameters exponentially.

  • So OpenAI kept improving their GPT model

  • like how Goku kept becoming stronger each time

  • he reached a new Super Saiyan form.

  • While editing this,

  • I realized how unhelpful the "Dragon Ball" analogy was.

  • So I want to try again.

  • To recap, transformer was the model architecture,

  • a type of neural network.

  • Other types of models would be RNNs and LSTMs.

  • Compared to RNNs, transformers don't need

  • to process words one by one,

  • so they're way more efficient at training with lots of data.

  • OpenAI used the transformer model and pre-trained it

  • by feeding it a bunch of data from the internet

  • and they called that pre-trained model GPT-1.

  • Back then, NLP models would be trained from scratch

  • for a specific task like translation or summarization.

  • But with transformers, we get to pre-train the model first,

  • then fine-tune it for a specific task.
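
Here's a rough sketch of that pre-train/fine-tune split in code, again with GPT-2 standing in for the pretrained model. The task data and hyperparameters are made up for illustration; a real fine-tune would use far more data and a proper training loop:

```python
# Pre-train / fine-tune split, with GPT-2 as the stand-in pretrained model.
# Step 1 (pre-training on web-scale text) is already done for us;
# step 2 adapts the model to one task with a little labeled data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")   # the "pre-trained" part

# Fine-tuning: a few more gradient steps on task-specific text,
# e.g. summarization formatted as "article ... TL;DR: summary".
examples = ["The meeting ran long and nothing was decided. TL;DR: no decision."]
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for text in examples:
    batch = tokenizer(text, return_tensors="pt")
    # For causal LMs, passing labels=input_ids yields the next-token loss.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```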

  • Then for GPT-2, they did the same thing, but more

  • and with a bigger model, this time with 1.5 billion parameters,

  • and then with GPT-3,

  • they went crazy and gave it 175 billion parameters.

  • However, it's just like raising a kid:

  • shoving a bunch

  • of information at her unsupervised might not be the best way

  • to raise her.

  • She might know a lot of things,

  • but she hasn't learned proper values from her parents.

  • So that's why we have to fine-tune it, tell it what's right

  • and what's wrong, how not to be racist, and clean up its act.

  • That's GPT-3.5, a more fine-tuned version of GPT-3

  • with guardrails that can be released to the public.

  • Now you have a decently well-behaved kid,

  • but you now want to show her off, so you dress her up nicely,

  • get her ready for her first job, AKA more fine-tuning

  • with some supervised training

  • so she behaves properly as a chatbot.

  • That way it's well packaged and is ready to ship

  • to the world with a web UI.

  • Okay, back to the original shitty "Dragon Ball" explanation.

  • So you can think of Goku's hair

  • as the number of parameters: 175 billion parameters,

  • which is why you can see Goku has more hair now.

  • Goku's hair isn't much longer,

  • but it's just styled a little bit differently.

  • 100 trillion parameters.

  • So technically GPT-3 was already amazing

  • but OpenAI was able to package it neatly with ChatGPT

  • which made it user friendly, so it became a viral sensation.

  • So yeah, packaging is important.

  • It caused everyone to really pay attention to this.

  • So how did people react to the viral growth of ChatGPT?

  • People were mind blown and said, Google is done

  • because ChatGPT is going to replace search engines.

  • No, it can't.

  • Until it can search for porn,

  • it cannot replace search engines.

  • Oh, wait, why search for porn

  • when you could generate it?

  • (upbeat music)

  • Anyway, even losing a bit of search volume

  • to ChatGPT would be a big deal for Google

  • since 80% of their revenue comes from ads

  • and most of it comes from search.

  • People were telling Google to release something similar.

  • Google was like, bruh, we have LaMDA,

  • which is basically ChatGPT, but releasing it would be risky

  • as they have much more reputational risk at stake

  • and have to move more conservatively than a startup would.

  • That's foreshadowing by the way.

  • Microsoft is chilling.

  • They positioned themselves really well

  • by investing $1 billion in OpenAI early on in 2019.

  • That allowed OpenAI to leverage Microsoft's Azure

  • for its compute power to train and run their models

  • and Microsoft gets to integrate OpenAI's tech

  • into their products.

  • So if OpenAI succeeds,

  • Microsoft succeeds and remember GitHub Copilot?

  • Well, GitHub is owned by Microsoft, so that's a huge win.

  • Meanwhile, Google is panicking

  • and issued a code red,

  • calling in the OG founders Page and Brin.

  • Actually I have no idea who's who, so...

  • Anyways, but they called them to strategize

  • on how to approach this.

  • Microsoft is fueling the momentum, especially

  • with ChatGPT growing so fast

  • and the tech being so promising.

  • So Microsoft invests another $10 billion

  • into OpenAI for a 49% stake in the company.

  • That money can help OpenAI,

  • I don't know, unlock Super Saiyan 4, maybe.

  • Microsoft also plans to integrate GPT

  • into Microsoft Teams, following the same playbook

  • as they did with GitHub Copilot,

  • which would be huge for them.

  • Google also made some additional moves.

  • Google invests almost $400 million

  • in OpenAI's rival Anthropic, which is pocket change compared

  • to the $10 billion Microsoft invested.

  • If you don't know what Anthropic is, it doesn't matter.

  • It's like the Burger King of OpenAI.

  • Google goes back on their word

  • about not launching a ChatGPT clone

  • and announces Bard AI, a ChatGPT clone.

  • Remember when I said they didn't wanna launch

  • a ChatGPT competitor because of reputational risk?

  • Well, funny enough, that's exactly what happened.

  • The AI made a mistake in the ad

  • and Google shares tanked, losing a hundred billion dollars

  • in market value, and I still own my Google stock from when I worked there.

  • The mistake was Bard said,

  • "JWST took the very first pictures

  • "of a planet outside of our own solar system."

  • But this astrophysicist said, "No, it was not true, Chauvin did."

  • That tweet alone cost me a lot of money.

  • Anyway, Microsoft responded to the announcement

  • by releasing a new Bing with ChatGPT built in

  • to compete with Google search.

  • Meanwhile, we have Meta, who is in denial.

  • Meta's AI chief says,

  • "ChatGPT Tech is not particularly innovative."

  • That is just massive copium.

  • Finally, we got Netflix,

  • who's too busy cracking down on password sharing

  • to care about AI.

  • All right, what about us engineers?

  • What's the future for us?

  • The reality is

  • that GPT isn't replacing anybody's job completely.

  • Like most technological innovations,

  • that change can seem drastic

  • because the media loves dramatic titles.

  • But if you're open-minded, you have time to learn

  • about it and embrace it rather than fighting it.

  • If you're a software engineer and you feel threatened

  • by ChatGPT being able to solve FizzBuzz, oof,

  • then you should maybe consider becoming a YouTuber.

  • Just kidding.

  • Please don't compete with me.

  • Though, you should incorporate ChatGPT

  • and GitHub Copilot into your workflow.

  • It really removes tedious parts of software engineering.

  • If you're working with a new language, API, or library,

  • you don't have to Google, sorry, Google,

  • you don't have to Google endlessly

  • for the stuff you already know.

  • Just break down and describe your problem

  • to ChatGPT to get a huge head start

  • or get good at coding alongside Copilot.
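
As a sketch of that "describe your problem" workflow in code, here's what it looks like through OpenAI's chat API (the official openai Python package, v1+ client style). The model name and the prompt are just examples, and you'd need your own API key:

```python
# Describing a broken-down problem to a GPT model via OpenAI's chat API.
# Requires an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a senior Python engineer."},
        {"role": "user", "content": (
            "I have a CSV of trades with columns ts, symbol, price. "
            "How do I compute a 5-minute rolling average price per symbol "
            "with pandas? Show the code."
        )},
    ],
)
print(response.choices[0].message.content)
```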

  • If you structure your code base well

  • and write good comments that describe what you want to do,

  • Copilot often gets the logic problems right.

  • It's a symbiotic relationship.

  • Become the cyborg.

  • See, the trick here is that, as a software engineer,

  • your job is to translate

  • and break down a business problem into software problems.

  • Your job is to know what questions to ask

  • and what answers to accept.

  • In fact, here's my prediction.

  • GitHub Copilot is not done innovating here.

  • Their next big product release will turn an issue

  • or PR description into an actual full-blown code commit.

  • So as a software engineer in 2024, you better get real good

  • at writing GitHub issues and reviewing PRs.

  • All right, that's it for this ChatGPT video,

  • but I think this ChatGPT narrative is just one battle

  • of a bigger AI war that's happening

  • between Microsoft and Google.

  • I'll talk about that next time.

  • See you and thanks for watching,

  • and remember to call your parents.

  • (upbeat musical effect)
