
  • STEREN GIANNINI: Hello, hello.

  • Welcome.

  • MYLES BORINS: Hi, everyone.

  • Thanks for coming.

  • [APPLAUSE]

  • MYLES BORINS: It's been a long day.

  • It's been a long three days.

  • And I know Steren and myself really

  • appreciate all of you sticking around and coming

  • to our session.

  • So just one more time, thank you everyone.

  • STEREN GIANNINI: It's good to have you with us.

  • This is Myles, Myles Borins.

  • Myles is a developer advocate on Google Cloud

  • and we've been working together for the last year.

  • He also happens to be the director of the Node.js

  • Technical Steering Committee.

  • [APPLAUSE]

  • And of course he's a long time Node.js lover.

  • I think the first time you used Node.js

  • was for an art installation?

  • Something like that.

  • Nice.

  • MYLES BORINS: So this is Steren.

  • Steren is a product manager for Google Cloud Platform

  • on the serverless team.

  • And we've been working really closely on new Node.js

  • technologies.

  • Previously, he was involved in the Stackdriver Error Reporting

  • product.

  • And you used Node in production in 2011?

  • Was that like 0.4 or 0.6?

  • STEREN GIANNINI: 0.6, I think.

  • MYLES BORINS: 0.6.

  • Did anyone start with Node 0.6 in the audience?

  • Right on.

  • That was buggy.

  • I think that's like right when they introduced [? Util. ?]

  • But fun fact about Steren.

  • His wedding invitation was actually written in JavaScript

  • and it's still online.

  • And that means your wedding invitation will never break.

  • STEREN GIANNINI: Never.

  • The web.

  • All right.

  • So who can name this French pastry?

  • So it's called a millefeuille, literally

  • translating to thousand layers.

  • So it's basically a cake made of many, many, many layers.

  • So one of my teachers used to say

  • in French, [SPEAKING FRENCH], roughly translating to: every problem

  • is a millefeuille of simple things.

  • So you should understand every problem can be broken down

  • into many simple things.

  • And I think this is also true for software, for code,

  • but also for software architectures.

  • If you break down your big app into simple things,

  • it will be simpler.

  • Until you break them down into Microservices.

  • So that brings us to Microservices.

  • Why Microservices?

  • Well, maybe your app has grown too big.

  • Maybe it's actually hard to understand, debug,

  • maintain, manage.

  • This is why Microservices can be a solution.

  • You break up the monolith.

  • You decouple those Microservices.

  • You make them simpler to understand and maintain.

  • And potentially each service will

  • be written in a different language,

  • owned by a different team, and deployed independently.

  • As a consequence, they will scale independently

  • if you've decoupled them.

  • The scaling is independent, too.

  • And by making them more fine-grained,

  • that will result in a better utilization of your resources.

  • That will result in a lower bill for you.

  • And lastly, Microservices pair great with Node.js

  • due to the asynchronous nature of Node.js.

  • So Microservices break up the monolith,

  • scale independently, and pair great with Node.js.

  • But what is Node.js?

  • MYLES BORINS: So I was given this slide.

  • Node.js.

  • JavaScript, how many people like JavaScript?

  • Sweet.

  • So I don't know if any of you have ever

  • seen this little emoji thing before, the rocket turtle.

  • That was the Node.js logo.

  • It was kind of like a joke.

  • Substack drew this turtle on a rocket ship

  • because JavaScript was not supposed to be a fast language.

  • It was pretty amazing how fast it turned out to be, both as a

  • runtime, but also for iteration.

  • Node is one of those things that just you have an idea

  • and you can get it done in like 10 or 15

  • minutes because someone's already written the module.

  • You just need to find it.

  • At its base, Node is a JavaScript runtime.

  • We take the V8 virtual machine that

  • comes in the Chrome browser, we wrap it

  • with libuv, which is an underlying architecture

  • wrapping all the system calls.

  • And that kind of abstracts all the different platforms

  • and allows developers to just write JavaScript and be

  • able to do compute, write servers,

  • and all those different things.

  • It allows you to leverage your front end skills.

  • It's event driven.

  • What that means is that compute is driven by events.

  • Something computes when something happens.

  • That can be an HTTP request.

  • That can be some sort of message coming over a socket.

  • It can be user input with a keyboard.

  • It can be some sensor if you're working with IoT.

  • Any form of I/O. So anytime there's

  • input, some sort of compute runs.

  • This event-driven architecture

  • plays really great with async non-blocking I/O. Multiple

  • things can run in parallel without blocking each other.

  • When I make some sort of request over a socket,

  • I'm able to suspend, let other things execute,

  • and resume when I'm done without requiring the developer

  • to program any of this.

  • It just happens.

  • Many events execute without blocking each other.

  • And this ends up being really great for web servers

  • because it means multiple people can

  • connect to the same process at the same time

  • and it just works.

  • You don't have to worry about mutex.

  • You don't have to worry about timing.

  • You just worry about handling the events.

  • So to sum that up, Node.js is a JavaScript runtime

  • that's event driven with async non-blocking I/O.

  • STEREN GIANNINI: And the last term

  • that we need to define from the title of this talk

  • is serverless.

  • So who here is using a serverless product?

  • So we should agree that there are a lot of definitions

  • for serverless.

  • I'm going to highlight here the key characteristics

  • that we will use throughout this talk.

  • The first one that, to me, is the main characteristic

  • is that you pay for what you use.

  • So it's a billing model.

  • If you don't use it, you don't pay,

  • and then the bill is proportional to your usage.

  • Of course, you don't need to provision in advance;

  • your resources are allocated on demand.

  • So why are we able to achieve that billing model?

  • Because of an operational model.

  • So a serverless product will scale with usage

  • without you having to worry about it.

  • It grows with you from a single developer to a company

  • with thousands of developers.

  • It follows your usage.

  • And why can it scale?

  • Because of a programming model.

  • The cloud provider takes care of the rest.

  • You focus on your code.

  • That means that no infrastructure management,

  • no VM management.

  • Everything is abstracted from you.

  • So you focus on your code.

  • That also means that you focus on delivering business value

  • and value to your users.

  • So to recap, serverless, pay for what you use,

  • it scales and allows you to focus on code.

  • These are the principles we will use throughout this talk,

  • and you will see it's actually not only for compute.

  • MYLES BORINS: So Google Cloud has

  • a number of different serverless offerings.

  • We have Cloud Functions.

  • How many people here have been using Node in Cloud Functions

  • so far?

  • Cool.

  • We've got App Engine Standard, we've

  • got Datastore, Cloud Storage, BigQuery, Pub/Sub, Dataflow,

  • ML Engine, the Firebase suite, as well as Apps Script.

  • And all of these different products,

  • what defines them as serverless and makes them all serverless

  • is the fact that you don't have to manage them.

  • We will scale them for you.

  • We will manage them for you.

  • You can utilize them as much as you need.

  • STEREN GIANNINI: And today, we are

  • very proud to share with you the preview of Node.js on the App

  • Engine standard environment.

  • [APPLAUSE]

  • STEREN GIANNINI: Thank you.

  • So yes, we are adding a new language to App Engine

  • standard: Node.js.

  • We will release it in a few weeks.

  • So let's see a small demo of Node.js on App Engine standard.

  • Myles is going to tell you more about it.

  • MYLES BORINS: So this package.json file

  • is outlining a hello world application.

  • We have a single start script that calls node on server.js.

  • Which fun little fact, if you don't define the start script

  • and you just run npm start, that's actually

  • the default command it will just do,

  • which can be really confusing if you're not

  • expecting it to do that.

  • And it has a single dependency, which is Express.

  • Here's our Express app.

  • We're making an app using Express.

  • We have a single route.

  • And we celebrate Node.js on App Engine standard.

  • We listen on an environment variable for the port

  • or fall back to 8080 for local development.
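
A minimal sketch of the kind of server.js being described here; the route path and response text are illustrative assumptions, not the exact demo code:

    // server.js - a sketch of the Express app described above
    const express = require('express');
    const app = express();

    // A single route celebrating Node.js on App Engine standard.
    app.get('/', (req, res) => {
      res.send('Hello from Node.js on App Engine standard!');
    });

    // Listen on the port provided by the environment,
    // or fall back to 8080 for local development.
    const PORT = process.env.PORT || 8080;
    app.listen(PORT, () => {
      console.log(`Listening on port ${PORT}`);
    });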

  • And there's one more thing that we'll

  • need to get this onto App Engine standard.

  • It's just a tiny bit of YAML.

  • This one line of YAML, runtime: nodejs8,

  • is the only extra thing that you need

  • to add to the majority of Node.js apps.

  • And it will work as soon as you run a single command.

  • And that command is gcloud app deploy.
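
For reference, the one-line app.yaml described above and the deploy command look like this:

    # app.yaml
    runtime: nodejs8

    # then, from the application directory:
    gcloud app deploy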

  • So Steren has something loaded up on his computer.

  • So we're going to switch over to it right now.

  • And you're going to see Steren's in the Cloud Shell.

  • He's got that app that we were just showing you.

  • And he's going to type in that command

  • that I just mentioned, gcloud app deploy.

  • So this is how you know it's a real demo.

  • But we haven't actually deployed yet; we're ready to deploy now.

  • STEREN GIANNINI: Ready to go?

  • MYLES BORINS: You're ready?

  • STEREN GIANNINI: Let's do it.

  • MYLES BORINS: So we're starting to deploy.

  • And in the time it takes to deploy,

  • I'm going to try to list all the things I like about it.

  • So it's an idiomatic node runtime environment.

  • So that means we're using the exact version of Node that

  • gets published by Node.js.

  • There's no special runtimes, there are no special APIs,

  • it's the same app you can run in other places.

  • It scales to zero.

  • So when you're not using it, you're not paying for it.

  • It's a managed runtime.

  • So that means everything but your code

  • and the node modules that you're using will get updated by us.

  • If there is a security fix that needs to land into Node

  • or OpenSSL-- and trust me, there

  • are-- we're going to patch that for you

  • without you or your users really ever noticing.

  • Oh, wow.

  • I couldn't even get to it.

  • It's already deployed.

  • You also get npm install in the Cloud.

  • So that also did npm install.

  • You get monitoring for free, and free HTTPS.

  • Wow, that was fast.

  • If we can go, we can see the service.

  • Right now it's already running.

  • STEREN GIANNINI: Sorry.

  • MYLES BORINS: The bar was hiding it.

  • So there you go.

  • This service has been deployed.

  • We're already running it.

  • And this is, in my opinion, a really great user experience.

  • Being able to deploy, have it up in the Cloud so quickly,

  • and be able to share that with people to take a look.

  • So now that we're done with the demo,

  • we're going to talk a little bit more.

  • STEREN GIANNINI: So this was Node.js on App Engine

  • standard environment.

  • It is idiomatic, which means that you

  • can use any module from the npm registry that you want.

  • There is no API or language restriction.

  • As you've seen, it deploys fast and scales instantly.

  • And as Myles said, it is secure.

  • So you get one-click HTTPS for your custom domains,

  • and [INAUDIBLE] updates.

  • So I will now do a deep dive into three updated

  • pieces of our infrastructure.

  • The first one is what we call, internally,

  • faster than light builds.

  • So what's that?

  • When you type gcloud app deploy, we

  • will first only upload the files that have actually

  • changed since last time.

  • This is an optimization for the upload process.

  • The faster than light build arrives after.

  • So what we do is we look at your package.json

  • and package-lock.json to install your dependencies in the Cloud.

  • So we do npm install for you in the cloud.

  • Good.

  • So this is what we defined as the speed of light.

  • But actually, if we notice that your package.json

  • and your dependencies haven't changed, well,

  • what we are going to do is just give you back

  • what we built for you before when you do a new deployment.

  • So we use our caching system to serve you cached modules.

  • And that's how we can go faster than light.

  • This was the first piece of updated infrastructure

  • for this new runtime.

  • The second one is a new runtime environment.

  • So let's look a bit at the technical stack.

  • So first, you gave us your code when you did gcloud app deploy.

  • So this is your application code.

  • As I said, we installed the Node modules for you.

  • including native modules; they can be any modules.

  • Good.

  • Under that, of course, there is Node.js.

  • And here, this is an off-the-shelf Node.js.

  • There's no special customization here.

  • Node is relying on the OS packages.

  • And for example, you might have heard about Headless Chrome.

  • And yes, all of the packages needed to run Headless Chrome

  • are present in the App Engine image.

  • And that lives on top of a regular OS, which is Ubuntu.

  • The last three pieces--

  • Node, OS packages, and Ubuntu--

  • are automatically updated without you

  • having to worry about it.

  • So we'll automatically update your Node.js version,

  • the minor and patch versions of Node.js.

  • The OS packages and the OS are also automatically updated.

  • All of that lives in a secured sandbox.

  • And this is a new sandbox technology

  • that we are introducing with this new runtime.

  • Under the hood, the sandbox is based on gVisor.

  • So gVisor is a container runtime sandbox

  • that we open sourced last week.

  • Check it out.

  • This is our new runtime environment.

  • And the last piece of infrastructure

  • I would like to highlight is automatic scaling.

  • So what is automatic scaling?

  • Let's take an example.

  • So App Engine constantly monitors the instance capacity

  • and traffic, your application capacity and traffic.

  • And it ensures that the appropriate number of instances

  • are available to run your application.

  • So what does that mean?

  • Let's take an example.

  • So as I said, if there is no request arriving

  • to your application, there is no instance scheduled.

  • And when I say instance, it's not a VM instance,

  • it's an abstraction that we call an instance.

  • OK, now some traffic arrives to your application.

  • Well, we will scale up one instance as fast as possible.

  • Here I'm going to pause for a second.

  • I want to highlight one concept that is

  • very important for developers.

  • The very first of these requests really

  • hits what we call a cold start.

  • That means that this request will actually hit the instance

  • while it is starting up.

  • And this usually results in a higher latency than usual.

  • Of course, we try to minimize this, and we do boot up

  • the instance very, very fast.

  • But you also have a responsibility here.

  • You have to ensure that your start command is fast too.

  • So in the start script of your package.json,

  • make sure to not do things like build

  • or a lot of initialization.

  • Do the minimum required to start your application,

  • because this will have an impact on the cold start

  • of your application.
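
A small sketch of that advice, assuming a hypothetical expensive setup step: keep the start script doing the minimum, and defer heavy initialization until it is first needed.

    // Sketch: keep cold starts fast by deferring expensive setup.
    // createExpensiveThing() is a placeholder for any slow initialization
    // (database pools, large file reads, build steps).
    function createExpensiveThing() {
      return { ready: true }; // stands in for real initialization work
    }

    let cached = null;
    function getExpensiveThing() {
      if (cached === null) {
        cached = createExpensiveThing(); // runs on first use, not at process start
      }
      return cached;
    }

    module.exports = { getExpensiveThing };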

  • OK, so here as I said, the very first request

  • will hit this cold start.

  • But the next ones, everything is fine.

  • But then your traffic will start to increase a little bit.

  • And here, before this first instance arrives at capacity,

  • we will anticipate that you need a second one.

  • And we will actually schedule a second one.

  • So that when you actually need it, it's there.

  • Doesn't need to scale more.

  • And now let's imagine your app is featured on a popular blog

  • post and you have a huge spike of traffic.

  • Well, don't worry.

  • App Engine scales very fast.

  • So it will scale up instances as much as needed.

  • You have control: you can tell us a maximum number of instances

  • if that matters to you.

  • And of course, when things go down,

  • well we will ramp down the number of instances to zero.

  • So that's what we call automatic scaling.

  • You did not have to do anything to get this behavior.

  • But this is not new at all.

  • App Engine has been doing that for 10 years, 10 years now.

  • What is new is that actually we updated our automatic scaling

  • algorithm.

  • So the updates include: first, it is more predictable.

  • As a developer, you will better understand how it behaves.

  • When you look at your instance chart,

  • you will better understand it.

  • Second, it actually improves the instance utilization

  • and performance.

  • So we measured a 30% reduction in those cold starts.

  • So that means that the latency

  • of apps across all of App Engine has gone down.

  • And also better utilization means lower bills

  • for everybody using App Engine.

  • And this was released in the last few months.

  • The third point is new settings that we

  • are introducing to better control this automatic scaling

  • algorithm.

  • Those new settings help you balance

  • between cost and performance.

  • They also, as I said, help you cap the number of instances

  • so that you can cap your bill, but also the number of database

  • connections, for example.

  • So those were three pieces of infrastructure we updated.

  • Up to you, Myles.

  • MYLES BORINS: So we built a product

  • with Node.js comprised of Microservices

  • to share with you today.

  • And Steren's going to demo it on the machine for us right now.

  • STEREN GIANNINI: All right, so go for it.

  • MYLES BORINS: So we created a web page watcher.

  • And we're able to put an address into the submission bar

  • and it will start watching it.

  • And at five-minute increments, it's

  • going to use Headless Chrome to take a screen

  • capture of the website.

  • And it's going to compare it to other screen captures.

  • And it's going to only take screen captures

  • when there's actually been a difference in the page.

  • This website allows you to set any number

  • of different websites to watch.

  • And this has been a lot of fun to use on the Google website

  • to see the changes to the Google header.

  • You could use it to watch your own site to make sure

  • that things aren't going down if you have a personal page that

  • isn't changing a lot.

  • But it was a way for us to experiment with breaking things

  • into different services, and put together

  • a really great example.

  • So Steren, did you have anything you wanted to add to that?

  • STEREN GIANNINI: I want to tell you how it's done.

  • MYLES BORINS: That sounds great.

  • STEREN GIANNINI: Let's get back to the slides.

  • So this is our product.

  • This is our application, the website watcher.

  • How did we build it?

  • Well, every five minutes there is

  • a cron job that triggers a first Microservice.

  • This Microservice is called Task Scheduler.

  • The job of this Microservice is to ask our database for a list

  • of web pages to screenshot.

  • When it receives this list, it will actually [? enqueue ?]

  • into a Pub/Sub topic a message for each page to screenshot.

  • This is the first Microservice.
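
A rough sketch of that task scheduler step, assuming a "Page" Datastore kind and a "screenshot-requests" Pub/Sub topic (both names are illustrative, and client library APIs vary slightly between versions):

    // Sketch of the task scheduler: read the page list, enqueue one message per page.
    const { Datastore } = require('@google-cloud/datastore');
    const { PubSub } = require('@google-cloud/pubsub');

    const datastore = new Datastore();
    const pubsub = new PubSub();

    async function schedulePageScreenshots() {
      // Ask the database for the list of web pages to screenshot.
      const [pages] = await datastore.runQuery(datastore.createQuery('Page'));

      // Enqueue a Pub/Sub message for each page.
      const topic = pubsub.topic('screenshot-requests');
      await Promise.all(
        pages.map((page) =>
          topic.publishMessage({ data: Buffer.from(JSON.stringify({ url: page.url })) })
        )
      );
    }

    module.exports = { schedulePageScreenshots };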

  • Then Pub/Sub will dispatch, will push, the web pages

  • to a second Microservice.

  • And this Microservice has one job,

  • which is to take screenshots.

  • So as I said previously, this is using

  • Headless Chrome under the hood, via Puppeteer,

  • to take the screenshots.

  • So it opens a web page, takes a screenshot,

  • saves it into Google Cloud Storage.

  • That's its only job.

  • Now a new file arrives into Cloud Storage.

  • This will send an event that will trigger a Google Cloud

  • function that will compare this new screenshot

  • with the last known reference screenshot for that web page.

  • We call it image diff because the job of that function

  • is to compare two images.

  • And if there is a difference between the images,

  • it will store the new image into a new folder in Google Cloud

  • Storage.

  • So this is how we continuously get

  • the updates of all of the web pages that we want to track.

  • OK, now a user arrives on the website.

  • So an HTTP request arrives at our web frontend,

  • another Microservice, that will query our database asking

  • for a list of web pages.

  • You've seen that list.

  • And when you view the detail of one of those web pages,

  • we actually display the screenshots.

  • So it also queries Cloud Storage.

  • So that's how we built our product:

  • four custom Microservices, all communicating with each other.

  • And now, Myles is going to tell you more about the code.

  • MYLES BORINS: So the first Microservice that we're showing

  • is our screenshot server.

  • You're going to notice we're using async functions

  • because App Engine standard comes

  • with Node 8 out of the box.

  • We're going to start a browser session by awaiting Puppeteer

  • to launch.

  • We're going to then await a new page from the browser object.

  • We're going to await navigating to that URL.

  • We'll await setting the viewport to the size of the image

  • that we want to take.

  • And then we'll await a screenshot in the image buffer.

  • We then await closing the browser.

  • And then we return the image buffer.
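
A hedged sketch of the screenshot function being walked through here; the launch arguments and viewport size are assumptions, not the exact demo code:

    // Sketch of the screenshot service: open the page, capture it, return a buffer.
    const puppeteer = require('puppeteer');

    async function takeScreenshot(url) {
      const browser = await puppeteer.launch({ args: ['--no-sandbox'] }); // start a browser session
      const page = await browser.newPage();                 // new page from the browser object
      await page.goto(url);                                 // navigate to the URL
      await page.setViewport({ width: 1280, height: 800 }); // size of the image we want
      const imageBuffer = await page.screenshot();          // screenshot into an image buffer
      await browser.close();
      return imageBuffer;
    }

    module.exports = { takeScreenshot };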

  • We're doing a lot of asynchronous work in here

  • that is going to allow us to use the same service to pull

  • lots of different images at the exact same time

  • without blocking.

  • We're able to do it in a single linear fashion

  • without having to manage a whole bunch of callbacks

  • or any sorts of promises.

  • Although under the hood, there's lots

  • of different promises going on.

  • Now, this is deployed using App Engine standard.

  • And there were some other talks going

  • on this week about Headless Chrome.

  • And you can find those if you go and check

  • the I/O webpage later.

  • So this is the part of the service that

  • actually saves to a bucket.

  • Similarly we're using Async/Await

  • to be able to do this in a linear fashion.

  • We go to the bucket, we name

  • the images based on a timestamp,

  • and we save it, and await that information, and then log it.

  • And because all of the Node.js client libraries support

  • promises, you're going to be able to use our various client

  • libraries to talk to all of our various servers

  • using Async/Await right out of the box.
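
A sketch of that "save to a bucket" step with the Cloud Storage client library's promise support; the bucket name is an assumption:

    // Sketch: save a screenshot buffer to Cloud Storage under a timestamp-based name.
    const { Storage } = require('@google-cloud/storage');
    const storage = new Storage();

    async function saveScreenshot(imageBuffer) {
      const bucket = storage.bucket('my-screenshots-bucket'); // assumed bucket name
      const filename = `${Date.now()}.png`;                   // name based on a timestamp
      await bucket.file(filename).save(imageBuffer);          // upload and await completion
      console.log(`Saved ${filename}`);                       // then log it
      return filename;
    }

    module.exports = { saveScreenshot };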

  • And then the last thing that we wanted

  • to show you was the image diff from our Cloud Function

  • that's doing the image diff.

  • Every time a new file gets added to a Cloud Storage bucket,

  • this function's automatically called.

  • And what we wanted to just show is how quickly,

  • from the event that the function is executed with,

  • we can grab the file and the context

  • of that file inside of Cloud Storage.

  • So we don't even need to write any logic

  • to tie the two together.

  • We just use one service to save it,

  • and this new service just reacts.
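
A sketch of a background Cloud Function with the Node 8 signature, triggered by Cloud Storage; the diffing logic itself is omitted:

    // Sketch: this function runs automatically when a file is added to the bucket.
    // `file` carries the object's metadata, `context` carries the event metadata.
    exports.imageDiff = async (file, context) => {
      console.log(`New object ${file.name} in bucket ${file.bucket}`);
      console.log(`Event ${context.eventId} of type ${context.eventType}`);
      // ...compare this screenshot with the last reference image and,
      // if they differ, store it in a new folder (as described above).
    };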

  • And one thing that's really great about all

  • these different services is that out of the box,

  • if you start logging, Stackdriver's

  • going to grab all of those logs and you're

  • going to be able to use Stackdriver logging to look

  • at all the logs of what's going on.

  • And I don't know about you, but I never write a service right

  • the first time.

  • I write it locally, and then it works differently in the cloud.

  • Being able to just log and have it working is really awesome.

  • For the demo that we put together,

  • you'll see online later we used Bunyan

  • and have a pretty robust logging system

  • that you'll be able to take a look at.
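
A small sketch of Bunyan-style logging as mentioned here; the logger name and fields are assumptions:

    // Sketch: structured logs with Bunyan, which Stackdriver Logging then collects.
    const bunyan = require('bunyan');
    const log = bunyan.createLogger({ name: 'screenshot-service' });

    log.info({ url: 'https://example.com' }, 'taking screenshot');
    log.error(new Error('capture failed'), 'screenshot error'); // logged with error severity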

  • STEREN GIANNINI: So let's look at the Cloud Console

  • to see those Microservices in the console.

  • So as I said, we do have three App Engine services.

  • So you will see them on the App Engine dashboard.

  • So opening my services-- or we have four because I just

  • deployed one earlier with you.

  • So default, screenshot, task scheduler.

  • We have one cloud function.

  • So if I open it,

  • it will display here.

  • Image diff, as you see, the trigger

  • is a Cloud Storage bucket trigger.

  • Now, as Myles said, out of the box all of your logs

  • are going to be collected into Stackdriver logging.

  • So here, let me filter out some of those logs

  • and only show the application logs that are here.

  • So these are the logs for our app across all of our services.

  • So as you see, we loaded the website,

  • we saved screenshots; all of this

  • gets logged, and with different severities.

  • Here we had an error, apparently.

  • And the last thing I wanted to show

  • is that when your logs go to Stackdriver,

  • they are going to be collected and grouped together

  • by Stackdriver Error Reporting.

  • So what you see here is a list of error groups.

  • So all of your errors, if they are the same,

  • are going to be grouped together in this place as one group.

  • So this is out of the box when you use App Engine, especially

  • Node.js App Engine.

  • That is all for the demo of the resources.

  • Let's go back to the slides.

  • And now I'm going to talk a little bit more

  • about how to connect Microservices.

  • So of course, you can directly call a service.

  • So that's the case for example, when

  • a client calls our front end.

  • So an HTTP request arrives, that's a direct call.

  • It's also the case when you receive a webhook.

  • When you receive a webhook, that would be a direct call.

  • And lastly, two Microservices can call each other directly,

  • of course.

  • Another kind of communication between Microservices,

  • which actually can be quite convenient and useful,

  • is what we call asynchronous execution and events.

  • So if you remember, every five minutes our task scheduler service

  • was scheduled.

  • So this is one kind: scheduled services.

  • Now you remember any time we put a file into Cloud Storage,

  • something happened.

  • The function was triggered.

  • So this is another kind of communication.

  • We call that reactive services.

  • So when something happens, this is

  • triggered by events.

  • Another way to do asynchronous execution

  • is to actually use a task queue.

  • So one service will enqueue a task into a queue.

  • And then this queue will make sure

  • that this task is executed by retrying, if needed,

  • and by dispatching at the rate that you want.

  • So task queues can be quite convenient for

  • asynchronous task execution.

  • And lastly, as we did in our demo,

  • you can use a publisher and subscriber

  • pattern, where one service will publish

  • a message into a Pub/Sub topic, and one or many other services

  • that are subscribed to that topic,

  • will receive the message.

  • And lastly, maybe this is obvious,

  • but sharing storage is also a way

  • to communicate between services.

  • You can have two services sharing the same database.

  • That is a kind of connection.

  • In our case, we used Cloud Datastore and Google Cloud

  • Storage.

  • So connecting Microservices, you can directly call them,

  • you can use asynchronous execution and events,

  • and shared storage.

  • MYLES BORINS: So after kind of talking

  • about all these different types of services and products,

  • you may be wondering when should I use functions?

  • Or when should I use App Engine?

  • Or maybe, when should I use Kubernetes?

  • These decisions are things that teams need to make.

  • So I want to get a little bit more into functions first.

  • So we talked about events and async

  • and that's a lot of the programming model

  • that's used for that.

  • So you have different kinds of triggers.

  • You have Pub/Sub, you have Google Cloud Storage.

  • So whenever something ends up in a bucket, it triggers it.

  • You can have database transactions.

  • One that's really interesting is logs.

  • So you can have a function run every time

  • you see things in logs.

  • You can use that to filter certain kinds of error messages

  • or automate PagerDuty or something fun like that.

  • Maybe not PagerDuty.

  • But you can also use webhooks.

  • Any sort of HTTP can go and call in.

  • One of the things that's nice with Cloud Functions

  • is they can be really single-purpose.

  • It has a well-defined programming model

  • with a really quick way to get to the data that's

  • coming in from the event.

  • One that I really like is an emailer, for example.

  • So you may have a whole bunch of different services

  • that need to send email.

  • So if you have one Cloud Function that

  • just does all the auth and all the message

  • handling to whatever email service you're running,

  • you can make an email topic on Pub/Sub and just publish a to,

  • a from, a body, and a title, and an email will just go out.
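
A sketch of that single-purpose emailer as a Pub/Sub-triggered function (Node 8 signature); the topic and field names are assumptions, and the actual provider call is left out:

    // Sketch: one function subscribed to an "email" topic handles all outgoing mail.
    exports.sendEmail = async (message, context) => {
      // Pub/Sub delivers the published data base64-encoded.
      const { to, from, title, body } = JSON.parse(
        Buffer.from(message.data, 'base64').toString()
      );
      // ...authenticate with whatever email provider you use and send the message here.
      console.log(`Sending "${title}" from ${from} to ${to} (${body.length} chars)`);
    };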

  • And let's say you decide to change your email provider

  • service.

  • You just have to change that one Cloud Function.

  • And no one else needs to refactor their code.

  • As long as you have a defined API surface between all

  • your services that isn't broken, you

  • could change it to any language or any service you want to.

  • So on the other hand, we have App Engine.

  • App Engine, for example, HTTP services

  • are generally what are exposed.

  • Now, it's multi-purpose, though.

  • You could use it for front end applications.

  • You can use it to serve static resources.

  • It does not have a defined function wrapper.

  • So you can just take any Node application for the most part

  • that you've already been running and just launch it.

  • If you have multiple different things that

  • are going on, multiple routes that you're handling,

  • database queries, templating, you may

  • find that running something on App Engine

  • makes more sense for the use case than a Cloud Function.

  • But you can define your own compute relationship.

  • Cloud Functions and App Engine work really great together.

  • If your team prefers thinking about things as functions,

  • run with that.

  • I know entire websites that have thousands of page views

  • an hour, and they've written their entire infrastructure

  • on functions.

  • And there's tons of big Node apps

  • that could just run on App Engine and scale as necessary.

  • You could really break apart the problem

  • as it makes sense to your team as you need to go.

  • And they both share a lot of similarities.

  • They're both going to scale to zero

  • when you're not using them.

  • You're going to pay for what you use.

  • And under the hood, they're using the same sandbox.

  • They're using the same auto scaling.

  • There's no right or wrong.

  • One of the cool things about it, too,

  • is you can start thinking--

  • it almost becomes more of a difference

  • of how you want to think about your compute

  • and how you want to factor your code.

  • But from an OPS perspective, they're very similar.

  • And we also wanted to announce today

  • that Node 8 is going to be coming to GCF really soon.

  • So you're going to be able to use all of these features

  • both on Cloud Functions as well as App Engine.

  • STEREN GIANNINI: All right, to recap,

  • deploying Node.js Microservices, serverless Microservices.

  • What have we seen today?

  • First, breaking up the monolith.

  • So as you've seen, we've built an entire product made out

  • of very, very simple pieces.

  • Each of these pieces is a few files, or even one file

  • for some of them.

  • Break up the monolith.

  • Microservice architectures can help you build rich products

  • from decoupled, small, scalable, maintainable,

  • understandable components.

  • Second, we showed you a preview of Node.js

  • on App Engine standard environment.

  • This is coming in a few weeks.

  • And lastly, what you should remember

  • is that Google Cloud Platform has

  • everything you need to build serverless Microservices using

  • Node.js.

  • And personally, I can't wait to see

  • what you will build with Node.js on App Engine standard

  • and all of those products.

  • [APPLAUSE]

  • MYLES BORINS: Thank you.

  • Thank you.

  • So if you enjoyed the session, you

  • can go and give your feedback through the schedule link.

  • You can use web or Android or iOS, just let

  • us know what you think.

  • Here are some resources.

  • You'll find the docs for the App Engine standard.

  • You're going to see the functions and the demo

  • that we were showing you today is all there.

  • And so just a quick round of applause for everyone

  • who's watching at home on the live stream.

  • [APPLAUSE]

  • And one more quick round of applause.

  • You don't really see it, but there are some amazing people

  • at the back of the house and the back

  • here who have been working all day tirelessly.

  • Just a round of applause for them.

  • [APPLAUSE]

  • And so don't go yet.

  • STEREN GIANNINI: One more thing.

  • One more thing.

  • Look under your seat.

  • No joking.

  • Look over there, we have a stand where you can scan your badge

  • and we will send you invites to try tomorrow,

  • or as soon as we get your e-mails, Node.js on App Engine

  • standard environment.

  • So if you want to give it a try, scan your badge over there.

  • Thank you.

  • MYLES BORINS: Thank you.

  • [APPLAUSE]

  • [MUSIC PLAYING]
