Hi, everyone.
So our story begins with a humble await fetch.
These two lines of code make an HTTP request, wait for a response, and print out the status code.
While this code is waiting, a cascade of activity is taking place in your JavaScript runtime, your operating system and your hardware.
In this talk, we're going to look at what happens between these two lines of code and what waiting can teach us about software design.
Now, you may be thinking: hold up.
Why do I need to think about this stuff while writing JavaScript code, and more so, a hundred feet from a beautiful beach?
Well, I can sympathize.
I've wrestled with motivating myself to dig into these lower-level details before, and I'd like to offer two motivating ideas, maybe enough to get you over the hump.
First:
This isn't trivia.
These are real systems we use every day while writing JavaScript code.
Understanding systems teaches us examples of how problems can be solved, and most importantly, it gives us the opportunity to identify patterns that guide how new systems can be built.
Second, computers are really complex and powerful, and that can sometimes make them feel magical.
But much of that complexity comes from layer upon layer of simple solutions to simple problems.
As JavaScript programmers, these problems are relatable to us because we live in an asynchronous world.
By understanding the problems the lower layers solved, we can gain a greater confidence in the face of this complexity.
As you'll see, a lot of that complexity serves a simple task.
How computers wait.
My name's Max.
I'm visiting from San Francisco, and I'm a developer experience engineer at Patreon.
On the first Wednesday of every month, I co-organize a local meetup called WaffleJS.
You could see the logo on screen.
It's a really fun time.
I worked with a team of some amazingly talented people I've learned a ton from, and if you ever passing through, I'd invite you to come join us.
It's a really fun night.
Or come give a talk; talk to me after this.
So, back to waiting.
Waiting's kind of a strange concept, and it was difficult for me to like, nail down in writing this talk because we rarely do it.
We spend as little time waiting as humanly possible.
If I ask you to wait for something, you're gonna switch to something else to do.
So our use of the word wait is usually in the context of a set of instructions to do something.
And when there's some dependency in that process, we have to wait until our progress is no longer blocked.
Thanks, Andres, for the cute spaghetti image on the slide.
Good recipes take advantage of the idle time by interleaving processes: while you're blocked on one thing, you can start on another.
This is a great analogy to how computers multitask, except computers have thousands of different processes they're carrying out at any given time.
And unlike most recipes, the order in which things could happen is unpredictable.
As programmers, we like to think in terms of sequential steps, because it makes our code simple to reason about: you make an HTTP request, then you wait for the response, then you print out the status.
This is why async/await-style programming is so useful.
It lets us express steps in a clear, linear order, and waiting makes that possible.
It's the glue that allows us to express a series of separate events linearly.
But when we say to wait, we expect that the human or computer on the other side is gonna find something else to do.
In the meantime, this is a behavior we have to design into our computers.
Let's dive into the mechanisms of how this works, starting with a simple microprocessor.
For simplicity, I'm gonna focus on a single-core CPU.
That means it can only do one thing at a time.
Rich's talk, which is immediately after this at around 1:30, is gonna cover threading, which takes advantage of modern CPUs having multiple concurrent cores.
So back to the CPU. We're gonna start with a simple subcomponent called the clock.
The clock periodically pulses an electrical signal on and off, and this is what drives instruction execution in your CPU.
In modern CPUs, we measure the frequency of the clock in gigahertz, meaning the clock is oscillating billions of times per second.
From a reductionist point of view, every single thing a computer does or waits for, begins at one clock cycle and ends with another.
By programming a microprocessor, we can implement a simple kind of waiting called busy waiting.
This is a loop where each cycle we check the condition we're waiting for.
If we want to do other things while we wait, though, we kind of have to intersperse them in this wait loop, and that increases the time between checks.
As the number of things we want to wait for increases, this loop becomes less and less efficient, because you're checking more things each time through the loop.
That's where interrupts come in.
Interrupts signal the CPU when something happens, such as your network interface receiving data, a keyboard key being pressed, or a timer elapsing.
When a processor receives an interrupt, it pauses what's currently running, saves the execution state, and switches to different code called an interrupt handler.
The interrupt handler takes any immediate actions necessary and then determines how to proceed forward.
The code that implements interrupt handlers is part of your operating system.
The OS makes it possible to write higher level programs that don't have to worry about interrupt handling and communication with hardware.
The OS governs switching between which programs are running so that multiple programs can take turns sharing a CPU.
This is called scheduling.
The operating system also provides APIs for I/O called system calls, for things like writing to files and sending packets.
In Linux, most kinds of I/O are represented as operations on streams of bytes.
Here's a couple of examples on screen.
I/O takes time, though: disks take time to perform operations, and network devices take time to transfer data.
When a program is performing I/O, it often wants to wait until that I/O is completed.
A simple model for this is called blocking I/O.
While the program's waiting for an I/O operation, we say that the calling program blocks until that I/O completes.
Once the OS receives an interrupt that the I/O is completed, the process is queued to be resumed by the scheduler.
While that process is blocked, the OS scheduler can run something else in the meantime.
Simple blocking I/O system calls only wait for one thing at a time, though.
If we want to wait for multiple things, we're gonna need some more tools.
Operating systems provide non-blocking versions of many I/O calls for this.
When you open a file or network connection, the OS returns a number that identifies the stream, called a file descriptor.
You can use that fd to reference the stream in other system calls.
So in this example, if an operation would block, the non-blocking read returns an error instead.
If we get an error indicating the read would block, that means there's no data left in the buffer for us to read.
So we can find something else to do instead of pausing execution.
This leaves our process running, so we get to decide what to do instead of blocking.
Non-blocking I/O calls can also be used to wait for multiple things at a time.
For example, we can loop over a set of file descriptors and try to read from each of them.
If there's no data to read, we continue on to the next file descriptor.
However, now we're back to essentially busy waiting.
What we really want to do is block on a set of things that could happen, resuming when any one of them does.
Operating systems provide event-based system calls for this.
A simple one is called select.
select is given three sets of file descriptors, one for each kind of event: streams ready to read, streams ready to write, and streams with errors.
The select call then blocks until an event happens or a specified amount of time elapses.
It then returns to the program which file descriptors can be read from, written to, or have errors.
So here's a really simplified example of how select works.
We pass in a set of file descriptors we're interested in reading from, and then it blocks until one or many of them becomes readable.
When select returns, it gives us the list of file descriptors that now have data available, and we can loop over them and read from them without blocking.
Each operating system provides a slightly different modern implementation of event-driven I/O, though.
In Linux this is called epoll, macOS and BSD have kqueue, and Windows has IOCP.
To write cross-platform software, we have to implement different I/O code for each one.
Some programs do that, but many others use libraries that abstract over the differences between these APIs.
This is where libuv comes in.
libuv abstracts over the varying implementations of event-driven I/O in different operating systems and provides a common interface for them.
libuv is used by Node, which we'll be using as our example JavaScript runtime for the rest of the slides.
I'm guessing there may be a few libuv developers in the audience today.
If you can find one of them, give them a high five for me.
Okay. libuv lets you perform network and file operations and register callbacks for when events happen.
libuv uses the operating system's event-driven I/O constructs to wait for events, and when something happens, libuv executes the registered callbacks and resumes waiting.
This is called an event loop.
And here's an example: to make a network connection, we initialize what libuv calls a handle, which represents some kind of I/O object we can perform operations on.
When we open the network connection, we pass libuv a callback to run when the connection's established.
This is very familiar if you're used to writing JavaScript code and assigning callbacks for when things complete.
To read data from the connection, we need to tell libuv to track when the connection becomes readable.
We provide a read callback, which will be called with the data as it becomes available.
And then the last thing we need to do is run libuv's event loop.
This is gonna block until the next I/O event we're interested in happens, and call the related callback for us.
Node.js is implemented on top of this libuv event loop.
When JS uses a Node API to perform I/O, under the hood Node is calling libuv to perform it.
And then when I/O events happen, Node runs the callbacks or triggers events in JavaScript.
So that was a lot.
Now that we have our cast of characters, let's return to our code.
In order to make an HTTP request, we have to perform several network operations, including looking up the hostname, opening a socket, sending the HTTP request, and receiving data.
I'm gonna gloss over many of those steps so we can just look at a single walk through the layers as we establish a socket connection.
So let's dive in and break this down.
Let's start with the fetch call.
fetch uses Node's HTTPS module to start the request.
It then returns a promise that represents the pending value of the fetch response.
Our JavaScript awaits the response from the fetch.
This tells Node to pause and save the JavaScript state, and then switch to running other queued-up JavaScript code.
We're going to skip over looking up the IP address here and go straight to opening a connection.
So Node uses libuv to open a connection to jsconfhi.com and queues up a callback to run when the connection's established.
To accomplish this, libuv tells Linux to make the connection.
At this point, if Node has no more JavaScript to run, it then calls into libuv's event loop, which will wait for the next I/O event.
Now Node.js is waiting.
Under the hood, libuv is using the Linux epoll API to track these I/O events.
libuv tells Linux it's interested in when the socket becomes writable, then libuv waits as well, blocking until a timeout or the next event happens.
In the meantime, Linux's network stack is busy making the connection.
This involves the operating system's network drivers sending data and the CPU transferring that data to the network device.
While we wait for a response, the operating system is going to switch to running other things.
Eventually, though, the CPU will receive an interrupt we're interested in: the network device has received data.
This interrupt will cause the operating system's network driver to read the data from the network interface.
And then this is gonna continue back and forth for a while until the connection is totally established.
At this point, the connection socket becomes ready to write, which is what libuv is waiting for.
libuv executes any callbacks waiting on file descriptors that became ready, and it just so happens that we have one from Node, waiting for the connection to be established.
Once we've finished processing all waiting callbacks, we've finished our first iteration of the libuv event loop.
There are a couple other things that happen here that I'm glossing over.
There are several back-and-forths similar to the ones we've seen, like the secure TLS connection handshake and actually making the HTTP request.
We're gonna skip over them here.
But when that's all finished, Node's HTTPS module is going to emit a response event.
This is what the fetch promise we initially made is waiting for.
When the promise resolves, the await is ready to resume.
So Node executes our next line of JavaScript code, printing out the status code.
That was a lot to digest, huh?
Let's look at the key parts in one picture.