
  • JUSTIN UBERTI: Hi everyone.

  • Thanks for coming to the session on WebRTC for

  • plugin-free realtime communication.

  • I'm Justin Uberti, tech lead for WebRTC at Google.

  • And with me today is-- hey, has anyone seen Sam?

  • SAM DUTTON: Hey.

  • JUSTIN UBERTI: Sam Dutton, coming to you live from WebRTC

  • on Chrome for Android.

  • [APPLAUSE]

  • SAM DUTTON: On a beautiful Nexus 7.

  • We got this low-res to cope with the Wi-Fi here.

  • That seems to be working pretty well.

  • JUSTIN UBERTI: That was quite an entrance.

  • Why don't you come up here and introduce yourself?

  • SAM DUTTON: Yeah.

  • Hey.

  • I'm Sam Dutton.

  • I'm a developer advocate for Chrome.

  • JUSTIN UBERTI: So we're here to talk to you today about the

  • great things that WebRTC's been working on and how you

  • can use them.

  • So what is WebRTC?

  • In a nutshell, it's what we call realtime communication--

  • RTC--

  • the ability to communicate live with somebody or

  • something as if you were right there next to them.

  • And this can mean audio, video, or even just

  • peer-to-peer data.

  • And we think WebRTC is really cool.

  • But there's a lot of other people who are really excited

  • about WebRTC as well.

  • And one of the reasons is that WebRTC fills a critical gap in

  • the web platform, where previously, a native

  • proprietary app like Skype could do something the web

  • just couldn't.

  • But now we've turned that around and changed that so we

  • have a web of connected WebRTC devices that can communicate

  • in realtime just by loading a web page.

  • So here's what we're trying to do with WebRTC, to build the

  • key APIs for realtime communication into the web, to

  • make an amazing media stack in Chrome so that developers can

  • build great experiences, and to use this network of

  • connected WebRTC devices to create a new

  • communications ecosystem.

  • And these kind of seem like lofty goals.

  • But take this quote from the current CTO of the FCC who

  • said he sees traditional telephony fading away as voice

  • just becomes another web app.

  • So we're trying to live up to that promise.

  • And right now, you can build a single app with WebRTC that

  • connects Chrome, Chrome for Android, Firefox, and very

  • soon, Opera.

  • I'm especially excited to announce that as of this week,

  • Firefox 22 is going to beta, which is the very first

  • WebRTC-enabled version of Firefox.

  • So within a matter of weeks, we will have over one billion

  • users using a WebRTC-enabled browser.

  • [APPLAUSE]

  • JUSTIN UBERTI: And I think that just gives a good idea of

  • the size of the opportunity here.

  • And we expect that number to grow very significantly as

  • both Chrome and Firefox get increased adoption.

  • For places where we don't have WebRTC-enabled browsers, we're

  • providing native, supported, official tool kits on both

  • Android, and very soon, iOS, that can interoperate with

  • WebRTC in the browser.

  • [APPLAUSE]

  • JUSTIN UBERTI: So here are just a handful of the

  • companies that see the opportunity in WebRTC and are

  • building their business around it.

  • So that's the vision for WebRTC.

  • Now let's dig into the APIs.

  • There are three main categories of API that exist in WebRTC.

  • First, getting access to input devices--

  • accessing the microphone, accessing the webcam, getting

  • a stream of media from either of them.

  • Secondly, being able to connect to another WebRTC

  • endpoint across the internet, and to send this audio and

  • video in realtime.

  • And third, the ability to do this not just for audio and

  • video, but for arbitrary application data.

  • And we think this one is especially interesting.

  • So because there are three categories,

  • we have three objects.

  • Three primary objects in WebRTC to access this stuff.

  • The first one, MediaStream, for getting access to media,

  • then RTCPeerConnection and RTCDataChannel.

  • And we'll get into each one of these individually.

  • Sam, why don't you tell us about MediaStream?

  • SAM DUTTON: Yeah, sure.

  • So MediaStream represents a single source of synchronized

  • audio or video or both.

  • Each MediaStream contains one or more MediaStreamTracks.

  • For example, on your laptop, you've got a webcam and a

  • microphone providing video and audio streams, and they're

  • synchronized.

  • We get access to these local devices using the getUserMedia

  • method of Navigator.

  • So let's just look at the code for that and highlight it.

  • And you can see that getUserMedia there

  • takes three arguments.

  • And the first one, if we look at the constraints argument

  • I've got, you can see I'm just specifying I want video.

  • That's all I'm saying.

  • Just give me video and nothing else.

  • And then in the success callback, we're setting the

  • source of a video using the stream that's returned by

  • getUserMedia.
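
For reference, here is a minimal sketch of the three-argument, callback-style getUserMedia call being described (the form current at the time of this talk; the selector and callback names are illustrative, and modern code would use the promise-based navigator.mediaDevices.getUserMedia instead):

    // Ask for video only, nothing else.
    var constraints = { video: true };

    function successCallback(stream) {
      // The stream carries one or more tracks; here, a single video track.
      console.log(stream.getVideoTracks());
      // Use the returned stream as the source of a <video> element.
      var video = document.querySelector('video');
      video.srcObject = stream;  // older code: video.src = URL.createObjectURL(stream)
    }

    function errorCallback(error) {
      console.log('getUserMedia error:', error);
    }

    // In 2013 this was vendor-prefixed, e.g. navigator.webkitGetUserMedia.
    navigator.getUserMedia(constraints, successCallback, errorCallback);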

  • Let's see that in action with a really simple example here.

  • And you can see when we fire the getUserMedia method, we

  • get the allow permissions bar at the top there.

  • Now, this means that users have to explicitly opt in to

  • allowing access to their microphone and camera.

  • And yeah, there we have it.

  • Using that code, we've got video

  • displayed in a video element.

  • Great.

  • What really excites me about these APIs is when they come

  • together with other web APIs, like in this example.

  • What's happening is that we've got getUserMedia being

  • piped into a canvas element, and then the canvas element

  • being analyzed, and then producing ASCII, just like

  • that, which could make a good codec, I think.

  • JUSTIN UBERTI: It would be a good codec.

  • You can compress it using just gzip.

  • SAM DUTTON: Yeah, smaller font sizes, higher resolution.
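
A rough sketch of the pipeline being described, assuming the usual canvas-2D approach (this is not the demo's actual code, and the character-mapping step is only outlined in a comment):

    var video = document.querySelector('video');   // fed by getUserMedia
    var canvas = document.querySelector('canvas');
    var ctx = canvas.getContext('2d');

    function processFrame() {
      // Draw the current video frame into the canvas, then read it back.
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
      // Map each pixel's brightness to a character, e.g. from ' .:-=+*#%@',
      // and render the resulting text in place of the image.
      requestAnimationFrame(processFrame);
    }
    requestAnimationFrame(processFrame);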

  • Also, another example of this from Facekat.

  • Now what's happening here is that it's using the head

  • tracker JavaScript library to track the position of my head.

  • And when I move around, you can see I'm moving through the

  • game and trying to stay alive, which is quite difficult.

  • God, this is painful.

  • Anyway--

  • whoa.

  • OK, I think I've flipped into hyperspace there.
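
For the curious, a sketch of how a head-tracking setup like this can be wired up. The names below follow my reading of the headtrackr library's documented usage, so treat them as assumptions rather than the demo's actual code; moveCamera is a hypothetical game hook:

    // headtrackr estimates head position from the webcam feed.
    var videoInput = document.getElementById('inputVideo');
    var canvasInput = document.getElementById('inputCanvas');

    var htracker = new headtrackr.Tracker();
    htracker.init(videoInput, canvasInput);
    htracker.start();

    // Head-position events are dispatched on the document; a game maps
    // the x/y/z coordinates to camera or player movement.
    document.addEventListener('headtrackingEvent', function (event) {
      moveCamera(event.x, event.y, event.z);  // hypothetical game hook
    });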

  • And an old favorite: you may well have seen Webcam Toy,

  • which gives us access to the camera, a kind of photobooth

  • app that uses WebGL to create a bunch of slightly psychedelic

  • effects there.

  • I quite like this old movie one, so I'll take

  • that and get a snapshot.

  • And I can share that with my friends, so beautiful work

  • from Paul Neave there.

  • Now you might remember I said that we can use the

  • constraints object.

  • The simple example there was just saying, use the video,

  • nothing else.

  • Well, we can do more interesting things with

  • constraints than that.

  • We can do stuff like specify the resolution or the frame

  • rate, a whole stack of things that we want

  • from our local devices.

  • A little example of that, if we go over here.

  • Now, let's look at the code, actually.

  • If we go to the dev tools there, you can see that I've

  • got three different constraints objects, one for

  • each resolution.

  • So when I press the buttons, I call getUserMedia with the

  • matching constraints: the QVGA one gives low resolution,

  • the VGA one gives higher resolution,

  • and for HD, I'm getting the full 1280 by 720.
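
A sketch of three constraints objects along these lines, using the constraint syntax of the time (QVGA is 320x240, VGA is 640x480, HD is 1280x720; modern code would express this as { video: { width: ..., height: ... } }):

    var qvgaConstraints = { video: { mandatory: { maxWidth: 320,  maxHeight: 240 } } };
    var vgaConstraints  = { video: { mandatory: { maxWidth: 640,  maxHeight: 480 } } };
    var hdConstraints   = { video: { mandatory: { minWidth: 1280, minHeight: 720 } } };

    // Each button calls getUserMedia with the matching constraints object.
    function getMedia(constraints) {
      navigator.getUserMedia(constraints, function (stream) {
        document.querySelector('video').srcObject = stream;
      }, function (error) {
        console.log('getUserMedia error:', error);
      });
    }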

  • We can also use getUserMedia now for input from our

  • microphone.

  • In other words, we can use getUserMedia to provide a

  • source node for Web Audio.

  • And there's a huge amount of interesting stuff we can do

  • with that, processing audio using Web Audio, from the mic

  • or wherever.

  • A little example of that here--

  • I'll just allow access to the mic, and you can see I'm

  • getting a nice little visualization there in the

  • canvas element.

  • And I can start to record this, blah

  • blah blah blah blah--
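
For reference, a minimal sketch of the wiring behind a demo like this one: a getUserMedia audio stream fed into a Web Audio source node and then into an analyser, with the visualization and recording steps left as comments:

    var audioContext = new AudioContext();  // vendor-prefixed webkitAudioContext at the time

    navigator.getUserMedia({ audio: true }, function (stream) {
      // The mic stream becomes a source node in the Web Audio graph.
      var source = audioContext.createMediaStreamSource(stream);
      var analyser = audioContext.createAnalyser();
      source.connect(analyser);

      // Each animation frame, read analyser.getByteTimeDomainData(...)
      // and draw the samples into a canvas to produce the visualization.
    }, function (error) {
      console.log('getUserMedia error:', error);
    });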