
  • MALE SPEAKER: Thank you for coming to Edmond Lau's talk.

  • He will be talking about how to be a more effective engineer.

  • I met him about four months ago or so,

  • and it was also during one of his talks.

  • And I was just blown away by it.

  • And I thought it would be so useful to share his experience

  • and knowledge with the rest of Google.

  • He is actually also an ex-Googler.

  • So, he worked at Google before.

  • He also worked at various startups.

  • And this talk will be based on his experience, as well

  • as that of everyone he interviewed, on what

  • makes for an effective engineer and for an effective team.

  • So without further ado, let me introduce to you Edmond Lau.

  • [APPLAUSE]

  • EDMOND LAU: Yeah, it's great to be back.

  • I think it's been maybe 10 years since I first joined

  • Google right out of school.

  • I joined the search quality team.

  • And since leaving Google, I've worked

  • at a bunch of different startups,

  • including Ooyala, Quora, and Quip.

  • And two of those were actually founded by former Googlers.

  • And so, even though I've been away

  • from Google for quite some time, Google culture

  • has actually been a large part of the various startups

  • that I worked at.

  • And earlier this year, I did publish a book, "The Effective

  • Engineer."

  • And what I'd like to do during this talk

  • is share with you some of the stories and lessons

  • that I've collected during these experiences.

  • And so my promise to you during this talk

  • is that you sort of walk away with a framework and a set

  • of actionable strategies that you can apply to your day

  • to day job on how to become more effective as an engineer.

  • But, before we dive into the meat of the talk,

  • I want to do a quick poll and get a show of hands.

  • Who here has pulled a long night on a project for work before?

  • Who here has done that-- who's done work on the weekends

  • before?

  • A large number.

  • Who here has worked multiple months on a project, only

  • to see maybe it not actually launch?

  • What about working multiple months on a project and then

  • see it launch, but not actually sort

  • of seeing the impact or the effect

  • that you actually wanted?

  • And let's do one more.

  • Who here has to do pager duty?

  • And how many of you have been paged in the middle of the night

  • or over the weekends before?

  • Yeah, so it's a good number of people.

  • And when this stuff happens, a lot of times we sort of wonder,

  • was all this time and energy, was it worth it?

  • Was this actually the best use of my time?

  • And this isn't a question that you just sort of ask at Google.

  • It's also a question that we ask at other places as well.

  • I remember when I left Google, the week

  • immediately afterwards, I jumped-- dove head first

  • into my first startup.

  • I didn't even take a week of break.

  • And this first startup was Ooyala.

  • It was an online video company focused on building

  • a platform for the enterprise.

  • It was founded by a former Googler.

  • I remember my very first week, the CTO told me,

  • you're responsible for building this feature that's

  • already been promised to a paying customer.

  • And the video player was written in ActionScript,

  • because it was a Flash-based video player.

  • And it was a language that I didn't know.

  • The servers were all written in Ruby

  • on Rails, which I had no familiarity with.

  • And there was not a single unit test

  • across the entire codebase.

  • And so, I was really scared of just accidentally breaking

  • something in production.

  • And so, my first two weeks there,

  • I ended up pulling 70 to 80 hour weeks to try to get that done.

  • And when I went to talk to my CTO

  • about how intense this felt, his response was sink or swim.

  • There's no life preserver coming.

  • You're just going to have to work hard.

  • And that was a very intense and very stressful

  • time in my career.

  • And so, the next startup I joined

  • wasn't all that different.

  • When I joined Quora, which was a question-and-answer site,

  • the jobs page actually read, you should

  • be ready to make this startup the primary focus of your life.

  • That's sort of how intense the culture was.

  • And at that time, all of these narratives

  • about how you need to work hard to succeed sort of really

  • lined up with how we were thinking.

  • My team and I, we really wanted to succeed.

  • But at the same time, we knew we were the underdogs.

  • While we were building an online video

  • platform for the enterprise, YouTube was free.

  • Our biggest competitor, Brightcove,

  • dominated most of the market.

  • And so we thought, one of our secret weapons

  • was that we're just going to outwork everyone else.

  • We're going to work so hard that we're

  • going to overcome all these obstacles and actually succeed.

  • And it took a while before I started wondering,

  • was this actually the best use of our time?

  • Because we worked on projects where

  • we'd spend multiple months building a feature for a paying

  • customer, but then they would never use it.

  • Or at Quora, we'd build content tools.

  • And it just wouldn't get the adoption from users

  • that we actually wanted.

  • There were all these things where we spent so much time

  • on these projects, by the end of the day,

  • it didn't really lead to any impact.

  • And so, we started sort of wondering,

  • can we actually work smarter?

  • Can we deliver significantly more value

  • while working fewer hours?

  • And it was only after years of actually working

  • these crazy hours that I had this key insight: effort

  • doesn't really equal impact.

  • You may have good intentions, but those good intentions

  • don't actually lead to results.

  • Think about a staff engineer at Google.

  • They probably produce more than twice the impact

  • or twice the output of a more junior engineer,

  • but they're probably not working 2x the hours.

  • Or think about Jeff Dean.

  • So, I heard Google had to sort of invent some new engineering

  • level for him, like Senior Google Fellow.

  • He probably produces more than 10x the impact of most of us

  • in this room.

  • But he's not working 10x the hours

  • because that's physically impossible.

  • There aren't that many hours in the day.

  • And so, the notion that effort is

  • sort of what determines our impact

  • isn't really a correct one, and we need a better framework

  • for really understanding and thinking and reasoning

  • about impact and effectiveness.

  • And so, that's where I sort of came

  • upon this principle of leverage, where leverage

  • is defined as impact that you produce for the time

  • that you invest.

  • It's the rate of return on your investment of time and energy.

  • It's the ROI.

  • It's the rate of impact.

  • If you think about the Pareto Principle, the 80-20 Rule,

  • it's the 20% of work that actually

  • produces 80% of the results.

  • Those are the tasks, those are the activities that actually

  • are extremely high leverage.
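
As a rough sketch of leverage as a prioritization metric (the task names, impact scores, and hour estimates below are hypothetical illustrations, not from the talk):

```python
# Leverage = impact produced / time invested. Rank candidate tasks by it.
# All numbers here are made-up estimates for illustration.
tasks = [
    {"name": "fix flaky test suite", "impact": 8, "hours": 4},
    {"name": "attend status meeting", "impact": 1, "hours": 2},
    {"name": "automate deploy step", "impact": 9, "hours": 6},
]

def leverage(task):
    """Rate of return on time: impact produced per hour invested."""
    return task["impact"] / task["hours"]

# Work on the highest-leverage tasks first.
ranked = sorted(tasks, key=leverage, reverse=True)
```

Ranking by impact-per-hour rather than by raw impact is what surfaces the quick, high-return tasks first.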

  • Now, sort of a central thesis that I want you to take away

  • from this talk is that this central concept of leverage

  • is really the guiding metric that effective engineers should

  • use to determine how and where to spend their time.

  • Now, it might seem a little obvious and maybe

  • a little simple to some of us, but at the same time, oftentimes

  • we're so engrossed in what we're doing,

  • we're so focused on the project that we're working on

  • that we don't really take the time to ask,

  • are we working on the right thing?

  • We're attending meetings

  • that might not be the best use of our time.

  • Or we're fixing the next urgent bug,

  • or we're just fighting the next fire.

  • Or we're working on projects that end up not shipping or not

  • going anywhere.

  • And we don't really take a step back to think about,

  • was our time on these projects actually well spent?

  • Were those activities actually high leverage?

  • And you can think about if you're walking along the road

  • and you see a boulder, that boulder is really hard to move.

  • But if you can find the right lever,

  • then you can apply just a little bit of force

  • and move that boulder out of the way,

  • because that lever amplifies your output.

  • In a similar way, in software engineering,

  • we're looking for those levers where

  • we can amplify our energy, our time and produce massive gains.

  • And so, the next question we might ask ourselves,

  • given that we have this framework,

  • is how do we actually apply it to engineering?

  • What are the highest leverage activities for engineers?

  • This is a question that I really personally wanted to answer.

  • I knew that working 60 to 80 hour weeks

  • simply wasn't sustainable.

  • It wasn't going to help us win in our different markets.

  • Also, I'd spent a lot of time on engineering hiring

  • and recruiting.

  • I'd screened thousands of resumes and interviewed

  • maybe 500 candidates.

  • And I really want to know, how do we actually

  • identify the most effective engineers to hire for our team?

  • I'd also spent about maybe a year and a half building

  • the onboarding and training programs

  • for engineers at Quora.

  • This was basically what every new engineer at Quora

  • would go through.

  • And I wanted to know, how do we actually

  • train engineers and teach them to be more effective?

  • And so, I really wanted to know the answer to this question.

  • And so, that's what started me on a quest

  • where I quit my job at Quora and then basically took two years

  • to explore this question.

  • And I went around Silicon Valley.

  • I interviewed a bunch of engineering leaders

  • from different companies.

  • So, VPs, directors, managers, tech leads,

  • people from larger companies like Google, Facebook,

  • LinkedIn, Twitter.

  • A few were from medium-size companies like Dropbox, Square,

  • Airbnb, Box, Etsy.

  • And even smaller startups: Instagram, Reddit, Lyft,

  • which at the time was pretty small.

  • And I grilled them.

  • I asked them really hard questions.

  • Asked them, what separates the most effective engineers

  • you've worked with from everyone else?

  • What are the most valuable lessons

  • that you've learned in the past year?

  • What investments have you made for your team that

  • have paid the highest returns?

  • I really wanted to know what from their experiences proved

  • out to be the highest leverage activities that engineers

  • everywhere should be focusing on.

  • And now 22 months later, I basically

  • had a collection of stories, lessons, and actionable strategies

  • on how to be a more effective engineer.

  • Now, everyone's story was different,

  • but there were a lot of common themes.

  • And in this book, I distill a lot

  • of those common themes into actionable strategies

  • that you and I can apply in our day to day jobs as engineers.

  • And so, what I'd like to do for the rest of this talk

  • is actually share with you five of those high leverage

  • activities for engineers that I think

  • would be really useful for us to incorporate in our day

  • to day jobs.

  • The first high leverage activity really

  • starts in our own mindset, how we

  • think about our own development as engineers.

  • And that activity is optimizing for learning.

  • This mantra is something that has guided all of my own career

  • decisions.

  • Every change I made from company to company or from company

  • to writing a book was because I saw an opportunity

  • to actually increase my learning rate.

  • And this is really important because learning

  • is something that actually compounds over time.

  • What you learn today sets you up for other opportunities

  • and other learning that you might have in the future.

  • And when I say that it compounds over time,

  • I actually mean a few things.

  • One, this curve of learning is actually exponential.

  • Just like the financial investments

  • in your investment portfolio

  • compound over time exponentially,

  • investments in yourself

  • do the same thing.

  • So, the second implication is that the earlier

  • on in your career that you invest

  • in yourself, the more time that learning

  • has to compound in the long run.

  • And thirdly, even small deltas in your learning rate

  • can have tremendous impact on how much

  • knowledge and how much learning you have in the long term.

  • And so, it's really hard actually

  • to quantify how much we're learning, but suppose

  • you actually could.

  • Suppose you could actually improve yourself by 1% per day.

  • What would that mean?

  • Well, it would mean by the end of the year, after 365 days,

  • you'd actually be 37 times better than you

  • were at the beginning of the year.

  • That's a huge difference.
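
The compounding claim is easy to verify directly; a minimal check:

```python
# Improving by 1% per day compounds multiplicatively:
# after 365 days you are (1.01)^365 times better.
daily_rate = 1.01
days = 365
growth = daily_rate ** days  # ~37.8
```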

  • And the best engineers and the best engineering leaders

  • actually aggressively and relentlessly optimize

  • for their own learning.

  • One of the pieces of advice Tamar Bercovici, who

  • is a senior engineering manager at Box,

  • tells all of the engineers she manages,

  • is to own your own story.

  • What she means by that is that you need to take

  • control and take ownership of your own learning and growth.

  • Rather than waiting for opportunities to come to you,

  • find opportunities to really invest in yourself.

  • Read books, take classes, work on side projects, attend talks.

  • These are all ways that you can invest in yourself

  • and then those investments will compound

  • in your career in the long run.

  • And so, when you think about growing your career,

  • ask yourself how might you improve yourself

  • every single day?

  • And commit to making a habit out of that.

  • Now learning is one thing that compounds.

  • But another thing that also compounds

  • and that also is very high leverage

  • is actually investing in your own iteration

  • speed, how quickly you can get things done.

  • The faster we can get things done,

  • the more impact we'll have in the long run.

  • And because we're all engineers, one

  • of the best ways that we can invest in our iteration speed

  • is actually by investing in tools.

  • When I was talking with Bobby Johnson, who's

  • the former engineering director at Facebook,

  • he made the observation to me that he

  • found that people who are successful, almost all of them

  • wrote a lot of tools.

  • He said the strongest engineers on his team

  • spend probably a third of their time working on tools.

  • Tools to do monitoring, tools to make debugging easier,

  • tools to just glue the system together.

  • But the surprising thing was that he

  • said this wasn't actually obvious on his team.

  • Because a lot of engineers, we like

  • working on the new shiny tool.

  • We want to be the author of some new system.

  • We want to build this new data storage system

  • and be the author of that.

  • When in reality, even more mundane tasks,

  • such as investing in tools and investing in your own iteration

  • speed, can be extremely high leverage and have huge payoffs.

  • This is sort of the reason why a lot of big companies, Google,

  • Facebook, Twitter, LinkedIn, Dropbox,

  • they all have teams that focus on development tools.

  • Because if you can decrease build times by, say,

  • even one minute per build, and engineers are building,

  • say, 10 times per day, and you have 1,000 engineers,

  • that's one engineering-month saved per day.
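
The arithmetic behind that estimate, spelled out (the 8-hour working day is my assumption):

```python
# One minute saved per build, 10 builds per engineer per day,
# across 1,000 engineers:
minutes_saved_per_day = 1 * 10 * 1000                   # 10,000 minutes every day
engineer_days_saved = minutes_saved_per_day / (8 * 60)  # ~21 eight-hour days,
                                                        # roughly one engineering-month
```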

  • I remember when I joined Google back in 2006

  • and I was compiling the Google web server on the search quality

  • team.

  • It was something that you kicked off and then you went home

  • and it compiled itself overnight.

  • That's how long it took.

  • And by the time I left in 2008, sort

  • of when Blaze was getting introduced, build times I think

  • dropped to around 20 minutes.

  • And that was huge.

  • I'm sure it's probably dropped down even further since then.

  • I'm not sure how fast it is now.

  • I see some shakes of the head.

  • But all of these investments in build times

  • are extremely high leverage.

  • Because it means that, instead of doing things

  • in large batches, you can do them

  • more iteratively over time.

  • One of the proudest accomplishments

  • that we had while at Quora was we

  • built a system where we could actually

  • deploy code to production 40 to 50 times per day.

  • Every time an engineer pushed a commit to Git,

  • it would automatically kick off a suite of a few thousand unit

  • tests.

  • And if all those tests passed, that code

  • would then ship to a canary machine, that would then

  • run another battery of tests.

  • And if that passed, the code would be automatically shipped

  • to all our web machines in production.

  • And that entire process only took six or seven minutes,

  • which meant that we could push code to production 40, 50 times

  • a day.
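
The gating logic of that pipeline can be sketched roughly like this (the function names and return values are hypothetical; the real Quora system is not public):

```python
# Sketch of a continuous-deployment gate: every commit runs unit tests,
# then canary tests, and only ships to production if both pass.
def deploy(commit, run_unit_tests, run_canary_tests, ship_to_production):
    if not run_unit_tests(commit):        # suite of a few thousand unit tests
        return "rejected: unit tests failed"
    if not run_canary_tests(commit):      # second battery on a canary machine
        return "rejected: canary tests failed"
    ship_to_production(commit)            # roll out to all web machines
    return "deployed"
```

The point is that a bad commit is stopped automatically at the earliest stage, so shipping becomes routine rather than a hand-managed event.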

  • And that changed deployments

  • from this one-off event

  • that a team has to do maybe every week, maybe even every day,

  • to something that was a very normal part of development.

  • And it meant that if we had a question, like how often is

  • this feature that's on our web page actually being used,

  • an engineer could just add a simple log line,

  • push that code to production, and then start getting

  • some data back within minutes.

  • So, a lot of these questions that

  • are impossible to answer with a slower workflow, we

  • were able to answer because of a system

  • of continuous deployment.

  • And you might wonder, when does it make sense for us

  • to invest in tools?

  • And a good rule of thumb I got from Raffi Krikorian,

  • who grew the infrastructure team at Twitter

  • from about 50 engineers to 450 engineers,

  • is one he used to tell his team: if you have to do something

  • manually more than twice, you should build

  • a tool for the third time.

  • And that's a good rule of thumb, because we

  • tend to underestimate how often we're going to need

  • to manually do certain tasks.

  • Sometimes when we get started, we do things manually

  • because it seems simpler.

  • And then requirements change, or we mess up

  • during our manual steps.

  • And then we end up spending a lot more time

  • than we actually could have.

  • Instead, if we had invested in tooling and automation sooner,

  • we could save ourselves a lot of time.

  • And so, when you're working on a project,

  • you should also always ask yourself:

  • what are the bottlenecks that you face during development?

  • How can you speed those things up?

  • Because all of those improvements

  • will sort of compound in how fast

  • or how quickly you can get things done.

  • You can also ask the same question when you're debugging.

  • Say you're building an Android app

  • and you're debugging this photo sharing

  • flow that's a few clicks away from the home screen.

  • Could you wire up your flow so that when you start the app,

  • you land right on that screen?

  • Simple tweaks like this to optimize your own debugging flow

  • and to really optimize your iteration

  • speed can make a huge difference in the long run in how

  • productive and effective you are.

  • I've talked a lot about how to get things done quickly.

  • But another important question to think about

  • is not only how to get things done quickly,

  • but how to get the right things done quickly.

  • And so, another key high leverage activity

  • is actually validating your ideas aggressively

  • and iteratively.

  • And a good example of how to do this right as well as how

  • to do this wrong actually comes from Etsy.

  • Etsy is a company that sells handmade goods online,

  • and last year they hit almost $2 billion in revenue.

  • And they had this one project where

  • they were trying to build infinite scrolling

  • on the results page.

  • So, when you type a query at Etsy,

  • you see a bunch of products.

  • And the results page is paginated.

  • And they were exploring, maybe we

  • should add infinite scrolling.

  • Similar to how on Google Images, if you scroll down on the image

  • search page, results just sort of keep loading onto that page.

  • And so they'd spent many months building this out,

  • ironing out bugs, and just when they were about to ship,

  • they decided, we should probably test this.

  • And so, they ran an A/B test.

  • And they found that click through rates actually dropped

  • by 10%.

  • Purchase rates actually dropped by nearly 25%.

  • So, there was no way they were going to actually launch

  • this product.

  • Then they actually spent some time

  • trying to figure out why this wasn't working.

  • And they realized that for this product change

  • to actually work, it sort of relies on two assumptions.

  • One is that if we show more results to users,

  • they'll be more likely to buy products.

  • And the second is that if we show results

  • faster to the users so that they don't actually have to page,

  • they would also choose to buy more.

  • But the key insight is that each of these assumptions

  • could have been independently validated with much less work.

  • If the assumption is that if we show more results to users,

  • they will buy more, they could have just increased

  • the page size.

  • And in fact, when they did that after the fact,

  • they found that that really had no impact on purchases.

  • The assumption that maybe a faster page

  • will lead to more purchases, that's something a little bit

  • harder to test.

  • But they got a little creative.

  • They artificially added some latency

  • to some fraction of users, made the page slower.

  • And when they tried that test, they

  • found out that also didn't have much of an impact.

  • Each of those tests was much easier to run.

  • Very little engineering effort required.
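
A latency experiment like that can be sketched in a few lines (the hash-based bucketing and the 200 ms delay are my assumptions, not Etsy's actual implementation):

```python
import hashlib
import time

def in_slow_bucket(user_id, fraction=0.5):
    """Deterministically assign a stable fraction of users to the 'slow' arm,
    so the same user always gets the same experience during the experiment."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return digest[0] / 256 < fraction

def render_page(user_id, render):
    if in_slow_bucket(user_id):
        time.sleep(0.2)  # inject ~200 ms of artificial latency
    return render()
```

Comparing purchase rates between the delayed and undelayed arms then answers the "does speed drive purchases" question without building anything.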

  • And if they had run those tests first,

  • then they would have realized their basic assumptions

  • behind infinite scrolling didn't even prove out in the wild.

  • And so, there was no real reason to actually invest all

  • that time and effort to build infinite scrolling.

  • They sort of took these lessons to heart

  • when they worked on a different project, which

  • was rebuilding the product page when you click through

  • on a search result.

  • They actually went through 14 different iterations

  • of this page when they were redesigning it.

  • And they broke down the redesign project

  • into a number of different testable assumptions.

  • Does showing more related products on the page actually

  • decrease bounce rate?

  • Does showing the price in your native currency

  • increase purchases?

  • Does showing a default shipping option

  • and the price of a default shipping

  • option make a difference?

  • Does swapping sides of the page make a difference?

  • They broke down each of these assumptions, each

  • of these hypotheses, and tested each one.

  • And there were a bunch of ones that didn't work.

  • They failed.

  • But, all of the information that they collected from these 14

  • different tests helped them build confidence

  • in theories about which changes did matter,

  • which ones did make an impact.

  • And so with that, they were able to launch a redesign

  • that, when I talked to Marc Hedlund, who

  • was the former VP of product engineering at Etsy,

  • he said was actually the single largest

  • experimental win in Etsy's history.

  • It was the most successful project

  • that they launched in terms of purchasing performance.

  • And it was only possible because they

  • learned that experiment driven product design

  • is a very powerful tool.

  • Now, this is something that I think,

  • at least when I was in search quality,

  • Google did pretty well.

  • They run a lot of A/B tests.

  • They sort of validate a lot of changes

  • to prove these changes are actually improving searches

  • and improving the bottom line.

  • But another take away from Etsy is

  • that incrementally validating your assumptions

  • is a very high leverage technique.

  • If you can break down a large problem into smaller testable

  • hypotheses, and then evaluate each one,

  • you can really build confidence in what works

  • and what doesn't work.

  • If a test proves that your assumption is correct,

  • then you have more confidence that the path you're going down

  • is the right path.

  • If it proves your assumption is incorrect,

  • then that means you maybe need to revise your project plan.

  • At the very least, this could save you months of effort.

  • And in startups, there is this idea

  • of focusing on the minimum viable product.

  • What's the smallest version of the product that

  • takes the least amount of effort to build that you can

  • show to real users and get feedback

  • and validation that what you're building, what you're designing

  • is actually the right thing to do?

  • That's a very powerful idea.

  • Now, while Google, especially search,

  • does a good job with its A/B testing,

  • I think the idea of building this minimum viable product

  • is an idea that they can definitely leverage more.

  • Because a lot of times at Google,

  • we sort of build products for speed.

  • We optimize for performance, we optimize for scale

  • rather than really focusing on asking the question.

  • Like, is this product actually the right

  • thing to build in the first place?

  • Because it doesn't matter if something is really performant

  • if it's not the right thing that users actually want.

  • One of the good rules of thumb that I got from Zach Brock,

  • who was an engineering manager at Square,

  • is that he would constantly ask his engineers, what's

  • the scariest part of the project that you're working on?

  • That's the part with the most unknowns, the most risks.

  • That's the part you should actually tackle first.

  • Because you want to reduce and tackle the risk

  • head on, so that if it turns out these risky areas

  • aren't doable or don't impact your bottom line,

  • you can cut your losses short and move on.

  • And so, when you're working on projects,

  • you should really ask yourself, can you

  • break this down into smaller testable hypotheses?

  • Can you use an inexpensive test to validate

  • that what you're doing is the right thing?

  • How might you expend 10% of your effort upfront

  • to validate that the project you're working on

  • will actually work?

  • Because that 10% is a very high leverage use of your time.

  • It can save you months of wasted effort further on.

  • Validation definitely is very important

  • and can save you wasted effort.

  • But another powerful technique, another high leverage activity

  • that can help you reduce wasted effort,

  • is this idea of minimizing operational burden.

  • A lot of times, we spend so much time maintaining and operating

  • our software that we don't really have time

  • to actually build new things.

  • And if we can minimize the operational burden

  • that we actually have on a day to day basis,

  • we can spend a lot more time actually focusing on impact.

  • And a great story which really illustrates this point

  • comes from Instagram.

  • When they were acquired in 2012 by Facebook

  • for a billion dollars, they actually

  • only had a team of 13 employees.

  • And of those 13 employees, only five of them were engineers.

  • They had a really small team supporting

  • over 40 million users.

  • So by any metric, this was a pretty effective team.

  • And I was curious, what was it that allowed the Instagram

  • team to be that effective?

  • And so, I sat down with Mike Krieger, the co-founder

  • of Instagram, and I asked him, was there

  • any secret sauce to this?

  • What was your key technique?

  • And he said one of the most important mantras that they

  • had on Instagram was to do the simple thing first.

  • They actually had this on posters in the office.

  • During design meetings, when someone

  • was proposing a new feature, they would actually

  • challenge each other, is this the simplest thing

  • that we can do?

  • If not, why are we adding all this complexity?

  • The key insight here was that every feature they added,

  • every system they added, was another potential fire

  • that they might have to put out in the future.

  • And with such a small team, they couldn't

  • afford to spend all that time just putting out fires.

  • And they were very aggressive and very stringent

  • about cutting sources of complexity.

  • And this is actually a very common theme.

  • And I mentioned in the beginning how I went around and asked

  • a lot of leaders, what's the most valuable lesson you've

  • learned in the past year?

  • And it turns out that a lot of engineering leaders,

  • including Mike Curtis, who is the head of engineering

  • at Airbnb, or Chris Lambert, who's the CTO of Lyft,

  • and a bunch of other leaders, all

  • said that they wished they had made things simpler.

  • They wished they hadn't added so much complexity

  • in the last year.

  • And the reason why this was such a common theme

  • is because we often ignore the hidden costs that

  • are associated with complexity.

  • When we introduce sources of complexity,

  • we're actually introducing an ongoing tax

  • that we have to pay as we develop software in the future.

  • And this tax actually comes in a variety of forms.

  • On the most basic level, there's code complexity.

  • When we're dealing with very complex code,

  • it's hard to ramp up on.

  • It's hard to understand.

  • It's hard to reason about.

  • And as engineers, sometimes we see complex code

  • and then we tiptoe around it.

  • We decide, we're not going to touch this code base

  • because it's a little hairy.

  • And then, we miss out on opportunities to actually build

  • things that might be impactful.

  • Or we do things in a roundabout way

  • just so that we can avoid the complex piece of code.

  • Besides code complexity, there's also system complexity.

  • So, how many moving pieces are there actually in your system?

  • I was teaching a workshop at Pinterest.

  • It was a five week workshop for engineers

  • on how to be more effective.

  • And one of the stories I gathered from them

  • was that back in 2011 when they were scaling their site,

  • they actually had six different data storage systems.

  • They were running Cassandra, MySQL, Membase, Memcached,

  • MongoDB, and Redis.

  • And they actually only had a team of three back end

  • engineers.

  • That's insane.

  • There were, like, two systems per engineer.

  • Each system on paper claimed that it

  • would solve some scalability problem that they were facing.

  • But in reality, each system just failed in its own special way.

  • It was just another fire that they had to put out.

  • And so, when we make choices that

  • really sort of fragment our infrastructure,

  • there are a lot of ongoing taxes that we have to pay.

  • It means that we're less able

  • to pool together resources to really strengthen

  • libraries and abstractions for a particular system.

  • It means we have to take time to understand the failure

  • modes of each system that we introduce.

  • It means that every new engineer who joins the team

  • now has to ramp up on one additional system.

  • And so, it took a while, but in the end,

  • Pinterest finally learned that it's much simpler

  • to have a system, a scheme, where you can just

  • add more machines to scale.

  • And so, they cut out a lot of the other systems

  • that they didn't really need and ended up

  • with a much simpler system that they could actually maintain.
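
The talk doesn't say which scheme Pinterest converged on, only that it was one where "you can just add more machines." One common way to get that property is consistent hashing, sketched below in Python purely as an illustration (the node names and replica count are made up):

```python
import bisect
import hashlib

def _hash(key):
    # Stable 128-bit hash of a string key.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring: each node owns many points on
    the ring, and a key maps to the first node point at or after
    its hash. Adding a node remaps only ~1/N of the keys."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas
        self._hashes = []   # sorted virtual-node hashes
        self._owner = {}    # virtual-node hash -> node name
        for node in nodes:
            self.add(node)

    def add(self, node):
        for i in range(self.replicas):
            h = _hash(f"{node}:{i}")
            bisect.insort(self._hashes, h)
            self._owner[h] = node

    def lookup(self, key):
        h = _hash(key)
        # First virtual node clockwise from h, wrapping around.
        idx = bisect.bisect(self._hashes, h) % len(self._hashes)
        return self._owner[self._hashes[idx]]
```

With a scheme like this, adding a fourth node to a three-node ring remaps only roughly a quarter of the keys, which is what makes "just add more machines" a workable scaling story rather than a full reshuffle.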

  • So, we've talked about code complexity and system

  • complexity.

  • Another such complexity is product complexity.

  • Now, that comes when there isn't a clear vision for where

  • the product is going or there isn't enough product focus.

  • Product complexity leads a lot to code complexity

  • and system complexity because it means

  • you have to write more code and build more systems in order

  • to maintain and support the features that you're building.

  • And when you have a wide surface area in your product,

  • every new feature that you want to add

  • becomes harder to add because you

  • have to think about how it fits into the existing context of all

  • the other features.

  • There's more code branches you have to look over.

  • There's more issues and bugs that are filed.

  • There's more context switching that you have to deal with

  • to go back and forth from feature to feature.

  • And all of the time that you spend doing this context

  • switching is time that you're not

  • investing in abstractions or

  • paying off technical debt.

  • And so, those are all things that end up slowing you down.

  • And the last sort of complexity is

  • organizational complexity.

  • Because when you have all of this complexity in your code

  • and your systems and your product,

  • it means that you need a larger team to actually deal

  • with and manage them all.

  • And that means you have to spend more

  • time hiring and interviewing.

  • You spend more time training, onboarding, mentoring

  • new engineers.

  • And so, there's another tax on the organization as well.

  • Now alternatively, some teams decide

  • OK, let's just split our team into smaller teams

  • and have each team sort of manage a separate component.

  • Or maybe even have a one person team for each component.

  • And there are costs for that as well.

  • It's less motivating to work on your own.

  • If you get stuck, it can become more demotivating.

  • The quality can go down because it's harder

  • to get feedback from your coworkers

  • if you don't have the same shared context.

  • And your bus factor also goes down.

  • I remember when I was working at Ooyala,

  • there was one point in time where

  • I was the only person who was knowledgeable about this piece

  • of software called the logs processor.

  • So, the logs processor was responsible for processing

  • all of our customers' data and then producing the reports

  • that they could actually read and sort of understand

  • what their viewers were doing.

  • And I was taking a much needed vacation down in Hawaii.

  • I was hiking around one of the volcanoes there.

  • And then, I suddenly get a text, and it's from the CTO.

  • And it says, logs processor is down.

  • I said, well that sucks.

  • Unfortunately at that time, I was the only person

  • who knew how to fix it.

  • And also unfortunately, there's no Wi-Fi in volcanoes.

  • And so, that wasn't a great situation.

  • It was bad for me because my vacation was interrupted.

  • It was bad for the team because they depended on me.

  • And it was bad for our customers because they couldn't

  • get the reports and the data that they needed

  • to operate their business.

  • And so, there are all these costs of complexity

  • on these different levels.

  • On the level of code, systems, product, organization.

  • And they all come from this core thing

  • where we added more complexity than we needed.

  • And so, that's why this theme of asking ourselves

  • what's the simplest solution to this problem is so important.

  • Because it allows us to nip this complexity

  • explosion in the bud at the very beginning.

  • And so, minimizing operational burden is super important.

  • And the last high leverage activity I like to close on

  • is actually building a great engineering culture.

  • I mentioned earlier on how I've spent a lot of time

  • on engineering hiring and recruiting.

  • And one of the questions that I'll

  • ask a lot of the candidates I interview

  • is, what's one thing you like and one thing you dislike

  • about your previous company?

  • And over time, this really helped

  • paint a picture of what exactly attracts an engineer

  • to a company.

  • What are the things that turn them off?

  • And it should actually come as no surprise

  • that engineers, they like to work in environments where

  • they can focus on high leverage activities.

  • They want to work in environments

  • where there's a chance to focus on and optimize on learning.

  • They want to work at places where everyone

  • is very productive and getting a lot of things done,

  • and there's good tooling and good iteration speed.

  • They want to work at places where work isn't going

  • to waste because assumptions and hypotheses weren't

  • tested early on.

  • They want to work at places where they can actually

  • build new features as opposed to just maintain and pay off

  • taxes for old existing ones.

  • Now, Google is a sort of great place

  • in that there's a very strong sense of engineering culture.

  • But this is also something that you can focus on

  • within your own team.

  • On a team level, you can really ask,

  • what are the highest leverage activities

  • that you can start working on?

  • You can think about how can you invest in your own learning

  • and in your own growth to really sort

  • of further your own career?

  • You can think about what tools can you build

  • or what workflows can you improve for both yourself

  • and for your team to really improve your iteration speed?

  • Or think about a project that you're working

  • on that's a multimonth project.

  • Are there ways that you can break it down

  • into testable hypotheses that you can validate to make sure

  • that you're actually building the right thing for customers,

  • for users, or other members of the team?

  • Or think about if you're working on something

  • that seems a little bit too complex,

  • is there a way to actually make it simpler

  • and pay off these future taxes on complexity?

  • Or think about, how can you improve the engineering culture

  • in your own team so that new engineers who join the team

  • can ramp up much more quickly?

  • All of those high leverage activities

  • will make us more effective engineers.

  • I'll be happy to do some Q&A. Thank you all for coming.

  • [APPLAUSE]

  • I also have a few limited copies of the book

  • that we're going to hand out.

  • Maybe Cindy or [? Chang ?] can sort of hand them out as people

  • ask questions.

  • AUDIENCE: Unit tests.

  • I kind of read in your book and it kind of

  • presented unit tests as a high leverage kind of activity.

  • But at the same time, kind of like intuitively

  • when I think about writing unit tests,

  • I feel like it's kind of slow.

  • I got to write all these to cover all this code.

  • And if I want to refactor it, I got to change this unit test.

  • So, iteration speed kind of slows down.

  • Maybe you could talk a little bit

  • about exactly why unit tests are really

  • high leverage in spite of that kind of feeling that I get?

  • EDMOND LAU: Sure.

  • Great question, [INAUDIBLE].

  • So the question is, how do you reason about unit tests

  • with this framework of leverage?

  • And really, I think it's important to not be

  • religious about unit tests.

  • I think there are some teams that focus on, OK, we

  • need 100% coverage of our code.

  • And I don't think that's necessarily the best approach.

  • It's a lot better to think about which types of tests

  • would be high leverage?

  • Which types of tests would be easy to write, and yet

  • pay off high rewards?

  • And the areas of code bases that tend

  • to benefit from a lot more tests are

  • code paths that either see a lot of traffic,

  • code paths that a lot of engineers touch,

  • or code paths that are a lot riskier.

  • Where, if something were to go wrong, data would get corrupted

  • or the cost would be large.

  • In each of those areas, it can be

  • really beneficial to actually have a set of unit tests.

  • On the other hand, if you have a piece of code that's

  • sort of off to the side, not many people use it,

  • it's not touched that often, in those cases,

  • writing a unit test there might not have that strong a payoff.

  • And so, I would sort of break down unit tests

  • into different priority buckets and sort of focus on the ones

  • where there is more traffic, people

  • are changing it a lot more often, or it's a lot riskier.

  • Because those will have sort of a better rate of return

  • on your time.

  • AUDIENCE: So, it seems to me you have a lot of experience

  • with working with startups and small companies.

  • And I wonder actually how does your advice

  • translate to a bigger organization such as Google?

  • As you were talking through strategies,

  • I was thinking in my head about a particular example

  • that illustrates this strategy.

  • And for a big company, for example,

  • for the reasons you presented, organizational complexity kind

  • of becomes unavoidable.

  • So, it seems to me from your presentation,

  • you give advice for the more bottom

  • up approach, which Google is a lot like.

  • But for a big organization, actually

  • maybe some top down strategies are needed as well.

  • For example, for using product complexity and organization

  • complexity.

  • Do you have any thoughts on this?

  • EDMOND LAU: Yeah.

  • So the question, to rephrase a little bit,

  • is how do these ideas of focusing on complex--

  • or addressing organizational complexity apply to larger

  • companies like Google?

  • And I think it's a great question.

  • I think some of the strategies

  • by which an individual contributor,

  • an individual engineer, can sort of influence

  • the complexity in large organizations

  • come, I think, a lot from data.

  • So, I know Google is a very data driven company.

  • And a lot of times, if you can ask the right questions

  • and back that up with any data from users, data from activity,

  • you can sort of shift organizations or products in a way

  • that streamlines them.

  • So, if you see a source of complexity from a feature

  • and you gather data and it seems like it's not actually

  • being used, I think that sort of forms

  • a more compelling case for decision makers

  • that, maybe this is an area of the product that isn't really

  • worth all the engineering effort that's being spent on it.

  • And so, I think data is definitely

  • one area that can be used to amplify your voice when

  • you are making arguments of that nature.

  • I think another way that you can sort of reduce

  • complexity in these situations is to talk to the decision

  • makers and get clarity on what the focus

  • and what the mission is.

  • And make sure that the projects that you work on

  • are ones that sort of align with that mission and that focus.

  • Because if you're working on something that seems peripheral

  • to that mission or focus, then you're

  • sort of introducing additional sources of complexity

  • to that mission, that focus, that the organization has.

  • So, you want to at least make sure

  • that your own work is aligned with the goals of the rest

  • of the organization.

  • AUDIENCE: So, I have two questions having

  • to do with the transition from engineer

  • to engineering manager, so maybe they

  • are within the scope of the book.

  • One is about mentoring, the other is about technical debt.

  • So, first would be, what are your thoughts on mentoring

  • in terms of leverage?

  • And the second is, how do you recommend

  • convincing an engineering manager

  • that the code complexity in your project

  • is worth taking a sprint break from delivering to actually

  • refactor something from scratch into a much

  • simpler version?

  • EDMOND LAU: Good question.

  • So the two questions.

  • One is, is mentoring high leverage?

  • Two is, how does refactoring and paying off technical debt

  • fit into this?

  • So with regards to the first one, mentoring,

  • mentoring is definitely, I think, extremely high leverage.

  • I spent a year and a half when I was at Quora building out

  • the onboarding and mentoring programs

  • because I realized that even if an engineer were to spend

  • one hour a day training a new hire,

  • that only adds up to something like 20 hours.

  • That's like 1% of the new hire's total hours in that year.

  • And yet it can have a dramatic impact

  • on that engineer's output.

  • And so, it's super valuable, I think,

  • to invest time in training your teammates and making sure

  • that you tell them and teach them

  • what the best practices are before they spend

  • a lot of time writing code that's

  • not following the right guidelines

  • or following bad designs.

  • And so, mentoring is super valuable.
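
As a back-of-the-envelope check on that 1% figure (the workday and work-year numbers below are standard assumptions, not figures from the talk):

```python
# One hour a day of mentoring over roughly a month of workdays,
# compared against a standard 40-hour, 50-week work year.
mentoring_hours = 1 * 20          # 1 hr/day * ~20 workdays
work_year_hours = 40 * 50         # 40 hr/week * 50 weeks = 2000
fraction = mentoring_hours / work_year_hours
print(f"{fraction:.1%}")          # -> 1.0%
```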

  • And the second question is, how do you

  • convince engineering managers to actually focus

  • on or to budget time for technical debt?

  • And I think some of the best strategies I've seen for teams

  • is to rather than just to have that conversation

  • and spend all that energy convincing your engineering

  • manager after, like say, every project,

  • to instead just set a

  • consistent budget for that.

  • So some teams, for instance, every month

  • will have one week where it's code cleanup week.

  • Or after every sprint, they'll have some number of days

  • where they spend refactoring and cleaning code.

  • That way, you don't have to have this sort

  • of conversation every time after every project.

  • And in addition, you can use that time and prioritize

  • which areas of the code actually need to be refactored.

  • Which areas of code need to be cleaned up.

  • And so, I think it's more useful to have

  • a more general conversation:

  • to hit the goals for the sprints,

  • we're going to make some trade offs.

  • Those trade offs will lead to some debt.

  • And we want to have a consistent schedule where

  • every so often, we budget some number of days

  • to basically clean up that debt.

  • Yeah, it's a little tricky if you're a new engineer.

  • I would say that even as a new engineer,

  • the way that you should approach technical debt

  • is to focus on the areas where reducing that debt

  • allows you to be more productive.

  • So, if you're reducing debt in areas of the code

  • that you have to deal with yourself,

  • a lot of times making

  • that investment actually pays off on your own project

  • because you can get your project done sooner.

  • That's sort of, I think, a very easy way

  • to justify that time spent because you can provably

  • demonstrate that it actually accelerates your own project

  • speed.

  • And so, I would start there.

  • AUDIENCE: Some of the advice mentioned

  • tackling unknowns early in the project lifecycle and some

  • of the other advice was focusing on the simple things.

  • I wonder, do you have any advice on how

  • to identify unknowns that could be removed by focusing

  • on simplicity versus unknowns that

  • deserve their own investment and investigation?

  • EDMOND LAU: So, the question is, how do you

  • differentiate between unknowns that need to be investigated

  • and unknowns that you can potentially cut by focusing

  • on a simpler solution?

  • I don't have a simple answer for that but I

  • think some of the approaches that you can take

  • is to think about what are your goals for this certain project?

  • And if you didn't have to design a grand system, what

  • would be the simplest solution that you

  • could think of that could maybe hit most of those goals?

  • Because a lot of times, the complexity

  • comes from trying to address the last 10% or the last 5%.

  • And if you would take a more pragmatic approach,

  • maybe you can ask instead, what covers

  • 80% of the use cases?

  • Can I come up with a solution for that?

  • And then maybe for the other cases,

  • can we do something different rather than making the core

  • solution more complicated?

  • And I think that's something that-- that's

  • a theme that I've definitely seen

  • other teams do with success.

  • Because then for the most part, they're

  • dealing with something that is simple to reason about.

  • AUDIENCE: So I have a question, a lot

  • of cases we have dependencies on other teams.

  • For example, on my team, sometimes

  • there's a problem in the [INAUDIBLE] Bluetooth stack.

  • And so either I have to talk with them, or

  • I have to look into the Bluetooth stack by myself.

  • Which sometimes can be harder because I don't really

  • have a lot of expertise on the low level code.

  • So, can you talk a little your experience

  • about how do you deal with team dependencies?

  • How to talk and actually cooperate with other teams?

  • EDMOND LAU: So the question is, how do you

  • deal with interteam dependencies where you're sort of blocked

  • on someone else?

  • So a lot of times when you're blocked on someone else

  • and they're not actually working on this thing

  • that you really care about, that happens because there's

  • a misalignment in incentives.

  • Somehow your team has a certain goal.

  • The other team has a certain goal.

  • And those goals, maybe our priorities don't really align.

  • And so, I think structurally, a good approach

  • is to try to make those goals align.

  • To see if you can agree that a certain focus is

  • a priority for both teams this quarter

  • and sort of agree from that perspective.

  • If they don't align, I think at least

  • having that conversation tells you then

  • where-- the part that you're blocked on,

  • where that fits on their priority list.

  • And so, at least having that conversation,

  • you know that maybe this is only like a medium to low priority

  • for the other team.

  • At that point, then you can make this decision of well,

  • do I wait for them to work on that?

  • Or do I then-- maybe I ramp up on that Bluetooth stack

  • to actually make progress.

  • But I think it really sort of starts

  • from that communication about goals and incentives.

  • If they align, then you can sort of rely more on that team.

  • If they don't align, you do have to make that trade off of,

  • maybe it's worth my time to learn this Bluetooth stack

  • so that I can get the thing out the door sooner rather

  • than waiting for them to get to it on the bottom

  • of their priority list.

  • AUDIENCE: So, you mentioned the 60 to 80

  • hour work weeks not being sustainable.

  • I was wondering what your typical work week is like?

  • And if you ever work weekends anymore?

  • And you also mention learning.

  • I was wondering how much time per week

  • do you set aside for learning.

  • Is it an hour on the weekend that you spend

  • or is it five minutes a day?

  • How do you incorporate that into your daily routine?

  • EDMOND LAU: So nowadays, I still work at a startup.

  • I work at a startup called Quip.

  • And we actually work very normal hours,

  • so we work maybe 40 hour weeks.

  • I do spend a lot of personal time

  • on just sort of investing in things like the book

  • or these talks or other ways I can teach

  • engineers to be more effective.

  • Mainly because that's something I'm excited to do.

  • In terms of how much time I invest in learning,

  • I personally love to read, so I end up

  • just reading a lot of books.

  • And I think books are a really large source and wealth

  • of information for me.

  • I don't know how much time I spend reading,

  • but I [INAUDIBLE] spend a lot of time reading.

  • MALE SPEAKER: Thank you very much for coming [INAUDIBLE].

  • [APPLAUSE]
