What's the difference between SDE and SDET at Microsoft?

I was exchanging email with a candidate last night who was confused about the difference between what dev and test do at Microsoft, and as I tried to compose my thoughts about what it is that SDETs do, I thought it would be good to share this on my blog.

You can envision the SDET job along two vectors: one is feature work, the other is tools work. We have a product to test and we need to eliminate defects, so we spend time on both proactive and reactive activities, helping PMs specify the right thing as well as validating what our SDE counterparts built. Here’s the typical workflow for the feature work we do.

Feature Work

  1. PM writes a spec for a feature. PM, SDET, and SDE sit down and discuss the feature, make changes and get general agreement on the spec.
  2. SDET takes the spec, writes a test spec based on it, describing the testing in general terms. The test spec also defines an object model that you can think of as an API to the feature.
  3. PM, SDE, and SDET sit down and discuss the feature spec and test spec, and agree that the OM defined by test is correct.
  4. SDE begins to implement the feature.
  5. SDET implements the OM, and then starts writing tests against the OM (a rough sketch of this layering follows this list).
  6. As features come online
    1. SDE and SDET together test the feature: running manual and automated tests, and writing unit tests.
    2. SDET does ad hoc buddy testing against the feature to get more testing done before check-in.
  7. Post code complete, SDE fixes bugs.
  8. Post code complete, SDET analyzes automated test failures.
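
To make the OM idea from steps 2 and 5 a little more concrete, here is a minimal sketch in Python. The feature, the control names, and the `ui` and `file_system` handles are all hypothetical stand-ins (not any real library), and real OMs are much richer; the point is just the layering: a physical OM that drives the UI, a logical OM that exposes the feature as an API, and tests written against that API.

```python
# Hypothetical sketch of a test object model (OM) for a "Save File" feature.
# The physical OM wraps raw UI automation; the logical OM exposes the feature
# as an API; tests are written against the logical OM, not raw controls.

class PhysicalSaveDialog:
    """Physical OM: knows about concrete controls and how to drive them."""
    def __init__(self, ui):
        self.ui = ui                                 # stand-in for a UI automation library

    def type_file_name(self, name):
        self.ui.find("FileNameEdit").set_text(name)  # control names are made up

    def click_save(self):
        self.ui.find("SaveButton").click()


class SaveFileFeature:
    """Logical OM: an API to the feature, independent of UI details."""
    def __init__(self, physical):
        self.physical = physical

    def save_as(self, name):
        self.physical.type_file_name(name)
        self.physical.click_save()


def test_save_creates_file(ui, file_system):
    """A test written against the OM rather than against raw controls."""
    feature = SaveFileFeature(PhysicalSaveDialog(ui))
    feature.save_as("report.txt")
    assert file_system.exists("report.txt")          # verify an observable result
```

When the UI changes, only the physical layer needs updating; the logical OM and the tests written against it stay put, which is a big part of why we define the OM up front.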

While there’s a lot to get excited about in both jobs (I can’t believe they are actually paying me to do what I do in my spare time for fun!), there’s drudgery in both as well. Test case writing, automation analysis, and ad hoc testing for the 1000th time can be mind-numbing, but fixing bugs for months at a time is equally painful.

Tools Work

To extend our existing automation infrastructure, or to do testing in additional vectors, we build test tools. Lots of them. Test labs can have hundreds of machines, and if we’ve written the tests correctly, we can determine the current quality of the build far more efficiently than with manual testing. So while we do some manual testing, the focus is on building test automation that can be used milestone to milestone, release to release. Obviously, the quality of the tests and test infrastructure is super important. As the saying goes: who tests the tests, and who tests the tests that test the tests? Tests that don’t test properly can give false positives, and can actually be worse than no test at all because of the false sense of security they encourage. And infrastructure that is poorly designed will give you big problems as you approach v2, v3, and so on, as well as during sustained engineering on your currently released products. So we have code reviews for every check-in, and design reviews for all tool work.
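
To illustrate the false-positive point, here is a made-up example (not code from any of our tools); the `feature` and `file_system` objects are hypothetical. The first test reports a pass whether or not the feature works; the second verifies an observable result and lets unexpected failures surface.

```python
# A made-up illustration of a test that "tests" nothing: it reports a pass
# whether or not the feature works, because it never verifies anything and
# swallows every failure.

def bad_test_copy_file(feature):
    try:
        feature.copy("a.txt", "b.txt")
    except Exception:
        pass                                    # failures are silently eaten
    return "PASS"                               # nothing was actually verified


def good_test_copy_file(feature, file_system):
    feature.copy("a.txt", "b.txt")              # unexpected exceptions fail the test
    assert file_system.exists("b.txt")          # verify the observable result
    assert file_system.read("b.txt") == file_system.read("a.txt")
    return "PASS"
```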

Some of these tools we created ourselves, for some we took a snapshot of the code from other teams and made our own modifications, and others are shared amongst many teams. Here is a list of some of the tools we build to test:

  • Lab, machine, and run management – infrastructure that drives our test automation in our test labs: machine re-imaging and configuration, automated logins and installs of prerequisites, pass/fail reporting, run management, machine allocation, etc.
  • Test harness that allows functional or unit tests to be executed. Builds test binaries and reports pass/fail results to the run management system.
  • Team libraries – we build tons of our own libraries; for my team, we divide them like this:
    • Physical OM
    • Logical OM
    • Internals OM – we build add-ins into the product to get at internal data structures to determine current state
    • Extensions to those OMs that help us write fewer tests that cover more of the product, and lower maintenance costs, through ideas like the following (a toy sketch of the pairwise idea appears after this list):
      • systematic variation of common data and behaviors
      • loosely coupling test execution and test verification to make verification more comprehensive and reusable
      • pairwise testing tools for combinatorial testing  
  • Buddy build tools that allow SDEs to run specific sets of tests on their desktops (or offload them to the lab and get an email with the results) before check-in.
  • Basic UI automation libraries - we take these building blocks and build our Physical OM.
  • Static Analysis for
    • security
    • style guidelines
    • common bugs in looping/branching
  • Code Coverage/Branch Analysis
  • Quality dashboard – web portals that track key metrics:
    • Code coverage
    • Test automation pass/fail rates
    • Bug Stats: Current total defects, defects per developer, incoming defect rate, fix rate, regression rate.
    • Code base size
    • Performance
  • Auto-analysis tools
  • Performance Testing tools
  • Stress / mean time to failure testing
  • Watson crash bucket analysis
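
To give a feel for the pairwise/combinatorial bullet above, here is a toy greedy all-pairs generator in Python. It is only an illustration of the idea, not our actual tooling, and the parameter names and values are made up: rather than running the full cross product of configurations, it picks a small subset of cases that still covers every pair of parameter values at least once.

```python
# Toy greedy all-pairs (pairwise) generator: picks a small subset of the full
# cross product that still covers every pair of parameter values at least once.
# An illustration of the idea only, not our actual tooling.
from itertools import combinations, product

def pairwise_cases(parameters):
    """parameters: dict mapping parameter name -> list of possible values."""
    names = list(parameters)
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names))]

    def pairs_of(case):
        # all (param, value) pairs across two different parameters in this case
        return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

    uncovered = set().union(*(pairs_of(c) for c in candidates))
    cases = []
    while uncovered:
        # greedily take the candidate covering the most still-uncovered pairs
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        cases.append(best)
        uncovered -= pairs_of(best)
    return cases

configs = {
    "os":      ["WinXP", "Win2003"],
    "browser": ["IE6", "IE7"],
    "locale":  ["en-US", "ja-JP", "de-DE"],
}
print(len(list(product(*configs.values()))))  # 12 exhaustive combinations
print(len(pairwise_cases(configs)))           # far fewer cases, every pair still covered
```

Real pairwise tools are much smarter about case selection and constraints between parameters, but even this sketch shows why the technique keeps test counts (and maintenance) down as the number of configurations grows.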

When I think about SDE vs SDET, what I tell folks is that:

  • You are better suited to be an SDE if
    • You like to get really deep in one technology space for long stretches
    • You don’t stop until you write the perfect algorithm, or the most elegant code
  • You are better suited to be an SDET if
    • You like system integration type work:
      • You take the technology that this team built, make use of it, and tie it to another piece of technology, etc.
    • You may have been a TA and graded other people's code, or were the guy in group projects who enjoyed and was good at poking holes in other people's designs
    • You have strong big-picture thinking and focus your energy on solving the whole problem.
      • You find yourself coming up with “good enough” solutions to problems. Your solutions may or may not be elegant or perfect, but you are happy with them and you’ve moved on to the next part of the problem.

So I describe the SDE problem space as narrower and deeper, and the SDET problem space as broader but not as deep. Some SDETs have more of a penchant for tool building and less for feature work, or the other way around. In my group, I try to balance the two so that we get smart people thinking about the hard problems within feature work, giving them good cross-functional exposure and scope of influence, while still mixing in a healthy amount of tools work. Other teams (many I've been on in the past) structure it differently, but my personal feeling is that this balance gives people the right amount of both for healthy career development.
