My trusty Chambers dictionary says manual means “of the hand or hands”; “done, worked or used by the hand(s), as opposed to automatic, computer-operated, etc.”; “working with the hands”. It’s true that sometimes we use our hands to tap keys on a keyboard, but thinking of that as “manual testing” is no more helpful than thinking of using your hands on the steering wheel as “manual driving”. We don’t say that Itzhak Perlman, using his hands, is performing “manual music”, even though his hands play a far more important role in his music than our hands play in our testing. If you’re describing a complex, cognitive, value-laden activity, at least please focus on the brain.

We might choose to automate the exercise of some functions in a program, and then automatically compare the output of that program with some value obtained by another program or process. (A minimal sketch of such a check appears at the end of this post.) It’s cool to know that the machine can perform a check very precisely, or bazillions of checks really quickly. McCracken knew that in 1957, and so did Alan Turing. But whatever you call the automated activity, it’s not really the interesting part. The risk analysis, the design of the check, the programming of the check, the choices about what to observe and how to observe it, the critical interpretation of the result, and other aspects of the outcome: those are the parts that actually matter, the parts that are actually testing.

When it comes to software, there is no manual testing. Here’s a long and excellent rant by pilot Patrick Smith, who for years has been trying to address a similar problem in the way people talk (and worse, think) about “manual” and “automated” in commercial aviation.

The categories “manual testing” and “automated testing” (and their even less helpful byproducts, “manual tester” and “automated tester”) were arguably never meaningful, but they’ve definitely outlived their sell-by date. Can we please put them in the compost bin now? Thank you.
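As promised above, here’s a minimal sketch, in Python, of the kind of automated check described earlier: the machine exercises a function and compares its output with a value obtained by another process. The function names (`discount_price`, `reference_discount`) and the inputs are entirely hypothetical, invented for illustration; the point is how little of the work lives in the part the machine does.

```python
# A minimal sketch of an automated check. Both functions are
# hypothetical stand-ins: one plays the product under test, the
# other plays an independent process that supplies expected values.

def discount_price(price: float, rate: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - rate), 2)

def reference_discount(price: float, rate: float) -> float:
    """Hypothetical independent process providing expected values."""
    return round(price - price * rate, 2)

def run_checks() -> None:
    # Choosing these inputs is itself a product of human risk analysis;
    # the machine just runs whatever it's given.
    cases = [(100.0, 0.25), (19.99, 0.1), (0.0, 0.5)]
    for price, rate in cases:
        actual = discount_price(price, rate)
        expected = reference_discount(price, rate)
        # The machine can only report agreement or disagreement.
        # Deciding what a mismatch *means* is where testing happens.
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: discount_price({price}, {rate}) = {actual}, "
              f"expected {expected}")

if __name__ == "__main__":
    run_checks()
```

Notice that the comparison itself is trivial. Everything around it, from deciding what’s worth checking to interpreting a failure, came from a person.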