I enrolled in the University of Pennsylvania's online Master of Computer and Information Technology (MCIT) program in Spring 2021, and since then a number of people have reached out to ask me about applying to MCIT. Since most potential applicants have the same questions, and I find myself repeating the same answers, I figured I might as well put my thoughts all in one place.
If you are at all serious about applying to MCIT Online, you need to read the Penn Engineering Online FAQs. Please.
I understand if you have questions or doubts because you come from a different school system and need advice on recommenders, writing your personal statement, taking the GRE, or anything of the sort. I understand if you want to hear a student’s perspective on some of the official recommendations that the program makes. I do my best to be helpful. But if your question can be answered by a quick Google and a little bit of research, I’m going to doubt whether you’ve considered the commitment you are about to make.
All opinions are my own, and do not reflect those of the MCIT program. I’m just a student who applied, got admitted, and enrolled. I don’t have a privileged view inside the admissions committee.
Your application should show either a previous degree in maths, physics, engineering, or another field with a similarly heavy quantitative component, or a high quant score on the GRE. MCIT Online does not publish GRE statistics, but the on-campus MCIT program does.
For the on-campus Fall 2021 admissions cycle, the average (median? I guess) GRE of admitted students was 162 Verbal, 168 Quantitative, 4.3 Analytical Writing. The number that really matters is the Quant score: 168Q out of a maximum of 170 is around the 92nd percentile. That's higher than in previous years, but honestly not by much: the average Quant score of admitted students hovered between 165 and 167 from 2013 to 2020.
Tell a story of how a program like MCIT Online fits into your personal and professional goals. Perhaps you’re a career changer moving into software development or data science. Perhaps you work in a tech-adjacent role (product manager, business analyst, legal or public policy in a tech company) and want to understand the tech domain better. Perhaps you are already working in software development or data science and want to fill in knowledge gaps. Heck, maybe you just want to learn.
If you consider other professional graduate programs such as the MBA, MD, JD, etc. there’s often an implicit requirement that you need to have a certain amount of a certain type of work and/or internship experience. That’s not the case here: MCIT students really do come from all sorts of academic and professional backgrounds. The important thing is that you can weave a story of how studying computer science will fit into your professional life.
Proven ability to learn academically in an online setting
If you’ve done any university-level courses on Coursera, edX, or any online learning platform, you should list them.
If you have not taken any such classes online, I would suggest doing a course like Harvard’s CS50x and getting the verified certificate. It’s useful both for admissions, and for assessing how you will do in an online learning environment.
Obviously the world has changed since MCIT Online was launched, and many students have had to learn online out of necessity. Even so, I think it’s useful to do online asynchronous courses on top of any online classes you may have had as a result of Covid. Instructional design and expectations for a class that’s designed to be online, as opposed to one that had to be moved online at short notice, are going to be different. Online learning favours learners who are independent, self-motivated and who know when and how to proactively ask for help. I think it’s instructive to figure out for yourself if you’re the kind of learner who suffers from the online environment or who benefits from it.
There’s also a question of whether online learning platforms such as Codecademy, Educative or Udemy count. My honest answer is that I don’t know how the admissions committee views them in comparison to Coursera and edX. However, my own experience has been that the MCIT Online learning experience is closest to Coursera and edX courses, in the sense that MCIT Online classes are academically-oriented and tend to be lecture- and assignment-driven. You should make sure that this is a style of online learning that you enjoy.
Strong recommendations
Choose your recommenders carefully. One of the worst things that can happen to your application is a lukewarm recommendation.
I suggest reading the Letters of Recommendation FAQ thoroughly to get a feel for what the program is looking for in the letters of recommendation.
Who should I ask for letters of recommendation?
You must choose at least two recommenders. Three is ideal but not always possible (I had two). In my opinion, the most important reason for nominating three recommenders is to make sure you don’t get any last-minute surprises from a letter-writer who doesn’t submit their recommendation before the deadline. Another good reason to find three letter-writers if you can is that they can address different dimensions of you as an individual: perhaps one person can talk about you in an academic setting, another in a professional setting, and the third in the context of a community or volunteer project.
What I typically suggest is that at least one recommender should be academic, ideally a professor who knows your work well. This could be a thesis advisor, a professor you’ve TA’ed or done research for, a professor whose class included a substantial assignment, or a professor with whom you’ve taken multiple classes. The academic recommender should be someone who can talk about your qualities as a student: do you work hard to understand the material, do you ask good questions in class, do you produce good insight in your work, etc. Ultimately, MCIT is an academic program, and an academic reference will likely be most reflective of how well you will do in an academic setting.
The second and third recommenders can also be academic, or they can be individuals who know you from work or other professional context (e.g. your manager, your web development bootcamp instructor, a leader at a program you volunteer with). Ideally this person would talk about your ability to succeed professionally: do you deliver when needed, can you manage your time and your responsibilities, how do they see you growing as a technology professional?
There are a few resources that I think are useful for navigating the letters of recommendation:
Computer science professor Kyle Burke has an excellent FAQ on asking him for recommendations. Of course, your mileage will vary if your professor is not Kyle Burke, but it’s still a good overview of what kind of information to provide to your recommenders.
If one of your recommenders is happy to help you out but is not sure how to go about writing your letter, take a look at UC Berkeley’s advice to GSIs on writing letters of recommendation. It’s targeted at graduate students who may be writing letters of recommendation for the first time, but most of the advice (particularly “Paragraph by Paragraph” and “Dos and Don’ts”) is easily adapted to other contexts such as work or community service.
UC Berkeley also has Guidelines for Writing Letters of Recommendation broken down by the type of graduate program (MCIT would fall under “Academic Graduate School”), but I find the advice given here to be a bit more generic and less explicit about what makes an effective recommendation letter.
Which online courses should I take before applying?
My own preference is for Harvard's CS50x, which I think is an excellent introductory course that will give you both technical skills and a broad understanding of computer science. It's designed to be a first undergraduate course in computer science, so it needs to cater to students who may never take another CS course in their life and, at the same time, adequately prepare students who intend to major in CS. Somehow, it succeeds.
The wonder of MOOCs is that there are so many amazing courses available for free or at very reasonable cost. Other MOOCs that I’ve heard great things about are:
These are a little more advanced, though, so treat them as the suggestions that they are, and not as a checklist of courses you have to do. The important thing here is simply to demonstrate that you’ve sought out some CS learning on your own.
How long should I spend on my application?
According to my Notion page history, I created my MCIT “Essay” page on April 6, 2020. The admission deadline for the Spring 2021 semester was July 31, 2020, so the whole process took me about four months.
The parts of the application that have the longest lead times are:
GRE preparation (4-12 weeks, depending on your preparation and test-taking ability)
Letters of recommendation (1-2 months advance notice for your recommenders)
If you’re only getting started on the application with just two months before the deadline, I would consider that to be too tight. It’s doable, but it increases the likelihood that you’ll submit something less than representative of your strength as a candidate.
Another wrinkle in your planning is whether you foresee taking the GRE more than once. I discuss this further under How should I prepare for the GRE?
Do I need to take the GRE?
Yes.
If you need to ask, the answer is yes.
The official answer is this:
No, the GRE is optional. But there are a few scenarios in which we strongly recommend taking the GRE:
You have not taken any quantitative courses (such as math or physics).
You feel the grades that you received in your bachelor’s program do not represent your current abilities and are lower than you would like them to be.
You received your undergraduate degree 15 or more years ago.
The way I see it, there is virtually no situation in which it is beneficial to skip the GRE. The best that can be said is that for some people, skipping it won’t actively hurt your application. That’s the class of applicants who already have strong evidence of quantitative ability on their transcript or résumé.
If you graduated from a mathematics / engineering / physics program with a 3.8 GPA less than 15 years ago, sure, you don’t have to do the GRE. If you build quantitative financial models for a living, you don’t have to do the GRE. If you majored in film but have an A+ in Real Analysis on your transcript, you don’t have to do the GRE. (I was a film major, I can make jokes about film majors and maths.) You already know if you fall into this category.
The truth is, if you do not have a quantitative background, the ability to use the GRE to prove your quantitative bona fides is a godsend. Most schools want to see a college-level maths class on your transcript (Bath and OSU both do, for example), if not college-level CS classes. I have neither, and if you don't either, the GRE is not optional for you.
Yes, standardised testing sucks. No, the GRE does not predict graduate school success. But let’s also be real here: in the MCIT program, there is at least one proctored exam per course. Each of these exams is about as long as the GRE, and requires the same kind of exam skills as the GRE. If preparing for a proctored, standardised exam is a deal-breaker for you, you may not enjoy graduate school very much.
How should I prepare for the GRE?
I spent about a month’s worth of evenings and weekends preparing for the GRE and took it once, coming out with a score I was more than happy with.
What I did was to buy Manhattan Prep’s 5lb. Book of GRE Practice Problems, take a diagnostic test, identify my weakest areas, and spend most of my time practising them. I probably spent 80% of my GRE preparation on statistics questions. My quantitative scores on practice tests at the end of my preparation were the same as my actual exam quant score.
You may need more time to prepare: most GRE preparation sites suggest 4 weeks as a minimum and an upper bound of 12-20 weeks. Because the only sub-score that really matters is the Quantitative score, you don't need to spend 20 weeks cramming GRE vocabulary, but you do want to give yourself enough time to be confident under exam conditions.
To retake or not to retake
Sometimes things go wrong. Your score isn’t as good as you want it to be, you fell ill on the day of the exam, you blanked out under exam conditions, whatever.
Retaking the GRE is pretty common, and there’s no harm in planning for it. You don’t have to take the GRE multiple times, but it’s nice to have that option if you bomb the first time. After each GRE test, you must wait 21 days before you can take the test again. That means that you should aim to take the GRE at least three weeks before the application deadline, in order to give yourself enough time for a re-take.
Why MCIT Online and not another program?
When I was shortlisting programs, my criteria were:
Must accept students without any prior CS background (this rules out Georgia Tech's OMSCS, among others)
Must be doable part-time
Must not be more expensive than NUS’s MComp program at S$60,000+
Must be doable completely remotely (NUS is the exception, as I can commute to it)
Reasonable reputation as a research university (lots of schools offer online and/or part-time CS programs now, but I wanted to prioritise schools with a stronger reputation first)
The complication is that my undergraduate transcript does not have a single maths class. No Calc I, no “Math for Non-Majors”, no “Great Ideas in Mathematics”. This makes meeting the pre-requisites for many programs a bit of a challenge.
I could apply to MCIT without doing any prerequisite classes, as long as I had a good GRE score. To apply to Bath or OSU, I would have needed to take a college-level Calculus class first, and to apply to NUS, I would have needed to take three certificate CS classes first. The decision came down to this:
Ease of application (from best to worst): Penn, Bath/OSU, NUS
Cost (from cheapest to most expensive): Bath, Penn, OSU, NUS
Study mode (from best to worst): Penn/Bath/OSU, then NUS (in-person is actually a negative here, because of the commute time and the inflexible schedule)
Strength of CS program (from best to worst): NUS, Penn, Bath/OSU
That’s not to say that Bath or OSU have bad CS programs. Rather, I culled several programs that also met my criteria, but that had weaker reputations than these four. NUS and Penn are simply on a different level from Bath and OSU when it comes to computer science.
My plan was to apply one at a time, so if I had been rejected from Penn, I would have taken a Calc class and applied to Bath, then OSU, and then finally signed up for the certificate classes at NUS and pursued the CS program there. Since I got into Penn, I didn’t end up applying anywhere else.
Other questions
I hope this is useful to anybody planning to apply to MCIT Online. If you have any other questions, though, feel free to contact me. If I think your question is potentially relevant to other applicants, I may add it here.
Yesterday, I wrote a post about why and how to write tests, inspired by my most recent project. The gist of it is that the key purpose of automated tests is to provide fast feedback. I ended the previous post with a remark that the need for fast feedback is the reason test frameworks exist.
You've used a test framework, but how well do you understand what it's doing? I know that before this social impact project, I had no clue. On this project, we were working in a game engine that, to the best of our knowledge, does not have a testing framework in its ecosystem. One of the things our team explored over the course of the project, and eventually did, was to build a test framework of our own.
Evolving towards a test framework
(The idea behind this progression is shamelessly adapted from the first part of Kent C. Dodds's Assert.js workshop. If not for this video, I don't think I would have dared to contemplate writing a test framework at all, but he demystified the process and made it so accessible. Thanks Kent!)
A test is a bit of code that, when run, tells you something about whether some other part of your code (the subject under test) is working as intended.
In my previous post, I laid out the key pieces of information that the developer needs to be able to identify quickly, in order to understand what their tests are telling them:
How many tests ran?
How many tests passed?
How many tests failed?
Which tests failed?
What was the expected result?
What is the actual result?
Which line of code is the immediate cause of test failure?
If in doubt: What is the stack trace?
Let’s now write a single test, the simplest test that fulfills these conditions.
Writing one test
Let's say I need to write a function that converts inches to millimeters. For the rest of this post, I'll conveniently ignore the question of "how many tests ran?" and concern myself with the other pieces of information. If we want to determine whether the subject under test is working as intended, we first need to define what "working as intended" means. In this case, let's say we want to convert 10 inches to millimeters:
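One way to write that first test, as a minimal sketch using nothing but console.log (the convertInchesToMillimeters() function is defined in the next section), is:

// A sketch of the first test: plain console output, no framework.
const expected = 254; // 10 inches is 254 millimeters
const actual = convertInchesToMillimeters(10);

if (actual === expected) {
  console.log("Passed");
} else {
  console.log("Expected: ", expected);
  console.log("Actual: ", actual);
}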
This test prints Passed if it passes. If it fails, it prints out the expected result and the actual result.
“Wait a second,” you might be thinking, “you said there should be a third piece of information given for test failures. Which line of code caused the test failure?”
I’ll get there. Now, let’s write the code to pass the test:
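Assuming the function name used in the test above, a one-line implementation might be:

// 1 inch is exactly 25.4 millimeters
const convertInchesToMillimeters = (inches) => inches * 25.4;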
You’ll notice that the subject under test is just one line. If the test fails, there’s only one line to look at!
In all seriousness, there’s an important observation to be made here. If an exception is thrown while running the code under test and that exception is not caught, Jest will point it out to you, as will most other testing frameworks. In that case, you know exactly where the immediate failure is inside the code under test.
However, uncaught exceptions are the exception (ba-dum-tss). They're the one time the testing framework can tell you, from within the subject under test, where the test failed. The rest of the time it can't, because the test and the subject under test are insulated from each other. The test does not know about its subject's implementation details (or at least, it shouldn't).
So what is that “which line of code is the immediate cause of test failure” stuff? I’ll admit, I may have tried to be too pithy with that one-liner. If an exception is thrown and not caught, your test output should identify where that line of code is. On the other hand, if a test result does not meet its expectation, your test output should tell you where in the test that failed expectation is.
As a case in point, I wrote a test in Jest and made it fail. Look at the information Jest gives me:
(Image: output from a failing test in Jest)
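For context, a failing Jest test might look something like this (a hypothetical example using Jest's test() and expect().toBe() API, not necessarily the exact test from the screenshot):

// Hypothetical failing Jest test: the expectation is wrong on purpose.
test("convert inches to millimeters", () => {
  expect(convertInchesToMillimeters(10)).toBe(250); // should be 254
});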
Jest shows the expected result, the received (actual) result, and the line in the test defining the expected behaviour that didn’t happen. In other words, Jest points you to the line of code in the test that defines this as a failed test.
That is something that our test still lacks. Of course, there’s only one test and that test has only one “assertion”, so let’s add more.
Adding a second test
Now, I’ll add a function that converts kilograms to pounds. Again, I write the test first:
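A sketch in the same style, assuming a convertKilogramsToPounds() function written alongside it, might be:

// Same pattern as before, wrapped in a block so the consts
// don't clash with the first check in the same file.
{
  const expected = 22.2; // 10 kilograms is roughly 22.2 pounds
  const actual = convertKilogramsToPounds(10);

  if (actual === expected) {
    console.log("Passed");
  } else {
    console.log("Expected: ", expected);
    console.log("Actual: ", actual);
  }
}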
Rather irresponsibly, I commit this without running my tests (“it’s just a toy codebase”), push it, and go off to lunch.
Now my teammate pulls the repo, runs the test file, and sees:
Passed
Expected: 22.2
Actual: 20
It’s only two tests, but we’ve started to run into problems. From the test output alone, my teammate doesn’t know which test passed, which test failed, and where to find the expectation that failed. Of course, with two tests, you might be tempted to dismiss this as a contrived and trivial problem, but it doesn’t take much imagination to see that at just 10, 15, 20 tests, the feedback loop will already start to slow down. There’s not enough information to quickly identify and fix failing tests.
This form of automated testing doesn’t scale well.
Minimum Viable Test Framework: Which Test Failed?
There are two problems with the test output to be solved here. At a glance, we are unable to identify:
Which tests passed and which tests failed
Where in the test the failed expectation is
We might be tempted to fix this problem by writing a console.log("testNameHere") at the top of each test function. Don’t do it – that way lies lots of copy and paste and madness.
Instead, let's think of the test from an object-oriented perspective. What properties does a Test object need to have? It needs to have a name or a description, so that we can quickly identify which tests passed or failed. It needs to have test code that we can run. There are other things we could add to our Test object, but this is the absolute minimum.
Having conceptualised the test as an object, I’m now going to do a 180 and write it as a function instead:
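A minimal sketch of that function, based on the description below (it just prints the name, then runs the callback), might be:

// name: identifies the test in the output
// testCode: a callback containing the test itself
const test = (name, testCode) => {
  console.log(name);
  testCode();
};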
Now we use our new test() function to invoke our tests:
test("convert inches to millimeters", () => {
  const expected = 254;
  const actual = convertInchesToMillimeters(10);
  if (actual === expected) {
    console.log("Passed");
  } else {
    console.log("Expected: ", expected);
    console.log("Actual: ", actual);
  }
});
If this is starting to look familiar, it should. The function signature for Jest’s test() function is test(name, fn, timeout), with timeout being optional. We’re slowly evolving our way towards a test framework. We’re not there yet, though: all our test() function does is print the name of the test before running it. The test() function doesn’t know anything about whether testCode() passes or fails – it doesn’t know anything about testCode(). How can the testCode() callback tell the test() function whether there was a test failure?
By throwing an exception!
Here we can take the opportunity to write an assertEquals() function and also keep our code DRY. We’ll start by simply moving the comparison between the expected and actual values into its own function:
const assertEquals = (expected, actual) => {
  if (actual === expected) {
    console.log("Passed");
  } else {
    console.log("Expected: ", expected);
    console.log("Actual: ", actual);
  }
};

test("convert inches to millimeters", () => {
  const expected = 254;
  const actual = convertInchesToMillimeters(10);
  assertEquals(expected, actual);
});
Next, we need to make assertEquals() throw an error if the expected and actual values are not equal:
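A sketch of that change, along with a test() function updated to catch the error (matching the output further down), might look like this:

const assertEquals = (expected, actual) => {
  if (actual !== expected) {
    // The error message carries the expected and actual values
    throw new Error(`Expected: ${expected}, actual: ${actual}`);
  }
};

const test = (name, testCode) => {
  try {
    testCode(); // if no exception is thrown, the test passes
    console.log(`PASSED: ${name}`);
  } catch (error) {
    console.log(`FAILED: ${name}`);
    console.log(error.message);
  }
};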
I’ve taken the opportunity to move the “Passed” message from the assertion block to the test() function. The assertEquals() function has no idea which test it’s running inside of, and it shouldn’t be assertEquals()’s job to decide if a test has passed or not. Besides, there may be multiple assertEquals() inside one test().
If the testCode() callback runs without throwing any exceptions, it passes. Otherwise, it fails. At this point, running our tests will produce this test output:
PASSED: convert inches to millimeters
FAILED: convert kilograms to pounds
Expected: 22.2, actual: 20
Now we know exactly which test failed. The next step is to identify where it failed.
Minimum Viable Test Framework: Where Did The Test Fail?
It turns out that the task of identifying which assertion caused the test to fail is much easier than putting a name on the failed test. We can simply print error.stack instead of error.message, and get the call stack at the point the error was thrown:
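In the test() function, that's a one-word change in the catch block; a sketch, with the output it produces shown right after:

const test = (name, testCode) => {
  try {
    testCode();
    console.log(`PASSED: ${name}`);
  } catch (error) {
    console.log(`FAILED: ${name}`);
    console.log(error.stack); // the stack trace, not just the message
  }
};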
PASSED: convert inches to millimeters
FAILED: convert kilograms to pounds
Error: Expected: 22.2, actual: 20
at assertEquals (~/foo.js:3:11)
at ~/foo.js:31:3
This is all the information we need. It is arguably too much information. I’ve already manually truncated the stack trace, but we still have two line numbers – 3 and 31 – and we only care about line 31 (line 3 is the throw new Error() line from inside the assertEquals() function).
Still, the refinement of the minimum viable test framework is an exercise that I will leave to the reader.
Minimum Viable Test Framework: Next Steps
This is a good time to revisit the information that we care about when getting fast feedback from tests:
How many tests ran?
How many tests passed?
How many tests failed?
Which tests failed?
What was the expected result?
What is the actual result?
Which line of code is the immediate cause of test failure?
If in doubt: What is the stack trace?
We’ve made a lot of progress here in identifying which tests failed and how. This test framework still doesn’t tell us how many tests ran, passed or failed, but I think for most developers, that’s a challenge that’s approachable enough that I won’t cover it here.
If you wanted to extend this test framework, there are a lot of things you could do next. You might want to distinguish AssertionErrors from other types of runtime errors. You might want to time how long the tests take to run. You might want to have the ability to group tests into test suites, and to run only specified test suites. You might want to create lifecycle methods that run at the start and end of each test or test suite. You might want to implement mocking or stubbing.
Regardless, I think you’ll agree that two of the biggest workhorses of any test framework are the test() and assertEquals() functions, or their equivalents in the framework of your choice. Hopefully, this post has given you some insight into the inner workings of test frameworks, and a deeper appreciation for how the best test frameworks seamlessly tighten the feedback loop in software development.
Over the course of my last project, I found myself examining many of my own implicit assumptions about testing. I’d like to explore some of them today and share what I’ve learned.
What is a test?
This is a simple question, but the first time I heard it posed was a couple of weeks ago, while watching Kent C. Dodds's Assert.js workshop from 2018:
(Video: Kent C. Dodds talks about testing software)
I was actually befuddled by this question, because I’d never thought about it! Up until mid December, the way I thought about tests was primarily in terms of frameworks and levels of the test pyramid.
My immediate response (since this was at a JavaScript conference) was “it’s a thing you write in Jest, or Mocha, or some other test framework, to check if your code is working.” The problem with this definition is that it’s circular: What is Jest? Jest is a test framework. What is a test framework? It’s a thing that helps you run tests. What is a test? A test is a thing you write in Jest or some other framework…
A much better definition: a test is code that throws an error when the actual result of something does not match the expected output.
Personally, I would go even further and argue that throwing an error is an implementation detail (although an extremely useful detail, as we’ll see in the next blog post or two). From a technical perspective, a test is simply a bit of code that, when run, tells you something about whether some other part of your code (the subject under test) is working as intended.
This test code doesn’t have to be run inside a test framework, it doesn’t have to use an assertion library, it doesn’t have to do any fancy things like calculate test coverage, take snapshots, or even print out a test summary.
The funny thing is, I knew this implicitly. I had just finished the Introduction to Computer Systems class in my part-time degree program, and in our C programming assignments we were exhorted to write tests. And yet there was no expectation that we would use a test framework or assertion library to do so (I don't think the words "test framework" or "assertion library" were ever uttered in the course).
So what did I do? During development, I stuck code like this directly in main():
// this...
// 1 is a pass, 0 is a fail
printf("%d\n", expected == actual);

// ... or this...
printf("Expected: %s\n", expected);
printf("Actual: %s\n", actual);

// ... or this!
if (actual != expected) {
    printf("ERROR!!!!!!!!!1!1!!11!!!\n");
    printf("Expected: %s\n", expected);
    printf("Actual: %s\n", actual);
}
Those were my tests, and because I never needed to run more than a few at any given time, this simple setup worked perfectly fine. Nothing we wrote in class was ever complicated enough for me to need more than a few printf statements. It doesn’t take long, however, before this approach becomes unwieldy, even for simple command line applications.
This brings me to the next question: what is the purpose of writing tests?
Why write tests?
The question “why test?” is a simple one to answer. We test software so that we know if it works the way it’s supposed to.
The question "why write tests?" is not the same question as "why test?" It's a subtle but important difference: writing tests implies automated testing against a specification, while testing in general does not.
Sure, we often talk about automated testing as giving us confidence in our code – and yet, how often does code go into production without any manual testing whatsoever? Rigorous testing in general is what gives us confidence in our code, and automated testing is just one dimension of that.
This means that writing tests requires an additional layer of justification. Test automation is not free: it costs developer time, as any stakeholder desperate to get the next release into production will tell you. A manual tester can do everything that an automated test can do, and a manual tester can also do many things that an automated test cannot do.
So why write tests?
The purpose of writing tests is to get fast feedback. This is the principle that justifies the economic cost of writing tests. It's an investment of time that pays dividends: bugs are detected sooner, developers don't lose context while debugging or refactoring, and developer and QA time is freed up for higher-value activities.
You know this, I know this, and we still struggle to convey the full impact that good test coverage has on development. We often frame writing tests as “providing confidence in our code”, which is true, but as I mentioned above, automated testing is not the only way to be confident that our code performs to spec. The real reason for test automation is that this confidence can be provided quickly.
This is also the reason that the test pyramid looks the way it does, with unit tests forming the base and E2E tests at the top. If we have E2E or functional tests that mimic an entire user journey, why do we even need unit tests? After all, E2E tests are the tests that give us the greatest confidence that the application performs as expected – but E2E tests also impose a lot of overhead, have many potential points of failure, and take more time to run. We use unit tests because unit tests give us faster feedback.
(An aside: if we want fast feedback, why do we run automated E2E tests, since they’re slow? It’s important to consider what E2E testing is, and what it’s meant to be an alternative to. Unit tests and E2E tests are not substitutes for each other. Instead, E2E tests are meant to replace repetitive manual testing, and automated E2E tests definitely give faster feedback than a human. I should know – I used to be that human.)
The value of a test suite that can run in seconds rather than minutes is the difference between refactoring being tolerable or intolerable. If you have to wait three minutes every time you refactor something minor to confirm that your tests still pass, the refactoring is not going to happen. That’s why fast tests are essential to any refactoring.
Understanding that we write tests to get fast feedback then helps us to frame how we go about writing our tests.
How do you write tests?
This section isn’t about the mechanics of writing tests. Instead, it’s about what information you need from a test in order to extract fast feedback from it.
There are three aspects of getting fast feedback from a test:
How long does it take the test to run?
How long does it take to determine whether the test has passed?
How long does it take to determine why a test failed?
I’m going to skip over the first point, for the simple reason that it’s trivially obvious that all else being equal, a faster test provides faster feedback. There’s been plenty of ink spilled elsewhere about how to write fast tests, by people much smarter and with far more experience than me.
The second point also looks trivial. What do you mean, how long does it take to determine whether the test has passed? I’ll point you up to one of my wonderful C “tests”:
// this...
// 1 is a pass, 0 is a fail
printf("%d\n", expected == actual);
Imagine ten of these tests:
1
1
1
1
1
1
1
1
1
How quickly can you determine that all your tests have passed?
Count again: there are only 9 outputs, and none of them are 0. Why? Which of the 10 failed silently?
This is an easy problem to solve: don’t write tests like this! Make sure you can see at a glance whether a test passed or failed, which test passed or failed, and how many tests were run. That’s part of the answer to “how do you write tests?”. You write them so that the most important information is reported most prominently:
How many tests ran, and did they all pass?
If the answer to the second question is yes, these two pieces of information are all I need to know. Anything else is gravy.
Why do we care how many tests ran? If you expected 20 tests but only 15 of them ran, a 100% pass rate could be a false positive.
If there were failures, how many were there, and what were they?
From this point, it’s an easy jump to the third point: if a test fails, you want as much relevant information as possible to determine why it failed – and nothing else.
The relevant pieces of information answer these questions:
What is the expected result?
What is the actual result?
Which line of code is the immediate cause of the test failure?
If in doubt: what is the stack trace?
The risk of adding more information into the test output (logging intermediate states, etc.) as a matter of routine is that it generally slows debugging down. Remember, the purpose of writing tests is to provide fast feedback. Every additional piece of information that is not essential to debugging a failed test is noise, and it will slow you down.
In order for a test to provide fast feedback, then, the test needs to:
Run fast (duh)
Clearly convey the most critical information about test passes and failures
That’s how to write tests: write them in such a way that they fulfill their purpose of giving fast feedback. If you’ve only ever written tests in the context of test frameworks, none of this seems groundbreaking or meaningful, but that’s probably because you’re used to getting feedback from your tests so quickly you don’t even have to think about what your test framework is doing for you.
Next: thinking about test frameworks
This is the motivation behind a test framework. A test framework gives you the tools to quickly write repeatable, automated tests that give you fast feedback.
In my next post, I want to explore the parts that make up a test framework – a “minimum viable test framework”, as it were.