David Zemens

So You’ve Failed…

I’ve recently taken two coding tests. I failed both of them!

It’s easy to get discouraged by failure, but that doesn’t really do you any good. A better approach is to step back, evaluate what went wrong, and try to learn something from it. I’m happy to report that I’ve since solved the first one (which was objective), and learned a good bit from both exercises (the second one was subjective, so not truly “solvable”).

The first test was on HackerRank, which I wasn’t familiar with. I received no real feedback during the test or afterward. The failed test cases (3 of 24) reported only “Timeout Error” without any context. It was not helpful (actually, it was a hindrance!) that HackerRank does not expose the test criteria.

By the time I realized this timeout error was not related to browser, VPN, or other connectivity issues, it was too late, and I’m not the only one who’s run into this. Had I been familiar with the site and its constraints, I probably wouldn’t have failed!

[I] coded the simplest solution to the problem, pressed “run” and it said “Correct Answer” and then showed the output of some tests it had ran. Strangely (to me anyway) some of their test cases had “timed out” and I assumed they had server issues, or their site wasn’t correctly built. How wrong was I? What they had actually meant was “Your code is not quick enough to deal with the inputs to these tests in the allotted time.”

Beware of HackerRank – by Richard Linnell

It’s a big enough issue that HackerRank has since added documentation explaining this.
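To make the “timed out” failure mode concrete: HackerRank’s hidden test cases typically include large inputs, so a solution that is correct but inefficient passes the small samples and times out on the big ones. Here’s an illustrative sketch (this is a made-up range-sum problem, not the actual test question) showing how the same answer can be computed the slow way and the fast way:

```python
# Hypothetical HackerRank-style task: answer many range-sum queries over an array.
# A correct-but-naive per-query loop is O(n) per query and can "time out" on
# large hidden inputs; precomputing prefix sums answers each query in O(1).

def range_sums_naive(arr, queries):
    # O(n * q): re-scans the array slice for every query
    return [sum(arr[lo:hi]) for lo, hi in queries]

def range_sums_fast(arr, queries):
    # O(n + q): build prefix sums once, then each query is one subtraction
    prefix = [0]
    for x in arr:
        prefix.append(prefix[-1] + x)
    return [prefix[hi] - prefix[lo] for lo, hi in queries]

arr = [3, 1, 4, 1, 5, 9, 2, 6]
queries = [(0, 3), (2, 7), (0, 8)]
print(range_sums_naive(arr, queries))  # [8, 21, 31]
print(range_sums_fast(arr, queries))   # [8, 21, 31]
```

Both functions are “correct,” which is exactly why the bare “Timeout Error” is so confusing: the grader isn’t telling you your answer is wrong, it’s telling you your algorithm is too slow for the input sizes it feeds in.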

The other one was more involved, un-timed, and actually fun to work on. It hit me a bit harder because I was really excited about it. I was asked to refactor a C# application. I’ll do a deeper dive on this one later, but ultimately I think my failure to really address the unit tests is what got me.

Here’s the thing I probably shouldn’t cop to: I don’t really write unit tests.

I’m not alone, of course, and the rationale is pretty boilerplate: writing tests costs time (and budget), and if you’re not given the time or the budget, and not expected to write tests as part of the work requirements, they fall by the wayside.
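In fairness to the counterargument, the cheapest kind of unit test, one covering a pure function, costs almost nothing to write. A hypothetical sketch (the helper and its tests are mine for illustration, in Python rather than C#, and not from any project mentioned here):

```python
def normalize_whitespace(text):
    """Hypothetical pure helper: collapse runs of whitespace into single spaces."""
    return " ".join(text.split())

# pytest-style unit tests: plain functions containing assertions,
# discovered and run automatically by the test runner.
def test_collapses_internal_runs():
    assert normalize_whitespace("a   b\t c") == "a b c"

def test_strips_leading_and_trailing():
    assert normalize_whitespace("  hello  ") == "hello"
```

The time/budget objection bites much harder for code with external dependencies (databases, file systems, other services), which is where the distinction between unit tests and integration tests, discussed below, really matters.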

How Do You Test Your Code?

That’s not to say my code isn’t tested. I test it on a few use cases, validate the output it creates, ensure that the output script is executable, then logic-check the resulting output. After code review, it goes to QAT/Integration Testing, where it’s tested similarly in our pipeline. Our QA guy has some additional tests that he runs, comparing the outputs to known priors, etc. If it passes muster there, it goes to UAT, and only then does it get merged into our production code.

(When) Should You Write Unit Tests?

Of course, there are LOTS of very good reasons why you should be writing tests. No debate from me on any of those points. But there are also situations where unit testing isn’t really appropriate; points 3, 4 and 5 apply to most of what I’ve worked on over the past few years:

  • The code needs to interact with other deployed systems; then an integration test is called for.

I’m developing and maintaining modules that are part of a larger data pipeline, so they do pass through integration testing and user acceptance testing. Requirements are often somewhat loose, and there are multiple ways to produce “correct” outputs that are not always predefined.

  • If success or failure is so difficult to quantify that it can’t be reliably measured, such as steganography being unnoticeable to humans.

I could try to write a compiler and/or syntax validator (which would be enormously stupid, costly, and redundant), or I could write a test that runs the output script in its native application, but that’s also redundant since that’s handled by Integration Testing. Finally, UAT provides verification that the outputs are correct per the customer. If not, well, we’re agile, so we log the defect, or clarify the requirements, lather, rinse and repeat.

  • If the test itself is an order of magnitude more difficult to write than the code.

We don’t usually know in advance what the “correct” output is for a given class or method. Should we? Arguably. But when you’re automating processes that have historically been done manually by data processors, who interpret and iterate their work with customers whose requirements are anything but standard or machine-readable, and when there are multiple ways of achieving a “correct” output that are unknown in advance, it’s not practical.

Summing It Up

Everyone fails! And yes, it’s discouraging. But if you make an effort to learn from your past failures rather than dwelling on them, you will be stronger in the long run. Don’t let a single failure, or even multiple failures, discourage you too much. Brush it off, and figure out what you did wrong so that you don’t repeat the same mistakes the next time around.
