**Based on materials by Katy Huff, Rachel Slaybaugh, and Anthony Scopatz**

![image](media/test-in-production.jpg)

# What is testing?

Software testing is a process by which one or more expected behaviors
and results from a piece of software are exercised and confirmed. Well-chosen
tests will confirm expected code behavior for the extreme
boundaries of the input domains, output ranges, parametric combinations,
and other behavioral **edge cases**.

# Why test software?

Unless you write flawless, bug-free, perfectly accurate, fully precise,
and predictable code **every time**, you must test your code in order to
trust it enough to answer in the affirmative to at least a few of the
following questions:

- Does your code work?
- Does it do what you think it does? ([Patriot Missile Failure](http://www.ima.umn.edu/~arnold/disasters/patriot.html))
- Does it continue to work after changes are made?
- Does it continue to work after system configurations or libraries are upgraded?
- Does it respond properly for a full range of input parameters?
- What about **edge or corner cases**?
- What's the limit on that input parameter?
- How will it affect your [publications](http://www.nature.com/news/2010/101013/full/467775a.html)?

## Verification

*Verification* is the process of asking, "Have we built the software
correctly?" That is, is the code bug free, precise, accurate, and
repeatable?

## Validation

*Validation* is the process of asking, "Have we built the right
software?" That is, is the code designed in such a way as to produce the
answers we are interested in, the data we want, and so on?

## Uncertainty Quantification

*Uncertainty Quantification* is the process of asking, "Given that our
algorithm may not be deterministic, was our execution within acceptable
error bounds?" This is particularly important for anything that uses
random numbers, e.g., Monte Carlo methods.

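As a minimal sketch of the idea (the `estimate_pi` function and the
tolerance below are illustrative assumptions, not part of this lesson), a
test of a stochastic result should assert that the answer lands within an
error band rather than matching an exact value:

```python
import random


def estimate_pi(samples):
    # Monte Carlo estimate of pi: the fraction of uniform random
    # points that land inside the unit quarter-circle, times 4
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / samples


def test_estimate_pi():
    # the result is random, so assert it falls within a loose
    # error bound instead of demanding exact equality
    assert abs(estimate_pi(100000) - 3.14159265) < 0.05
```
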
# Where do tests live?

Say we have an averaging function:

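```python
def mean(numlist):
    total = sum(numlist)
    length = len(numlist)
    return total/length
```
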
Tests could be implemented as runtime **exceptions in the function**:

```python
def mean(numlist):
    try:
        total = sum(numlist)
        length = len(numlist)
    except TypeError:
        raise TypeError("The number list was not a list of numbers.")
    except Exception:
        print("There was a problem evaluating the number list.")
    return total/length
```

Sometimes tests are functions alongside the function definitions they test:

```python
def mean(numlist):
    try:
        total = sum(numlist)
        length = len(numlist)
    except TypeError:
        raise TypeError("The number list was not a list of numbers.")
    except Exception:
        print("There was a problem evaluating the number list.")
    return total/length


def test_mean():
    assert mean([0, 0, 0, 0]) == 0
    assert mean([0, 200]) == 100
    assert mean([0, -200]) == -100
    assert mean([0]) == 0


def test_floating_mean():
    assert mean([1, 2]) == 1.5
```

Sometimes tests live in an executable independent of the main executable:

```python
def mean(numlist):
    try:
        total = sum(numlist)
        length = len(numlist)
    except TypeError:
        raise TypeError("The number list was not a list of numbers.")
    except Exception:
        print("There was a problem evaluating the number list.")
    return total/length
```

Then, in a different file, there is a test module:

```python
from mean import mean


def test_mean():
    assert mean([0, 0, 0, 0]) == 0
    assert mean([0, 200]) == 100
    assert mean([0, -200]) == -100
    assert mean([0]) == 0


def test_floating_mean():
    assert mean([1, 2]) == 1.5
```

# When should we test?

The three right answers are:

- **Always!**
- **Early!**
- **Often!**

The longer answer is that testing either before or after your software
is written will improve your code, but testing after your program is
used for something important is too late.

If we have a robust set of tests, we can run them before adding
something new and after adding something new. If the tests give the same
results (as appropriate), we can have some assurance that we didn't
break anything. The same idea applies to making changes in your system
configuration, updating support codes, etc.

Another important feature of testing is that it helps you remember what
all the parts of your code do. If you are working on a large project
over three years and you end up with 200 classes, it may be hard to
remember what the widget class does in detail. If you have a test that
checks all of the widget's functionality, you can look at the test to
remember what it's supposed to do.

# Who should test?

In a collaborative coding environment, where many developers contribute
to the same code base, developers should be responsible individually for
testing the functions they create and collectively for testing the code
as a whole.

Professionals often test their code, and take pride in their test
coverage: the percentage of their functions that they feel confident are
comprehensively tested.

# How are tests written?

The type of tests that are written is determined by the testing
framework you adopt. Don't worry, there are a lot of choices.

**Exceptions:** Exceptions can be thought of as a type of runtime test.
They alert the user to exceptional behavior in the code. Often,
exceptions are related to functions that depend on input that is unknown
at compile time. Checks that occur within the code to handle the
exceptional behavior resulting from this type of input are called
exceptions.

**Unit Tests:** Unit tests are a type of test which test the fundamental
units of a program's functionality. Often, this is on the class or
function level of detail. However, what defines a *code unit* is not
formally defined.

To test functions and classes, the interfaces (APIs) - rather than the
implementations - should be tested. Treating the implementation as a
black box, we can probe the expected behavior with boundary cases for
the inputs.

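For example, one boundary case for the mean() function above is the
empty list. A sketch of such a probe might look like this (expecting
ZeroDivisionError assumes the plain-division implementation shown
above):

```python
from nose.tools import assert_raises

from mean import mean


def test_mean_empty_list():
    # boundary case: the average of zero numbers is undefined,
    # so the implementation is expected to raise
    assert_raises(ZeroDivisionError, mean, [])
```
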
**System Tests:** System level tests are intended to test the code as a
whole. As opposed to unit tests, system tests ask for the behavior of
the code as a whole. This sort of testing involves comparison with other
validated codes, analytical solutions, etc.

**Regression Tests:** A regression test ensures that new code does not
change anything. If you change the default answer, for example, or add a
new question, you'll need to make sure that missing entries are still
handled correctly.

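A minimal sketch of the idea, assuming a hypothetical `reference.txt`
holding a value captured from a known-good earlier run:

```python
from mean import mean


def test_mean_regression():
    # reference.txt is a hypothetical file saved from a previous,
    # trusted run; any change in the output signals a regression
    with open("reference.txt") as f:
        expected = float(f.read())
    assert mean([0, 200]) == expected
```
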
**Integration Tests:** Integration tests query the ability of the code
to integrate well with the system configuration and third-party
libraries and modules. This type of test is essential for code that
depends on libraries which might be updated independently of your code,
or when your code might be used by a number of users who may have
various versions of libraries.

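As a sketch, an integration test for the mean() function above might
check agreement with whatever numpy version happens to be installed
(pairing these two is an illustrative assumption):

```python
import numpy as np

from mean import mean


def test_mean_matches_numpy():
    # exercises the third-party dependency: if a numpy upgrade
    # changes behavior, this test catches the disagreement
    numbers = [0.0, 1.5, 3.0]
    assert mean(numbers) == np.mean(numbers)
```
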
**Test Suites:** Putting a series of unit tests into a collection of
modules creates a test suite. Typically the suite as a whole is
executed (rather than each test individually) when verifying that the
code base still functions after changes have been made.

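With nose (introduced below), for example, running the test runner with
no arguments discovers every matching test module under the current
directory, so the whole suite runs as one:

```bash
nosetests
```
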
# Elements of a Test

**Behavior:** The behavior you want to test. For example, you might want
to test the fun() function.

**Expected Result:** This might be a single number, a range of numbers,
a new fully defined object, a system state, an exception, etc. When we
run the fun() function, we expect to generate some fun. If we don't
generate any fun, the fun() function should fail its test.
Alternatively, if it does create some fun, the fun() function should
pass this test. The expected result should be known *a priori*. For
numerical functions, this result is ideally determined analytically even
if the function being tested isn't.

**Assertions:** Require that some conditional be true. If the
conditional is false, the test fails.

**Fixtures:** Sometimes you have to do some legwork to create the
objects that are necessary to run one or many tests. These objects are
called fixtures, as they are not really part of the test itself but
rather involve getting the computer into the appropriate state.

For example, since fun varies a lot between people, the fun() function
is a method of the Person class. In order to check the fun function,
then, we need to create an appropriate Person object on which to run it.

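A sketch of that flow, with a stub standing in for the hypothetical
Person class from the text:

```python
class Person(object):
    """Stub of the hypothetical Person class."""

    def fun(self):
        return 42


def test_person_fun():
    # fixture: build the object the test needs...
    person = Person()
    # ...then exercise the behavior under test
    assert person.fun() > 0
```
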
**Setup and teardown:** Creating fixtures is often done in a call to a
setup function. Deleting them and other cleanup is done in a teardown
function.

**The Big Picture:** Putting all this together, the testing algorithm is
often:

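```
setup()
test()
teardown()
```
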
But sometimes it's the case that your tests change the fixtures. If so,
it's better for the setup() and teardown() functions to occur on either
side of each test. In that case, the testing algorithm should be:

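```
setup()
test1()
teardown()

setup()
test2()
teardown()

setup()
test3()
teardown()
```
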
# Nose: A Python Testing Framework

The testing framework we'll discuss today is called nose. However, there
are several other testing frameworks available in most languages. Most
notable is [JUnit](http://www.junit.org/) in Java, which can arguably be
credited with inventing the testing framework.

## Where do nose tests live?

Nose tests are files that begin with `Test-`, `Test_`, `test-`, or
`test_`. Specifically, these satisfy the testMatch regular expression
`[Tt]est[-_]`. (You can also teach nose to find tests by declaring them
in the unittest.TestCase subclasses that you create in your code. You
can also create test functions which are not unittest.TestCase
subclasses if they are named with the configured testMatch regular
expression.)

To write a nose test, we make assertions.

```python
assert should_be_true()
assert not should_not_be_true()
```

Additionally, nose itself defines a number of assert functions which can
be used to test more specific aspects of the code base.

```python
from nose.tools import *

assert_almost_equal(a, b)
assert_raises(exception, func, *args, **kwargs)
assert_is_instance(a, b)
```

Moreover, numpy offers similar testing functions for arrays:

```python
from numpy.testing import *

assert_array_equal(a, b)
assert_array_almost_equal(a, b)
```

## Exercise: Writing tests for mean()

There are a few tests for the mean() function listed in this lesson.
What are some tests that should fail? Add at least three test cases to
this set. Edit the `test_mean.py` file, which tests the mean() function
in `mean.py`.

*Hint:* Think about what form your input could take and what you should
do to handle it. Also, think about the type of the elements in the list.
What should be done if you pass a list of integers? What if you pass a
list of strings?

```bash
nosetests test_mean.py
```

# Test Driven Development

Test driven development (TDD) is a philosophy whereby the developer
creates code by **writing the tests first**. That is to say, you write
the tests *before* writing the associated code!

This is an iterative process whereby you write a test, then write the
minimum amount of code to make the test pass. If a new feature is
needed, another test is written and the code is expanded to meet this
new use case. This continues until the code does what is needed.

TDD operates on the YAGNI principle (You Ain't Gonna Need It). People
who diligently follow TDD swear by its effectiveness. This development
style was put forth most strongly by [Kent Beck in
2002](http://www.amazon.com/Test-Driven-Development-By-Example/dp/0321146530).

Say you want to write a fib() function which generates values of the
Fibonacci sequence at given indexes. You would - of course - start by
writing the test, possibly testing a single value:

```python
from nose.tools import assert_equal

from fib import fib


def test_fib1():
    obs = fib(2)
    exp = 1
    assert_equal(obs, exp)
```

You would *then* go ahead and write the actual function:

```python
def fib(n):
    # you snarky so-and-so
    return 1
```

And that is it, right?! Well, not quite. This implementation fails for
most other values. Adding tests, we see that:

```python
def test_fib1():
    obs = fib(2)
    exp = 1
    assert_equal(obs, exp)


def test_fib2():
    obs = fib(0)
    exp = 0
    assert_equal(obs, exp)

    obs = fib(1)
    exp = 1
    assert_equal(obs, exp)
```

This extra test now requires that we bother to implement at least the
base cases:

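```python
def fib(n):
    # handle the base cases explicitly
    if n == 0 or n == 1:
        return n
    else:
        return 1
```
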
However, this function still falls over for `2 < n`. Time for more
tests:

```python
def test_fib1():
    obs = fib(2)
    exp = 1
    assert_equal(obs, exp)


def test_fib2():
    obs = fib(0)
    exp = 0
    assert_equal(obs, exp)

    obs = fib(1)
    exp = 1
    assert_equal(obs, exp)


def test_fib3():
    obs = fib(3)
    exp = 2
    assert_equal(obs, exp)

    obs = fib(6)
    exp = 8
    assert_equal(obs, exp)
```

At this point, we had better go ahead and try to do the right thing...

```python
def fib(n):
    # the recursive Fibonacci definition
    if n == 0 or n == 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)
```

Here it becomes very tempting to take an extended coffee break or
possibly a power lunch. But then you remember those pesky negative
numbers and floats. Perhaps the right thing to do here is to simply
declare such inputs undefined:

```python
def test_fib1():
    obs = fib(2)
    exp = 1
    assert_equal(obs, exp)


def test_fib2():
    obs = fib(0)
    exp = 0
    assert_equal(obs, exp)

    obs = fib(1)
    exp = 1
    assert_equal(obs, exp)


def test_fib3():
    obs = fib(3)
    exp = 2
    assert_equal(obs, exp)

    obs = fib(6)
    exp = 8
    assert_equal(obs, exp)


def test_fib4():
    obs = fib(-1)
    exp = NotImplemented
    assert_equal(obs, exp)

    obs = fib(2.5)
    exp = NotImplemented
    assert_equal(obs, exp)
```

This means that it is time to add the appropriate case to the function
itself:

```python
def fib(n):
    # sequence and you shall find
    if n < 0 or int(n) != n:
        return NotImplemented
    elif n == 0 or n == 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)
```

# Quality Assurance Exercise

Can you think of other tests to make for the fibonacci function? I
promise there are at least a couple more.

Implement one new test in test_fib.py, run nosetests, and if it fails,
implement a more robust function for that case.

And thus - finally - we have a robust function together with working
tests!

# Exercise

**The Problem:** In 2D or 3D, we have two points (p1 and p2) which
define a line segment. Additionally, there exists experimental data
which can be anywhere in the domain. Find the data point which is
closest to the line segment.

In the `close_line.py` file there are four different implementations
which all solve this problem. [You can read more about them
here.](http://inscight.org/2012/03/31/evolution_of_a_solution/) However,
there are no tests! Please write, from scratch, a `test_close_line.py`
file which tests the closest\_data\_to\_line() functions.

*Hint:* You can use one implementation function to test another. Below
is some sample data to help you get started.

![image](media/evolution-of-a-solution-1.png)

```python
import numpy as np

p1 = np.array([0.0, 0.0])
p2 = np.array([1.0, 1.0])
data = np.array([[0.3, 0.6], [0.25, 0.5], [1.0, 0.75]])
```