From e8d13169a367183ae672a8e415eba75d49a86bad Mon Sep 17 00:00:00 2001
From: Mike Jackson
Date: Fri, 15 Mar 2013 07:54:52 -0700
Subject: [PATCH] First complete draft of all material

W. Trevor King: I dropped everything from the original 025e379 except for the Markdown testing/ modifications.
---
 testing/Conclusion.md |  15 +++
 testing/README.md     | 286 ++----------------------------------------
 testing/RealWorld.md  |  57 +++++++++
 testing/TDD.md        |  59 +++++++++
 testing/Writing.md    | 146 +++++++++++++++++++++
 5 files changed, 290 insertions(+), 273 deletions(-)
 create mode 100755 testing/Conclusion.md
 create mode 100755 testing/RealWorld.md
 create mode 100755 testing/TDD.md
 create mode 100755 testing/Writing.md

diff --git a/testing/Conclusion.md b/testing/Conclusion.md
new file mode 100755
index 0000000..58c6530
--- /dev/null
+++ b/testing/Conclusion.md
@@ -0,0 +1,15 @@
+## Conclusion
+
+Testing
+
+* Saves us time.
+* Gives us confidence that our code does what we want and expect it to.
+* Promotes trust that our code, and so our research, is correct.
+
+If in doubt, remember [Geoffrey Chang](http://en.wikipedia.org/wiki/Geoffrey_Chang) and, in the words of Bruce Eckel in [Thinking in Java, 3rd Edition](http://www.mindview.net/Books/TIJ/), "If it's not tested, it's broken".
+
+## Find out more...
+
+* [Software Carpentry](http://software-carpentry.org/)'s online [testing](http://software-carpentry.org/4_0/test/index.html) lectures.
+* A discussion on [is it worthwhile to write unit tests for scientific research codes?](http://scicomp.stackexchange.com/questions/206/is-it-worthwhile-to-write-unit-tests-for-scientific-research-codes)
+* G. Wilson, D. A. Aruliah, C. T. Brown, N. P. Chue Hong, M. Davis, R. T. Guy, S. H. D. Haddock, K. Huff, I. M. Mitchell, M. Plumbley, B. Waugh, E. P. White, P. Wilson (2012) "[Best Practices for Scientific Computing](http://arxiv.org/abs/1210.0530)", arXiv:1210.0530 [cs.MS].
diff --git a/testing/README.md b/testing/README.md
index a23aac6..82e98a1 100755
--- a/testing/README.md
+++ b/testing/README.md
@@ -70,283 +70,23 @@ But if this is not compelling, then, if nothing else, writing tests is an investment
 * We can detect more quickly whether refactoring, optimisation or parallelisation has introduced bugs.
 * We can run our tests while doing other, more interesting, things.
 
-## Let's start writing some tests
+## Fixing things before we test...
 
-In the file `dna.py` we have a Python dictionary that stores the molecular weights of the 4 standard DNA nucleotides, A, T, C and G,
+Before we test our code, it can be very productive to get a colleague to look at it for us...why?
 
-    NUCLEOTIDES = {'A':131.2, 'T':304.2, 'C':289.2, 'G':329.2}
+> **What we know about software development - code reviews work**
 
-and a Python function that takes a DNA sequence as input and returns its molecular weight, which is the sum of the weights for each nucelotide in the sequence,
-
-    def calculate_weight(sequence):
-        """
-        Calculate the molecular weight of a DNA sequence.
-        @param sequence: DNA sequence expressed as an upper-case string.
-        @return molecular weight.
-        """
-        weight = 0.0
-        for ch in sequence:
-            weight += NUCLEOTIDES[ch]
-        return weight
+> Fagan (1976) discovered that a rigorous inspection can remove 60-90% of errors before the first test is run.
+> M. E. Fagan (1976). [Design and Code inspections to reduce errors in program development](http://www.mfagan.com/pdfs/ibmfagan.pdf). IBM Systems Journal 15 (3): pp. 182-211.
-We can calculate the molecular weight of a sequence by, - - weight = calculate_weight('GATGCTGTGGATAA') - print weight +> **What we know about software development - code reviews should be about 60 minutes long** -We can add a test to our code as follows, +> Cohen (2006) discovered that all the value of a code review comes within the first hour, after which reviewers can become exhausted and the issues they find become ever more trivial. +> J. Cohen (2006). [Best Kept Secrets of Peer Code Review](http://smartbear.com/SmartBear/media/pdfs/best-kept-secrets-of-peer-code-review.pdf). SmartBear, 2006. ISBN-10: 1599160676. ISBN-13: 978-1599160672. - def calculate_weight(sequence): - """ - Calculate the molecular weight of a DNA sequence. - - @param sequence: DNA sequence expressed as an upper-case string. - @return molecular weight. - """ - weight = 0.0 - try: - for ch in sequence: - weight += NUCLEOTIDES[ch] - return weight - except TypeError: - print 'The input is not a sequence e.g. a string or list' - -If the input is not a string, or a list of characters then the `for...in` statement will *raise an exception* which is *caught* by the `except` block. For example, - - print calculate_weight(123) - -This is a *runtime test*. It alerts the user to exceptional behavior in the code. Often, exceptions are related to functions that depend on input that is unknown at compile time. Such tests make our code robust and allows our code to behave gracefully - they anticipate problematic values and handle them. - -But these tests don't test our functions behaviour or whether it's implemented correctly. So, we can add some tests, - - print calculate_weight('A') - print calculate_weight('G') - print calculate_weight('GA') - -But we'd have to visually inspect the results to see they are as expected. So, let's have the computer do that for us and make our lives easier, and save us time in checking, - - assert calculate_weight('A') == 131.2 - assert calculate_weight('G') == 329.2 - assert calculate_weight('GA') == 460.4 - -`assert` checks whether a condition is true and, if not, raises an exception. - -We explicitly list the expected weights in each statement. But, by doing this there is a risk that we mistype one. A good design principle is to define constant values in one place only. As we already have defined them in `nucleotides` we can just refer to that, - - assert calculate_weight('A') == NUCLEOTIDES['A'] - assert calculate_weight('G') == NUCLEOTIDES['G'] - assert calculate_weight('GA') == NUCLEOTIDES['G'] + NUCLEOTIDES['A'] - -But this isn't very modular, and modularity is a good design principle, so let's define some test functions, - - def test_a(): - assert calculate_weight('A') == NUCLEOTIDES['A'] - def test_g(): - assert calculate_weight('G') == NUCLEOTIDES['G'] - def test_ga(): - assert calculate_weight('GA') == NUCLEOTIDES['GA'] + NUCLEOTIDES['A'] - - test_a() - test_g() - test_ga() - -And, rather than have our tests and code in the same file, let's separate them out. So, let's create - - $ nano test_dna.py - -Now, our function and nucleotides data are in `dna.py` and we want to refer to them in `test_dna.py` file, we need to *import* them. We can do this as, - - from dna import calculate_weight - from dna import NUCLEOTIDES - -Then we can add all our test functions and function calls to this file. And run the tests, - - $ python test_dna.py - -## `nose` - a Python test framework - -`nose` is a test framework for Python that will automatically find, run and report on tests written in Python. 
It is an example of what has been termed an *[xUnit test framework](http://en.wikipedia.org/wiki/XUnit)*, perhaps the most famous being JUnit for Java. - -To use `nose`, we write test functions, as we've been doing, with the prefix `test_` and put these in files, likewise prefixed by `test_`. The prefixes `Test-`, `Test_` and `test-` can also be used. - -Typically, a test function, - -* Sets up some inputs and the associated expected outputs. The expected outputs might be a single number, a range of numbers, some text, a file, a set of files, or whatever. -* Runs the function or component being tested on the inputs to get some actual outputs. -* Checks that the actual outputs match the expected outputs. We use assertions as part of this checking. We can check both that conditions hold and that conditions do not hold. - -Python `assert` allows us to check, - - assert should_be_true() - assert not should_not_be_true() - -`nose` defines additional functions which can be used to check for a rich range of conditions e.g.. - - from nose.tools import * - - assert_equal(a, b) - assert_almost_equal(a, b, 3) - assert_true(a) - assert_false(a) - assert_raises(exception, func, *args, **kwargs) - ... - -`assert_raises` is used for where we want to test that an exception is raised if, for example, we give a function a bad input. - -To run `nose` for our tests, we can do, - - $ nosetests test_dna.py - -Each `.` corresponds to a successful test. And to prove `nose` is finding our tests, let's remove the function calls from `test_dna.py` and try again, - - $ nosetests test_dna.py - -nosetests can output an "xUnit" test report, - - $ nosetests --with-xunit test_dna.py - $ cat nosetests.xml - -This is a standard format that that is supported by a number of xUnit frameworks which can then be converted to HTML and presented online. - -## Write some more tests - -Let's spend a few minutes coming up with some more tests for `calculate_weight`. Consider, - -* What haven't we tested for so far? -* Have we covered all the nucleotides? -* Have we covered all the types of string we can expect? -* In addition to test functions, other types of runtime test could we add to `calculate_weight`? - -## When 1 + 1 = 2.000000000000001 - -Computers don't do floating point arithmetic too well. This can make simple tests for the equality of two floating point values problematic due to imprecision in the values being compared. We can get round this by comparing to within a given threshold, or delta, for example we may consider *expected* and *actual* to be equal if *expected - actual < 0.000000000001*. - -Test frameworks such as `nose`, often provide functions to handle this for us. For example, to test that 2 numbers are equal when rounded to a given number of decimal places, - - $ python - >>> from nose.tools import assert_almost_equal - >>> assert_almost_equal(1.000001, 1.000002, 0) - >>> assert_almost_equal(1.000001, 1.000002, 1) - >>> assert_almost_equal(1.000001, 1.000002, 3) - >>> assert_almost_equal(1.000001, 1.000002, 6) - ... - AssertionError: 1.000001 != 1.000002 within 6 places - -## Testing in practice - -The example we've looked at is based on one function. Suppose we have a complex legacy code of 10000s of lines and which takes many input files and produces many output files. Exactly the same approach can be used as above - we run our code on a set of input files and check whether the output files match what you'd expect. For example, we could, - -* Run the code on a set of inputs. -* Save the outputs. 
-* Refactor the code e.g. to optimise it or parallelise it. -* Run the code on the inputs. -* Check that the outputs match the saved outputs. - -This was the approach taken by EPCC and the Colon Cancer Genetics Group (CCGG) of the MRC Human Genetics Unit at the Western General as part of an [Oncology](http://www.edikt.org/edikt2/OncologyActivity) project to optimise and parallelise a FORTRAN genetics code. - -The [Muon Ion Cooling Experiment](http://www.mice.iit.edu/) (MICE) have a large number of tests written in Python. They use [Jenkins](), a *continuous integration server* to build their code and trigger the running of the tests which are then [published online](https://micewww.pp.rl.ac.uk/tab/show/maus). - -## When should we test? - -We should test, - -* Early, and not wait till after we've used it to generate data for our important paper, or given it to someone else to use. -* Often, so that we know that any changes we've made to our code, or to things that our code needs (e.g. libraries, configuration files etc.) haven't introduced any bugs. - -But, when should we finish writing tests? How much is enough? - -> **What we know about software development - we can't test everything** - -> "It is nearly impossible to test software at the level of 100 percent of its logic paths", fact 32 in R. L. Glass (2002) [Facts and Fallacies of Software Engineering](http://www.amazon.com/Facts-Fallacies-Software-Engineering-Robert/dp/0321117425) ([PDF](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.2037&rep=rep1&type=pdf)). - -We can't test everything but that's no excuse for testing nothing! How much to test is something to be learned by experience, so think of it as analogous to when you finish proof reading a paper, over and over, before sending it to a conference. If you find bugs when you use your code, you did too little, so consider what you might have done and how to address this next time. - -Tests, like code, should ideally be reviewed by a colleague which helps avoid tests that, - -* Pass when they should fail, false positives. -* Fail when they should pass, false negatives. -* Don't test anything. - -For example, - - def test_critical_correctness(): - # TODO - will complete this tomorrow! - pass - -Yes, tests like this *do* occur on projects! - -## Test Driven Development - -Traditionally, we'd write our code, then write the tests. [Test driven development](http://www.amazon.com/Test-Driven-Development-By-Example/dp/0321146530) (TDD), proposed by Kent Beck, is a philosophy that turns this on its head - we write code by *writing the tests first*, then write the code to make the tests pass. If a new feature is needed, another test is written and the code is expanded to meet this new use case. This continues until the code does what is needed. This can be summarised as red-green-refactor: - - * Red - write tests based on requirements. They fail as there is no code! - * Green - write/modify code to get tests to pass. - * Refactor code - clean it up. - -By writing tests first, we're forced to think about what our code should do. In contrast, in writing our code then tests, we risk testing what the code actually does, rather than what it should do. - -TDD operates on the YAGNI principle (You Ain't Gonna Need It) to avoid developing code for which there is no need. 
-
-## TDD of a DNA complement function
-
-Given a DNA sequence consisting of A, C, T and G, we can create its complementary DNA, cDNA, by applying a mapping to each nucleotide in turn,
-
-* A => T
-* C => G
-* T => A
-* G => C
-
-For example, given DNA strand GTCA, the cDNA is CAGT.
-
-So, let's write a `complement` function that creates the cDNA strand, given a DNA strand in a string. We'll use TDD, so to start, let's create a file `test_cdna.py` and add a test,
-
-    from cdna import complement
-
-    def test_complement_a():
-        assert_equals complement('A') == 'T'
-
-And let's run the test,
-
-    $ nosetests test_cdna.py
-
-Which fails as we have no function! So, let's create a file `cdna.py`. Our initial function to get the tests to pass could be,
-
-    def complement(sequence):
-        return 'T'
-
-This is simplistic, but the test passes. Now let's add another test,
-
-    def test_complement_c():
-        assert complement('C') == 'G'
-
-To get both our tests to pass, we can change our function to be,
-
-    def complement(sequence):
-        if (sequence == 'A'):
-            return 'T'
-        else:
-            return 'G'
-
-Now, add some more tests. Don't worry about `complement` just now.
-
-Let's discuss the tests you've come up with.
-
-Now update `complement` to make your tests pass. You may want to reuse some of the logic of `calculate_weight`!
-
-When we're done, not only do we have a working function, we also have a set of tests. There's no risk of us leaving the tests "till later" and then never having time to write them.
-
-## Conclusion
-
-Testing
-
-* Saves us time.
-* Gives us confidence that our code does what we want and expect it to.
-* Promotes trust that our code, and so our research, is correct.
-
-If in doubt, remember [Geoffrey Chang](http://en.wikipedia.org/wiki/Geoffrey_Chang) and in the words of Bruce Eckel, in [Thinking in Java, 3rd Edition](http://www.mindview.net/Books/TIJ/), "If it's not tested, it's broken".
-
-## Find out more...
-
-* [Software Carpentry](http://software-carpentry.org/)'s online [testing](http://software-carpentry.org/4_0/test/index.html) lectures.
-* A discussion on [is it worthwhile to write unit tests for scientific research codes?](http://scicomp.stackexchange.com/questions/206/is-it-worthwhile-to-write-unit-tests-for-scientific-research-codes)
+## Let's dive in...
+* [Let's start writing some tests](Writing.md)
+* [Testing in practice](RealWorld.md)
+* [Test-driven development](TDD.md)
+* [Conclusions and further information](Conclusion.md)
diff --git a/testing/RealWorld.md b/testing/RealWorld.md
new file mode 100755
index 0000000..b516743
--- /dev/null
+++ b/testing/RealWorld.md
@@ -0,0 +1,57 @@
+## Testing in practice
+
+The example we've looked at is based on one function. Suppose we have a complex legacy code of tens of thousands of lines which takes many input files and produces many output files. Exactly the same approach can be used as above - we run our code on a set of input files and check whether the output files match what we'd expect. For example, we could,
+
+* Run the code on a set of inputs.
+* Save the outputs.
+* Refactor the code e.g. to optimise it or parallelise it.
+* Run the code on the inputs.
+* Check that the outputs match the saved outputs.
+
+This was the approach taken by EPCC and the Colon Cancer Genetics Group (CCGG) of the MRC Human Genetics Unit at the Western General Hospital as part of an [Oncology](http://www.edikt.org/edikt2/OncologyActivity) project to optimise and parallelise a FORTRAN genetics code.
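+
+We can automate such end-to-end tests too. As a sketch, assuming a hypothetical `simulation` executable and hypothetical input, output and reference file names (yours will differ), a test of this approach might look like,
+
+    import filecmp
+    import subprocess
+
+    def test_outputs_unchanged():
+        # Run the code on a known input to produce a fresh output file.
+        subprocess.call(['./simulation', 'input.dat', 'output.dat'])
+        # Compare the fresh output, byte by byte, with a saved reference output.
+        assert filecmp.cmp('output.dat', 'expected_output.dat', shallow=False)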
+
+The [Muon Ion Cooling Experiment](http://www.mice.iit.edu/) (MICE) have a large number of tests written in Python. They use [Jenkins](http://jenkins-ci.org/), a *continuous integration server*, to build their code and trigger the running of the tests, which are then [published online](https://micewww.pp.rl.ac.uk/tab/show/maus).
+
+## When 1 + 1 = 2.000000000000001
+
+Computers don't do floating point arithmetic too well. This can make simple tests for the equality of two floating point values problematic due to imprecision in the values being compared. We can get round this by comparing to within a given threshold, or delta, for example we may consider *expected* and *actual* to be equal if *|expected - actual| < 0.000000000001*.
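+
+We can see both the imprecision, and the threshold-based comparison, at work in Python (assuming standard double precision arithmetic),
+
+    $ python
+    >>> 0.1 + 0.2
+    0.30000000000000004
+    >>> 0.1 + 0.2 == 0.3
+    False
+    >>> abs((0.1 + 0.2) - 0.3) < 0.000000000001
+    True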
+
+Test frameworks, such as `nose`, often provide functions to handle this for us. For example, to test that two numbers are equal when rounded to a given number of decimal places,
+
+    $ python
+    >>> from nose.tools import assert_almost_equal
+    >>> assert_almost_equal(1.000001, 1.000002, 0)
+    >>> assert_almost_equal(1.000001, 1.000002, 1)
+    >>> assert_almost_equal(1.000001, 1.000002, 3)
+    >>> assert_almost_equal(1.000001, 1.000002, 6)
+    ...
+    AssertionError: 1.000001 != 1.000002 within 6 places
+
+## When should we test?
+
+We should test,
+
+* Early, and not wait until after we've used our code to generate data for our important paper, or given it to someone else to use.
+* Often, so that we know that any changes we've made to our code, or to things that our code needs (e.g. libraries, configuration files etc.) haven't introduced any bugs.
+
+But, when should we finish writing tests? How much is enough?
+
+> **What we know about software development - we can't test everything**
+
+> "It is nearly impossible to test software at the level of 100 percent of its logic paths", fact 32 in R. L. Glass (2002) [Facts and Fallacies of Software Engineering](http://www.amazon.com/Facts-Fallacies-Software-Engineering-Robert/dp/0321117425) ([PDF](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.2037&rep=rep1&type=pdf)).
+
+We can't test everything but that's no excuse for testing nothing! How much to test is something to be learned by experience, so think of it as analogous to deciding when you have finished proofreading a paper, over and over, before sending it to a conference. If you find bugs when you use your code, you did too little, so consider what you might have done differently and how to address this next time.
+
+Tests, like code, should ideally be reviewed by a colleague, which helps avoid tests that,
+
+* Pass when they should fail, false positives.
+* Fail when they should pass, false negatives.
+* Don't test anything.
+
+For example,
+
+    def test_critical_correctness():
+        # TODO - will complete this tomorrow!
+        pass
+
+Yes, tests like this *do* occur on projects!
diff --git a/testing/TDD.md b/testing/TDD.md
new file mode 100755
index 0000000..9191f61
--- /dev/null
+++ b/testing/TDD.md
@@ -0,0 +1,59 @@
+## Test Driven Development
+
+Traditionally, we'd write our code, then write the tests. [Test driven development](http://www.amazon.com/Test-Driven-Development-By-Example/dp/0321146530) (TDD), proposed by Kent Beck, is a philosophy that turns this on its head - we write code by *writing the tests first*, then write the code to make the tests pass. If a new feature is needed, another test is written and the code is expanded to meet this new use case. This continues until the code does what is needed. This can be summarised as red-green-refactor:
+
+ * Red - write tests based on requirements. They fail as there is no code!
+ * Green - write/modify code to get tests to pass.
+ * Refactor code - clean it up.
+
+By writing tests first, we're forced to think about what our code should do. In contrast, in writing our code then tests, we risk testing what the code actually does, rather than what it should do.
+
+TDD operates on the YAGNI principle (You Ain't Gonna Need It) to avoid developing code for which there is no need.
+
+## TDD of a DNA complement function
+
+Given a DNA sequence consisting of A, C, T and G, we can create its complementary DNA, cDNA, by applying a mapping to each nucleotide in turn,
+
+* A => T
+* C => G
+* T => A
+* G => C
+
+For example, given DNA strand GTCA, the cDNA is CAGT.
+
+So, let's write a `complement` function that creates the cDNA strand, given a DNA strand in a string. We'll use TDD, so to start, let's create a file `test_cdna.py` and add a test,
+
+    from cdna import complement
+
+    def test_complement_a():
+        assert complement('A') == 'T'
+
+And let's run the test,
+
+    $ nosetests test_cdna.py
+
+It fails as we have no function! So, let's create a file `cdna.py`. Our initial function to get the test to pass could be,
+
+    def complement(sequence):
+        return 'T'
+
+This is simplistic, but the test passes. Now let's add another test,
+
+    def test_complement_c():
+        assert complement('C') == 'G'
+
+To get both our tests to pass, we can change our function to be,
+
+    def complement(sequence):
+        if sequence == 'A':
+            return 'T'
+        else:
+            return 'G'
+
+Now, add some more tests. Don't worry about updating `complement` just now.
+
+Let's discuss the tests you've come up with.
+
+Now update `complement` to make your tests pass. You may want to reuse some of the logic of `calculate_weight`!
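+
+If you get stuck, one possible final version, reusing the dictionary-plus-loop pattern of `calculate_weight`, might look like the following - the `BASE_COMPLEMENT` name is just an illustration, and your solution may differ,
+
+    BASE_COMPLEMENT = {'A': 'T', 'C': 'G', 'T': 'A', 'G': 'C'}
+
+    def complement(sequence):
+        # Map each nucleotide to its complement and append it to the cDNA strand.
+        cdna = ''
+        for ch in sequence:
+            cdna += BASE_COMPLEMENT[ch]
+        return cdna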
+
+When we're done, not only do we have a working function, we also have a set of tests. There's no risk of us leaving the tests "till later" and then never having time to write them.
diff --git a/testing/Writing.md b/testing/Writing.md
new file mode 100755
index 0000000..f98d22d
--- /dev/null
+++ b/testing/Writing.md
@@ -0,0 +1,146 @@
+## Let's start writing some tests
+
+In the file `dna.py` we have a Python dictionary that stores the molecular weights of the 4 standard DNA nucleotides, A, T, C and G,
+
+    NUCLEOTIDES = {'A':131.2, 'T':304.2, 'C':289.2, 'G':329.2}
+
+and a Python function that takes a DNA sequence as input and returns its molecular weight, which is the sum of the weights for each nucleotide in the sequence,
+
+    def calculate_weight(sequence):
+        """
+        Calculate the molecular weight of a DNA sequence.
+        @param sequence: DNA sequence expressed as an upper-case string.
+        @return molecular weight.
+        """
+        weight = 0.0
+        for ch in sequence:
+            weight += NUCLEOTIDES[ch]
+        return weight
+
+We can calculate the molecular weight of a sequence by,
+
+    weight = calculate_weight('GATGCTGTGGATAA')
+    print weight
+
+We can add a test to our code as follows,
+
+    def calculate_weight(sequence):
+        """
+        Calculate the molecular weight of a DNA sequence.
+
+        @param sequence: DNA sequence expressed as an upper-case string.
+        @return molecular weight.
+        """
+        weight = 0.0
+        try:
+            for ch in sequence:
+                weight += NUCLEOTIDES[ch]
+            return weight
+        except TypeError:
+            print 'The input is not a sequence e.g. a string or list'
+
+If the input is not a string or a list of characters, then the `for...in` statement will *raise an exception* which is *caught* by the `except` block. For example,
+
+    print calculate_weight(123)
+
+This is a *runtime test*. It alerts the user to exceptional behaviour in the code. Often, exceptions are related to functions that depend on input that is unknown at compile time. Such tests make our code robust and allow our code to behave gracefully - they anticipate problematic values and handle them.
+
+But these tests don't test our function's behaviour or whether it's implemented correctly. So, we can add some tests,
+
+    print calculate_weight('A')
+    print calculate_weight('G')
+    print calculate_weight('GA')
+
+But we'd have to visually inspect the results to see they are as expected. So, let's have the computer do that for us, to make our lives easier and save us time in checking,
+
+    assert calculate_weight('A') == 131.2
+    assert calculate_weight('G') == 329.2
+    assert calculate_weight('GA') == 460.4
+
+`assert` checks whether a condition is true and, if not, raises an exception.
+
+We explicitly list the expected weights in each statement. But, by doing this there is a risk that we mistype one. A good design principle is to define constant values in one place only. As we have already defined them in `NUCLEOTIDES`, we can just refer to that,
+
+    assert calculate_weight('A') == NUCLEOTIDES['A']
+    assert calculate_weight('G') == NUCLEOTIDES['G']
+    assert calculate_weight('GA') == NUCLEOTIDES['G'] + NUCLEOTIDES['A']
+
+But this isn't very modular, and modularity is a good design principle, so let's define some test functions,
+
+    def test_a():
+        assert calculate_weight('A') == NUCLEOTIDES['A']
+    def test_g():
+        assert calculate_weight('G') == NUCLEOTIDES['G']
+    def test_ga():
+        assert calculate_weight('GA') == NUCLEOTIDES['G'] + NUCLEOTIDES['A']
+
+    test_a()
+    test_g()
+    test_ga()
+
+And, rather than have our tests and code in the same file, let's separate them out. So, let's create
+
+    $ nano test_dna.py
+
+Our function and nucleotides data are in `dna.py` and, to refer to them in `test_dna.py`, we need to *import* them. We can do this as,
+
+    from dna import calculate_weight
+    from dna import NUCLEOTIDES
+
+Then we can add all our test functions and function calls to this file. And run the tests,
+
+    $ python test_dna.py
+
+## `nose` - a Python test framework
+
+`nose` is a test framework for Python that will automatically find, run and report on tests written in Python. It is an example of what has been termed an *[xUnit test framework](http://en.wikipedia.org/wiki/XUnit)*, perhaps the most famous being JUnit for Java.
+
+To use `nose`, we write test functions, as we've been doing, with the prefix `test_` and put these in files, likewise prefixed by `test_`. The prefixes `Test-`, `Test_` and `test-` can also be used.
+
+Typically, a test function,
+
+* Sets up some inputs and the associated expected outputs. The expected outputs might be a single number, a range of numbers, some text, a file, a set of files, or whatever.
+* Runs the function or component being tested on the inputs to get some actual outputs.
+* Checks that the actual outputs match the expected outputs. We use assertions as part of this checking. We can check both that conditions hold and that conditions do not hold.
+
+Python's `assert` allows us to check,
+
+    assert should_be_true()
+    assert not should_not_be_true()
+
+`nose` defines additional functions which can be used to check for a rich range of conditions e.g.,
+
+    from nose.tools import *
+
+    assert_equal(a, b)
+    assert_almost_equal(a, b, 3)
+    assert_true(a)
+    assert_false(a)
+    assert_raises(exception, func, *args, **kwargs)
+    ...
+
+`assert_raises` is used where we want to test that an exception is raised if, for example, we give a function a bad input.
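+
+For example, hypothetical tests for `calculate_weight` might use these helpers as follows - the bad-input test relies on the fact that looking up an unknown nucleotide raises a `KeyError`, which our function does not catch,
+
+    from nose.tools import assert_equal
+    from nose.tools import assert_raises
+
+    from dna import calculate_weight
+    from dna import NUCLEOTIDES
+
+    def test_ga():
+        # Check the weight of 'GA' against the weights of its nucleotides.
+        assert_equal(calculate_weight('GA'), NUCLEOTIDES['G'] + NUCLEOTIDES['A'])
+
+    def test_unknown_nucleotide():
+        # 'X' is not a valid nucleotide, so we expect a KeyError.
+        assert_raises(KeyError, calculate_weight, 'X')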
+
+To run `nose` for our tests, we can do,
+
+    $ nosetests test_dna.py
+
+Each `.` corresponds to a successful test. And to prove `nose` is finding our tests, let's remove the function calls from `test_dna.py` and try again,
+
+    $ nosetests test_dna.py
+
+`nosetests` can output an "xUnit" test report,
+
+    $ nosetests --with-xunit test_dna.py
+    $ cat nosetests.xml
+
+This is a standard format that is supported by a number of xUnit frameworks and that can be converted to HTML and presented online.
+
+## Write some more tests
+
+Let's spend a few minutes coming up with some more tests for `calculate_weight`. Consider,
+
+* What haven't we tested for so far?
+* Have we covered all the nucleotides?
+* Have we covered all the types of string we can expect?
+* In addition to test functions, what other types of runtime test could we add to `calculate_weight`?
-- 
2.26.2