From 04a1ac198837d519c574cdd553495fd542bb8b7c Mon Sep 17 00:00:00 2001
From: Greg Wilson
Date: Tue, 23 Jul 2013 06:36:11 -0400
Subject: [PATCH] Importing more material from The Hacker Within

W. Trevor King: I dropped everything from the original 6e7b321 except
for the lessons/thw-testing/ renames.

Conflicts:
	lessons/index.html
	lessons/thw-python-debugging/basic_exceptions/index_error.py
	lessons/thw-python-debugging/basic_exceptions/io_error.py
	lessons/thw-python-debugging/basic_exceptions/key_error.py
	lessons/thw-python-debugging/basic_exceptions/name_error.py
	lessons/thw-python-debugging/basic_exceptions/syntax_error.py
	lessons/thw-python-debugging/basic_exceptions/type_error.py
	lessons/thw-python-debugging/basic_exceptions/value_error.py
	lessons/thw-python-debugging/conways_game_of_life/1_conway_pre_linted.py
	lessons/thw-python-debugging/conways_game_of_life/2_conway_pre_formatted.py
	lessons/thw-python-debugging/conways_game_of_life/3_conway_pre_debugged.py
	lessons/thw-python-debugging/conways_game_of_life/4_conway_pre_profiled.py
	lessons/thw-python-debugging/conways_game_of_life/5_conway_final.py
	lessons/thw-python-debugging/conways_game_of_life/Conway's Game of Life - Debugging Example.ipynb
	lessons/thw-python-debugging/examples/linting_example.py
	lessons/thw-python-debugging/examples/pdb_example.py
	lessons/thw-python-debugging/examples/profiler_example.py
	lessons/thw-python-debugging/examples/segfault.py
	lessons/thw-python-debugging/examples/style_example.py
	lessons/thw-python-debugging/tutorial.md
---
 .../thw-testing}/close_line.py             |   0
 .../thw-testing}/evo_sol1.png              | Bin
 .../testing => lessons/thw-testing}/mean.py |   0
 .../thw-testing}/test_mean.py              |   0
 .../thw-testing}/test_prod.jpg             | Bin
 .../thw-testing/tutorial.md                |  13 +-
 python/testing/cheat-sheet.md              | 167 ------------------
 python/testing/exercises/test.markdown     | 137 --------------
 8 files changed, 6 insertions(+), 311 deletions(-)
 rename {python/testing => lessons/thw-testing}/close_line.py (100%)
 rename {python/testing => lessons/thw-testing}/evo_sol1.png (100%)
 rename {python/testing => lessons/thw-testing}/mean.py (100%)
 rename {python/testing => lessons/thw-testing}/test_mean.py (100%)
 rename {python/testing => lessons/thw-testing}/test_prod.jpg (100%)
 rename python/testing/Readme.md => lessons/thw-testing/tutorial.md (99%)
 delete mode 100644 python/testing/cheat-sheet.md
 delete mode 100644 python/testing/exercises/test.markdown

diff --git a/python/testing/close_line.py b/lessons/thw-testing/close_line.py
similarity index 100%
rename from python/testing/close_line.py
rename to lessons/thw-testing/close_line.py
diff --git a/python/testing/evo_sol1.png b/lessons/thw-testing/evo_sol1.png
similarity index 100%
rename from python/testing/evo_sol1.png
rename to lessons/thw-testing/evo_sol1.png
diff --git a/python/testing/mean.py b/lessons/thw-testing/mean.py
similarity index 100%
rename from python/testing/mean.py
rename to lessons/thw-testing/mean.py
diff --git a/python/testing/test_mean.py b/lessons/thw-testing/test_mean.py
similarity index 100%
rename from python/testing/test_mean.py
rename to lessons/thw-testing/test_mean.py
diff --git a/python/testing/test_prod.jpg b/lessons/thw-testing/test_prod.jpg
similarity index 100%
rename from python/testing/test_prod.jpg
rename to lessons/thw-testing/test_prod.jpg
diff --git a/python/testing/Readme.md b/lessons/thw-testing/tutorial.md
similarity index 99%
rename from python/testing/Readme.md
rename to lessons/thw-testing/tutorial.md
index 8d53b87..26dab3e 100644
--- a/python/testing/Readme.md
+++ b/lessons/thw-testing/tutorial.md
@@ -1,9 +1,9 @@
-# Testing
-
-* * * * *
-
-**Based on materials by Katy Huff, Rachel Slaybaugh, and Anthony
-Scopatz**
+---
+layout: lesson
+root: ../..
+title: Testing Software
+---
+**Based on materials by Katy Huff, Rachel Slaybaugh, and Anthony Scopatz**
 ![image](https://github.com/thehackerwithin/UofCSCBC2012/raw/scopz/5-Testing/test_prod.jpg)
 
 # What is testing?
@@ -538,7 +538,6 @@ file which tests the closest\_data\_to\_line() functions.
 is some sample data to help you get started.
 
 ![image](https://github.com/thehackerwithin/UofCSCBC2012/raw/scopz/5-Testing/evo_sol1.png)
-> -
 
 ```python
 import numpy as np
diff --git a/python/testing/cheat-sheet.md b/python/testing/cheat-sheet.md
deleted file mode 100644
index 36f3557..0000000
--- a/python/testing/cheat-sheet.md
+++ /dev/null
@@ -1,167 +0,0 @@
Python Testing Cheat Sheet
==========================

Why testing?
------------

1. Helps you to think about expected behavior, especially boundary cases.
2. Documents expected behavior.
3. Gives you confidence that recent changes have not broken anything that worked before.
4. Gives you confidence that the code is correct.


Defensive programming
---------------------

Using an assertion to ensure input is acceptable:

    def some_function(x):
        assert x >= 0
        # ... continue, safe in the knowledge that x >= 0

Adding an explanatory message to the assertion:

    assert x >= 0, "Function not defined for negative x."

Alternatively, raise an exception to indicate what the problem is (here a `ValueError`, since the argument has an acceptable type but an unacceptable value):

    def some_function(x):
        if x < 0:
            raise ValueError("Function not defined for negative x.")
        return 0


Unit testing with Nose
----------------------

To run tests, type the following at the shell prompt:

    nosetests

By default, Nose will

* look for test functions whose names start with `test`,
* look for them in files whose names start with `test`,
* look for such files in the current working directory and in subdirectories whose names start with `test`.

There are some additional rules, and you can configure your own, but this should be enough to get started.

### A simple test

    from nose.tools import assert_equal

    from mystatscode import mean

    def test_single_value():
        observed = mean([1.0])
        expected = 1.0
        assert_equal(observed, expected)

### Other assertions

Nose provides a range of assertions that can be used when a test is checking something other than a simple equality, e.g.

    from nose.tools import assert_items_equal

    from mycode import find_factors

    def test_6():
        observed = find_factors(6)
        expected = [2, 3]
        assert_items_equal(observed, expected)  # order of factors is not guaranteed

To see the available assertions, and to get help with any one of them:

    import nose.tools
    dir(nose.tools)                    # list assertions and other classes/functions
    help(nose.tools.assert_set_equal)  # get information about one of them

### Floating point tests

When comparing floating-point numbers for equality, allow some tolerance for small differences due to the way values are represented and rounded.
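For instance, two ways of writing what is mathematically the same value can differ in their last few bits (a quick illustration at the Python prompt; the exact digits shown may vary slightly between Python versions):

    >>> 0.1 + 0.2 == 0.3
    False
    >>> 0.1 + 0.2
    0.30000000000000004

Rather than comparing with `assert_equal`, use `assert_almost_equal`, which allows a small tolerance: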
    from nose.tools import assert_almost_equal

    from mycode import hypotenuse

    def test_hypotenuse_345():
        observed = hypotenuse(3.0, 4.0)
        expected = 5.0
        assert_almost_equal(observed, expected)

### Testing exceptions

Testing that a function raises the appropriate exception when the input is invalid:

    from nose.tools import raises

    from mystatscode import mean

    @raises(TypeError)
    def test_not_a_list():
        observed = mean(1)

### Fixtures

A *fixture* is what a test function uses as input, e.g. values, objects, and arrays.

To set up a fixture once before any tests are run, define a function called `setup` in the same file as the test functions. It can assign values to global variables for use in the test functions (note the `global` statement; without it, the assignment would only create a local variable):

    long_list = None

    def setup():
        global long_list
        long_list = [0]
        # append more values to long_list...

If the global variables assigned in `setup` might be modified by some of the test functions, the set-up step must instead be executed once before each test function is called:

    from nose.tools import assert_equal, with_setup

    from mycode import mean, clear

    long_list = None

    def setup_each():
        global long_list
        long_list = [0]
        # append more values to long_list...

    @with_setup(setup_each)
    def test_mean_long_list():
        observed = mean(long_list)
        expected = 0.0
        assert_equal(observed, expected)

    @with_setup(setup_each)
    def test_clear_long_list():
        clear(long_list)
        assert_equal(len(long_list), 0)


Test-driven development
-----------------------

***Red.*** Write a test function that checks one new piece of functionality you want to add to your code. The new tests have to fail.

***Green.*** Write the minimal code that implements the desired feature, until all tests pass.

***Refactor.*** Improve the code with respect to readability and speed, constantly checking that the tests still pass.

***Commit.*** Commit the working code to version control.

Repeat.


General advice
--------------

* Perfect test-case coverage is impossible.
* Try to test each distinct piece of functionality.
* If you find a bug that no existing test caught, turn it into a new test case.
diff --git a/python/testing/exercises/test.markdown b/python/testing/exercises/test.markdown
deleted file mode 100644
index fb25fe1..0000000
--- a/python/testing/exercises/test.markdown
+++ /dev/null
@@ -1,137 +0,0 @@
The following exercises do not contain solutions. Yet. Instead, we will be asking you to submit your solutions to these exercises, and then we will post solutions at the start of next week. We encourage you to discuss your approaches or solutions on the course forum!

To submit your exercises, please create a `testing` folder in your personal folder in the course repository. Place all of the code and files for these exercises in that folder and be sure to check it in.


## Exercise 1: Mileage

The function `convert_mileage` converts miles per gallon (US style) to liters per 100 km (metric style):

```python
gal_to_litre = 3.78541178
mile_to_km = 1.609344

def convert_mileage(mpg):
    '''Converts miles per gallon to liters per 100 km'''
    litres_per_100_km = 100 / mpg / mile_to_km * gal_to_litre
    return litres_per_100_km
```

Create a subdirectory in your version control directory called `testing`, then copy this function into a file in that directory called `mileage.py`. Add more code to that file to repeatedly ask the user for a mileage in miles per gallon, and to output the mileage in liters per 100 km, until the user enters the string "`q`". You will need to use the `float()` function to convert from string to floating-point number. Use the '`if __name__ == "__main__":`' trick to ensure that the module can be imported without executing your testing code.
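If you have not met this trick before, here is a minimal sketch of one possible shape for `mileage.py` (Python 2, like the rest of this lesson's examples; the prompt wording and loop structure are only one option, not the required solution):

```python
gal_to_litre = 3.78541178
mile_to_km = 1.609344

def convert_mileage(mpg):
    '''Converts miles per gallon to liters per 100 km'''
    # The function exactly as given above.
    litres_per_100_km = 100 / mpg / mile_to_km * gal_to_litre
    return litres_per_100_km

if __name__ == '__main__':
    # This block runs when the file is executed directly
    # ('python mileage.py'), but not when the module is imported,
    # so 'from mileage import convert_mileage' stays side-effect free.
    entry = raw_input('Miles per gallon (or "q" to quit): ')
    while entry != 'q':
        print convert_mileage(float(entry))
        entry = raw_input('Miles per gallon (or "q" to quit): ')
```

With the guard in place, importing the module for testing does not start the interactive loop.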
1. Copy `mileage.py` to create `tryexcept.py`. Add a try/except block to the new program to display a helpful message, instead of crashing, when users enter invalid input (such as the number "0" or the name of their favorite hockey team).

2. Reading the function again, you realize that accepting 0 or negative values makes no sense, and that such input should be reported as an error. Look at the exceptions defined in the `exceptions` module (use the built-in `help(...)` or `dir(...)` functions) and decide which of Python's built-in exceptions is most appropriate to use for invalid input. Create a copy of `tryexcept.py` called `raiser.py` that raises this exception; modify the main body of your program to catch it; and add a comment inside the file explaining why you chose the exception you did. (Note: you have to call this file `raiser.py`, not `raise.py`, because `import raise` is an error. Can you see why?)

3. [According to Google](http://www.google.ca/search?q=20+miles+per+gallon+in+litres+per+100+km&gbv=1), 20 miles per gallon is equivalent to 11.7607292 liters per 100 km. Use these values to write a unit test. Keep in mind that these floating-point values are subject to truncation and rounding errors. Save the test case in a file called `test_mileage.py` and run it using the `nosetests` command. Note: `test_mileage.py` should use `from raiser import convert_mileage` to get the final version of your mileage conversion function.

4. Now add a second test case, for 40 miles per gallon being equivalent to 5.88036458 liters per 100 km, and run the tests again. Unless you have already fixed the error that was present in the initial function, your test should fail. Find and fix the error; submit your new function in a file called `final_mileage.py`.


## Exercise 2: Testing Averages

The results of a set of experiments are stored in a file, where the *i*-th line stores the results of the *i*-th experiment as a comma-separated list of integers. A student is assigned the task of finding the experiment with the smallest average value. She writes the following code:

```python
def avg_line(line):
    values = line.split(',')
    count = 0
    total = 0
    for value in values:
        total += int(value)
        count += 1
    return total / count

def min_avg(file_name):
    contents = open(file_name)
    averages = []
    for (i, line) in enumerate(contents):
        averages.append((avg_line(line), i))
    contents.close()
    averages.sort()
    min_avg, experiment_number = averages[0]
    return experiment_number
```
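Before tackling the numbered parts below, it may help to recall what a Nose test for this code looks like. Here is a sketch of one test for `avg_line` (the data values are made up for illustration, and the import assumes the code ends up in `first_averages.py`, as part 1 below requests):

```python
from nose.tools import assert_equal

from first_averages import avg_line

def test_avg_line_exact_average():
    # (1 + 2 + 3) / 3 is exactly 2, so integer division does not
    # affect this particular check.
    observed = avg_line('1,2,3\n')
    expected = 2
    assert_equal(observed, expected)
```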
1. Refactor `min_avg` so that it can be tested without depending on external files. Submit your code in a file called `first_averages.py`.

2. Write Nose test cases for both functions. Consider what should happen if the file is empty. Submit your tests in a file called `test_first_averages.py`. Note: you may assume for now that all input is well formatted, i.e., you do _not_ have to worry about empty lines, lines containing the names of hockey teams, etc.

3. The given specification is ambiguous: what should the result be if two or more experiments are tied for the minimum average? Copy `first_averages.py` to create a new file `second_averages.py`; modify it to handle this case; add a comment to the top explaining what rule you decided to use; and create a file `test_second_averages.py` that tests your changes.

4. Another student proposed an alternative implementation of the `min_avg` function:

```python
def min_avg(file_name):
    contents = open(file_name).readlines()
    min_avg = avg_line(contents[0])
    min_index = 0
    for (i, line) in enumerate(contents):
        current_avg = avg_line(line)
        if current_avg <= min_avg:
            min_avg = current_avg
            min_index = i
    return min_index
```

This implementation also finds an experiment with the smallest average, but possibly a different one than your function does. Modify your test cases so that both your implementation and this one will pass. (Hint: use the `in` operator.)

5. One way to avoid the ambiguity in this specification is to define a `min_avg_all` function instead, which returns a list of all the experiments with the smallest average, and to let the caller select one. Write tests for the `min_avg_all` function, considering the following situations: an empty file, exactly one experiment with the minimum average, and more than one experiment with the minimum average. Keep in mind that in the last case, implementations could return the list in different orders. Write the tests in the file `test_averages.py`. Use the same data as for the previous tests, if possible, and use variables to avoid code duplication. You do not need to implement the `min_avg_all` function, but your test cases should be comprehensive enough to serve as a specification for it.
-- 
2.26.2