Intermediate Research Software Development

Setting the Scene

Overview

Teaching: 15 min
Exercises: 0 min
Questions
  • What are we teaching in this course?

  • What motivated the selection of topics covered in the course?

Objectives
  • Setting the scene and expectations

  • Making sure everyone has all the necessary software installed

Introduction

So, you have gained basic software development skills either by self-learning or by attending, for example, a novice Software Carpentry course. You have been applying those skills for a while by writing code to help with your work, and you feel comfortable developing code and troubleshooting problems. However, your software has now reached a point where there is too much code to be kept in one script. Perhaps it now involves more researchers (developers) and users, and more collaborative development effort is needed to add new functionality while ensuring previous development efforts remain functional and maintainable.

This course provides the next step in software development - it teaches some intermediate software engineering skills and best practices to help you restructure existing code and design more robust, reusable and maintainable code, automate the process of testing and verifying software correctness and support collaborations with others in a way that mimics a typical software development process within a team.

The course uses a number of different software development tools and techniques interchangeably, as you would in real life. We had to make some choices about topics and tools to teach here, based on established best practices, ease of tool installation for the audience, length of the course and other considerations. Tools used here are not mandated though: alternatives exist and we point some of them out along the way. Over time, you will develop a preference for certain tools and programming languages based on your personal taste or on what is commonly used by your group, collaborators or community. However, the topics covered should give you a solid foundation for working on software development in a team and producing high quality software that is easier to develop and sustain in the future by yourself and others. Skills and tools taught here, while Python-specific, are transferable to other similar tools and programming languages.

The course is organised into the following sections:

Course overview diagram

Section 1: Setting up Software Environment

In the first section we are going to set up our working environment and familiarise ourselves with various tools and techniques for software development in a typical collaborative code development cycle:

Section 2: Verifying Software Correctness at Scale

Once we know our way around different code development tools, techniques and conventions, in this section we learn:

Section 3: Software Development as a Process

In this section, we step away from writing code for a bit to look at software from a higher level as a process of development and its components:

Section 4: Collaborative Software Development for Reuse

Advancing from developing code as an individual, in this section you will start working with your fellow learners on a group project (as you would do when collaborating on a software project in a team), and learn:

Section 5: Managing and Improving Software Over Its Lifetime

Finally, we move beyond just software development to managing a collaborative software project and will look into:

Before We Start

A few notes before we start.

Prerequisite Knowledge

This is an intermediate-level software development course intended for people who have already been developing code in Python (or other languages) and applying it to their own problems after gaining basic software development skills. So, you are expected to have some prerequisite knowledge on the topics covered, as outlined at the beginning of the lesson. Check out this quiz to help you test your prior knowledge and determine if this course is for you.

Setup, Common Issues & Fixes

Have you set up and installed all the tools and accounts required for this course? Check the list of common issues, fixes & tips if you experience any problems running any of the tools you installed - your issue may be solved there.

Compulsory and Optional Exercises

Exercises are a crucial part of this course and the narrative. They are used to reinforce the points taught and give you an opportunity to practice things on your own. Please do not be tempted to skip exercises as that will get your local software project out of sync with the course and break the narrative. Exercises that are clearly marked as “optional” can be skipped without breaking things but we advise you to go through them too, if time allows. All exercises contain solutions but, wherever possible, try and work out a solution on your own.

Outdated Screenshots

Throughout this lesson we will make use of and show content from Graphical User Interface (GUI) tools (PyCharm and GitHub). These are evolving tools and platforms, always adding new features and new visual elements. Screenshots in the lesson may therefore become out of sync, referring to or showing content that no longer exists or differs from what you see on your machine. If during the lesson you find screenshots that no longer match or differ significantly from what you see, please open an issue describing what you see and how it differs from the lesson content. Feel free to add as many screenshots as necessary to clarify the issue.

Key Points

  • This lesson focuses on core, intermediate skills covering the whole software development life-cycle that will be of most use to anyone working collaboratively on code.

  • For code development in teams, you need more than just the right tools and languages - you also need a strategy (best practices) for how you will use these tools as a team.

  • The lesson follows on from the novice Software Carpentry lesson, but this is not a prerequisite for attending as long as you have some basic Python, command line and Git skills and you have been using them for a while to write code to help with your work.


Section 1: Setting Up Environment For Collaborative Code Development

Overview

Teaching: 10 min
Exercises: 0 min
Questions
  • What tools are needed to collaborate on code development effectively?

Objectives
  • Provide an overview of all the different tools that will be used in this course.

The first section of the course is dedicated to setting up your environment for collaborative software development and introducing the project that we will be working on throughout the course. In order to build working (research) software efficiently and to do it in collaboration with others rather than in isolation, you will have to get comfortable with using a number of different tools interchangeably as they’ll make your life a lot easier. There are many options when it comes to deciding which software development tools to use for your daily tasks - we will use a few of them in this course that we believe make a difference. There are sometimes multiple tools for the job - we select one to use but mention alternatives too. As you get more comfortable with different tools and their alternatives, you will select the one that is right for you based on your personal preferences or based on what your collaborators are using.

Tools needed to collaborate on code development effectively

Here is an overview of the tools we will be using.

Setup, Common Issues & Fixes

Have you set up and installed all the tools and accounts required for this course? Check the list of common issues, fixes & tips if you experience any problems running any of the tools you installed - your issue may be solved there.

Command Line & Python Virtual Development Environment

We will use the command line (also known as the command line shell/prompt/console) to run our Python code and interact with the version control tool Git and software sharing platform GitHub. We will also use command line tools venv and pip to set up a Python virtual development environment and isolate our software project from other Python projects we may work on.

Note: some Windows users experience the issue where Python hangs from Git Bash (i.e. typing python causes it to just hang with no error message or output) - see the solution to this issue.

Integrated Development Environment (IDE)

An IDE integrates a number of tools that we need to develop a software project that goes beyond a single script - including a smart code editor, a code compiler/interpreter, a debugger, etc. It will help you write well-formatted and readable code that conforms to code style guides (such as PEP8 for Python) more efficiently by giving relevant and intelligent suggestions for code completion and refactoring. IDEs often integrate command line console and version control tools - we teach them separately in this course as this knowledge can be ported to other programming languages and command line tools you may use in the future (but is applicable to the integrated versions too).

We will use PyCharm in this course - a free, open source IDE.

Git & GitHub

Git is a free and open source distributed version control system designed to save every change made to a (software) project, allowing others to collaborate and contribute. In this course, we use Git to version control our code in conjunction with GitHub for code backup and sharing. GitHub is one of the leading integrated products and social platforms for modern software development, monitoring and management - it will help us with version control, issue management, code review, code testing/Continuous Integration, and collaborative development. An important concept in collaborative development is version control workflows (i.e. how to effectively use version control on a project with others).

Python Coding Style

Most programming languages will have associated standards and conventions for how the source code should be formatted and styled. Although this sounds pedantic, it is important for maintaining the consistency and readability of code across a project. Therefore, one should be aware of these guidelines and adhere to whatever the project you are working on has specified. In Python, we will be looking at a convention called PEP8.
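For illustration, here is a tiny standalone snippet (not part of our software project) written to follow some common PEP8 rules - four-space indentation, snake_case names, spaces around operators and two blank lines before top-level definitions:

"""A tiny, standalone example of PEP8-compliant formatting."""


def fahr_to_celsius(fahrenheit):
    """Convert a temperature from Fahrenheit to Celsius."""
    # Four-space indentation, snake_case names and spaces around operators.
    return (fahrenheit - 32) * 5 / 9


freezing_point = fahr_to_celsius(32)
print(freezing_point)  # prints 0.0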

Let’s get started with setting up our software development environment!

Key Points

  • In order to develop (write, test, debug, backup) code efficiently, you need to use a number of different tools.

  • When there is a choice of tools for a task you will have to decide which tool is right for you, which may be a matter of personal preference or what the team or community you belong to is using.


Introduction to Our Software Project

Overview

Teaching: 20 min
Exercises: 10 min
Questions
  • What is the design architecture of our example software project?

  • Why is splitting code into smaller functional units (modules) good when designing software?

Objectives
  • Use Git to obtain a working copy of our software project from GitHub.

  • Inspect the structure and architecture of our software project.

  • Understand Model-View-Controller (MVC) architecture in software design and its use in our project.

Patient Inflammation Study Project

So, you have joined a software development team that has been working on the patient inflammation study project developed in Python and stored on GitHub. The project studies the effect of a new treatment for arthritis by analysing the inflammation levels in patients who have been given this treatment. It reuses the inflammation datasets from the Software Carpentry Python novice lesson.

Snapshot of the inflammation dataset

Inflammation study pipeline from the Software Carpentry Python novice lesson

What Does Patient Inflammation Data Contain?

Each dataset records inflammation measurements from a separate clinical trial of the drug, and each dataset contains information for 60 patients, who had their inflammation levels recorded for 40 days whilst participating in the trial (a snapshot of one of the data files is shown in diagram above).

Each of the data files uses the popular comma-separated (CSV) format to represent the data, where:

  • Each row holds inflammation measurements for a single patient,
  • Each column represents a successive day in the trial,
  • Each cell represents an inflammation reading on a given day for a patient (in some arbitrary units of inflammation measurement).
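For example, one of these files can be read into a two-dimensional NumPy array - a minimal sketch, assuming the numpy package is available (we will install it later in the course):

import numpy as np

# Rows correspond to patients, columns to successive days of the trial.
data = np.loadtxt('data/inflammation-01.csv', delimiter=',')

print(data.shape)   # expected (60, 40): 60 patients over 40 days
print(data[0, :5])  # first patient's readings for the first five days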

The project is not finished and contains some errors. You will be working on your own and in collaboration with others to fix and build on top of the existing code during the course.

Downloading Our Software Project

To start working on the project, you will first create a fork of the software project repository from GitHub within your own GitHub account and then obtain a local copy of that project (from your GitHub) on your machine.

  1. Make sure you have a GitHub account and that you have set up your SSH key pair for authentication with GitHub, as explained in Setup.
  2. Log into your GitHub account.
  3. Go to the software project repository in GitHub.

    Software project fork repository in GitHub

  4. Click the Fork button towards the top right of the repository’s GitHub page to create a fork of the repository under your GitHub account (you will need to be signed into GitHub for the Fork button to work). Note that each participant is creating their own fork to work on. Also, we are not copying from a template but creating a fork (remember you can have only one fork but can have multiple copies of a repository in GitHub).
  5. Make sure to select your personal account and set the name of the project to python-intermediate-inflammation (you can call it anything you like, but it may be easier for future group exercises if everyone uses the same name). Ensure that you untick the Copy the main branch only checkbox. This will guarantee we get some other branches needed for later exercises, but for the moment you can ignore them.

    Making a fork of the software project repository in GitHub

  6. Click the Create fork button and wait for GitHub to import the copy of the repository under your account.
  7. Locate the forked repository under your own GitHub account. You should be taken there automatically after confirming the fork operation, but if not, you can click your username top left to be taken to your user page, and then select the Repositories tab, where you can search for this new fork.

    View of your own fork of the software repository in GitHub

Exercise: Obtain the Software Project Locally

Using the command line, clone the copied repository from your GitHub account into the home directory on your computer using SSH. Which command(s) would you use to get a detailed list of contents of the directory you have just cloned?

Solution

  1. Find the SSH URL of the software project repository to clone from your GitHub account. Make sure you do not clone the original repository but rather your own fork, as you should be able to push commits to it later on. Also make sure you select the SSH tab and not the HTTPS one - you’ll be able to clone with HTTPS, but not to send your changes back to GitHub!

URL to clone the repository in GitHub

  2. Make sure you are located in your home directory in the command line with:
     $ cd ~
    
  3. From your home directory in the command line, do:
     $ git clone git@github.com:<YOUR_GITHUB_USERNAME>/python-intermediate-inflammation.git
    

    Make sure you are cloning your fork of the software project and not the original repository.

  4. Navigate into the cloned repository folder in your command line with:
     $ cd python-intermediate-inflammation
    

    Note: If you have accidentally copied the HTTPS URL of your repository instead of the SSH one, you can easily fix that from your project folder in the command line with:

     $ git remote set-url origin git@github.com:<YOUR_GITHUB_USERNAME>/python-intermediate-inflammation.git
    

Our Software Project Structure

Let’s inspect the content of the software project from the command line. From the root directory of the project, you can use the command ls -l to get a more detailed list of the contents. You should see something similar to the following.

$ cd ~/python-intermediate-inflammation
$ ls -l
total 24
-rw-r--r--   1 carpentry  users  1055 20 Apr 15:41 README.md
drwxr-xr-x  18 carpentry  users   576 20 Apr 15:41 data
drwxr-xr-x   5 carpentry  users   160 20 Apr 15:41 inflammation
-rw-r--r--   1 carpentry  users  1122 20 Apr 15:41 inflammation-analysis.py
drwxr-xr-x   4 carpentry  users   128 20 Apr 15:41 tests

As can be seen from the above, our software project contains a README file (which typically describes the project, its usage, installation, authors and how to contribute), a Python script inflammation-analysis.py, and three directories - inflammation, data and tests.

The Python script inflammation-analysis.py provides the main entry point into the application and, on closer inspection, we can see that the inflammation directory contains two more Python scripts - views.py and models.py. We will have a more detailed look into these shortly.

$ ls -l inflammation
total 24
-rw-r--r--  1 alex  staff   71 29 Jun 09:59 __init__.py
-rw-r--r--  1 alex  staff  838 29 Jun 09:59 models.py
-rw-r--r--  1 alex  staff  649 25 Jun 13:13 views.py

Directory data contains several files with patients’ daily inflammation information (along with some other files):

$ ls -l data
total 264
-rw-r--r--  1 alex  staff   5365 25 Jun 13:13 inflammation-01.csv
-rw-r--r--  1 alex  staff   5314 25 Jun 13:13 inflammation-02.csv
-rw-r--r--  1 alex  staff   5127 25 Jun 13:13 inflammation-03.csv
-rw-r--r--  1 alex  staff   5367 25 Jun 13:13 inflammation-04.csv
-rw-r--r--  1 alex  staff   5345 25 Jun 13:13 inflammation-05.csv
-rw-r--r--  1 alex  staff   5330 25 Jun 13:13 inflammation-06.csv
-rw-r--r--  1 alex  staff   5342 25 Jun 13:13 inflammation-07.csv
-rw-r--r--  1 alex  staff   5127 25 Jun 13:13 inflammation-08.csv
-rw-r--r--  1 alex  staff   5327 25 Jun 13:13 inflammation-09.csv
-rw-r--r--  1 alex  staff   5342 25 Jun 13:13 inflammation-10.csv
-rw-r--r--  1 alex  staff   5127 25 Jun 13:13 inflammation-11.csv
-rw-r--r--  1 alex  staff   5340 25 Jun 13:13 inflammation-12.csv
-rw-r--r--  1 alex  staff  22554 25 Jun 13:13 python-novice-inflammation-data.zip
-rw-r--r--  1 alex  staff     12 25 Jun 13:13 small-01.csv
-rw-r--r--  1 alex  staff     15 25 Jun 13:13 small-02.csv
-rw-r--r--  1 alex  staff     12 25 Jun 13:13 small-03.csv

As previously mentioned, each of the inflammation data files contains separate trial data for 60 patients over 40 days.

Exercise: Have a Peek at the Data

Which command(s) would you use to list the contents or the first few lines of the data/inflammation-01.csv file?

Solution

  1. To list the entire content of a file from the project root do: cat data/inflammation-01.csv.
  2. To list the first 5 lines of a file from the project root do: head -n 5 data/inflammation-01.csv.
0,0,1,3,2,3,6,4,5,7,2,4,11,11,3,8,8,16,5,13,16,5,8,8,6,9,10,10,9,3,3,5,3,5,4,5,3,3,0,1
0,1,1,2,2,5,1,7,4,2,5,5,4,6,6,4,16,11,14,16,14,14,8,17,4,14,13,7,6,3,7,7,5,6,3,4,2,2,1,1
0,1,1,1,4,1,6,4,6,3,6,5,6,4,14,13,13,9,12,19,9,10,15,10,9,10,10,7,5,6,8,6,6,4,3,5,2,1,1,1
0,0,0,1,4,5,6,3,8,7,9,10,8,6,5,12,15,5,10,5,8,13,18,17,14,9,13,4,10,11,10,8,8,6,5,5,2,0,2,0
0,0,1,0,3,2,5,4,8,2,9,3,3,10,12,9,14,11,13,8,6,18,11,9,13,11,8,5,5,2,8,5,3,5,4,1,3,1,1,0

Directory tests contains several tests that have been implemented already. We will be adding more tests during the course as our code grows.

An important thing to note here is that the structure of the project is not arbitrary. One of the big differences between novice and intermediate software development is planning the structure of your code. This structure includes software components and behavioural interactions between them (including how these components are laid out in a directory and file structure). A novice will often make up the structure of their code as they go along. However, for more advanced software development, we need to plan this structure - called a software architecture - beforehand.

Let’s have a more detailed look into what a software architecture is and which architecture is used by our software project before we start adding more code to it.

Software Architecture

A software architecture is the fundamental structure of a software system that is decided at the beginning of project development based on its requirements and cannot be changed that easily once implemented. It refers to a “bigger picture” of a software system that describes high-level components (modules) of the system and how they interact.

In software design and development, large systems or programs are often decomposed into a set of smaller modules each with a subset of functionality. Typical examples of modules in programming are software libraries; some software libraries, such as numpy and matplotlib in Python, are bigger modules that contain several smaller sub-modules. Another example of modules are classes in object-oriented programming languages.
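For instance, both numpy and matplotlib expose smaller sub-modules that can be imported and used on their own:

import numpy as np                    # a large library (module)
from numpy import linalg              # one of numpy's sub-modules
from matplotlib import pyplot as plt  # matplotlib's plotting sub-module

# Each (sub-)module groups related functionality behind a single name.
identity = np.eye(3)
print(linalg.det(identity))      # 1.0 - linear algebra routines live in numpy.linalg
plt.plot([1, 2, 3], [1, 4, 9])   # plotting routines live in matplotlib.pyplot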

Programming Modules and Interfaces

Although modules are, to a large extent, self-contained and independent elements (they can still depend on other modules), there are well-defined ways in which they interact with one another. These rules of interaction are called programming interfaces - they define how other modules (clients) can use a particular module. Typically, an interface to a module includes rules on how the module takes input from and gives output back to its clients. A client can be a human, in which case we also call these user interfaces. Even smaller functional units such as functions/methods have clearly defined interfaces - a function/method's definition (also known as its signature) states what parameters it can take as input and what it returns as output.
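For example, the signature and docstring below fully describe a small function's interface - what it takes as input and what it returns - without saying anything about how the result is computed (an illustrative function, not necessarily the one used in our project):

import numpy as np


def daily_mean(data):
    """Calculate the mean inflammation for each day across all patients.

    :param data: 2D array with one row per patient and one column per day
    :return: 1D array of per-day mean values
    """
    # Clients only need the interface above; the implementation could change
    # without affecting them.
    return np.mean(data, axis=0)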

There are various software architectures that define different ways of dividing code into smaller modules with well-defined roles, for example:

Model-View-Controller (MVC) Architecture

MVC architecture divides the related program logic into three interconnected modules:

Model represents the data used by a program and also contains operations/rules for manipulating and changing the data in the model. This may be a database, a file, a single data object or a series of objects - for example a table representing patients’ data.

View is the means of displaying data to users/clients within an application (i.e. provides visualisation of the state of the model). For example, displaying a window with input fields and buttons (Graphical User Interface, GUI) or textual options within a command line (Command Line Interface, CLI) are examples of Views. They include anything that the user can see from the application. While building GUIs is not the topic of this course, we will cover building CLIs in Python in later episodes.

Controller manipulates both the Model and the View. It accepts input from the View and performs the corresponding action on the Model (changing the state of the model) and then updates the View accordingly. For example, on user request, Controller updates a picture on a user’s GitHub profile and then modifies the View by displaying the updated profile back to the user.
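To make these roles concrete, here is a deliberately tiny, self-contained sketch in Python (illustrative only, not our project's code):

# Minimal MVC sketch (illustrative only).

# Model: holds the data and the operations for manipulating it.
patients = {'alice': [0, 1, 3, 2], 'bob': [0, 2, 5, 1]}

def average_inflammation(name):
    readings = patients[name]
    return sum(readings) / len(readings)

# View: presents data to the user (here, a simple textual display).
def display_average(name, value):
    print(f'{name}: average inflammation = {value:.2f}')

# Controller: accepts the user's request, queries the Model, updates the View.
def show_patient(name):
    display_average(name, average_inflammation(name))

show_patient('alice')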

MVC Examples

MVC architecture can be applied in scientific applications in the following manner. Model comprises those parts of the application that deal with some type of scientific processing or manipulation of the data, e.g. numerical algorithm, simulation, DNA. View is a visualisation, or format, of the output, e.g. graphical plot, diagram, chart, data table, file. Controller is the part that ties the scientific processing and output parts together, mediating input and passing it to the model or view, e.g. command line options, mouse clicks, input files. For example, the diagram below depicts the use of MVC architecture for the DNA Guide Graphical User Interface application.

MVC example of a DNA Guide Graphical User Interface application

Exercise: MVC Application Examples From your Work

Think of some other examples from your work or life where MVC architecture may be suitable or have a discussion with your fellow learners.

Solution

MVC architecture is a popular choice when designing web and mobile applications. Users interact with a web/mobile application by sending various requests to it. Forms to collect users' inputs/requests, together with the information returned and displayed to the user as a result, represent the View. Requests are processed by the Controller, which interacts with the Model to retrieve or update the underlying data. For example, a user may request to view their profile. The Controller retrieves the account information for the user from the Model and passes it to the View for rendering. The user may further interact with the application by asking it to update their personal information. The Controller verifies the correctness of the information (e.g. the password satisfies certain criteria, postal address and phone number are in the correct format, etc.) and passes it to the Model for permanent storage. The View is then updated accordingly and the user sees their updated profile details.

Note that not everything fits neatly into the MVC architecture, but it is still good to think about how things could be split into smaller units. For a few more examples, have a look at this short article on MVC from Codecademy.

Separation of Concerns

Separation of concerns is important when designing software architectures in order to reduce the code’s complexity. Note, however, there are limits to everything - and MVC architecture is no exception. Controller often transcends into Model and View and a clear separation is sometimes difficult to maintain. For example, the Command Line Interface provides both the View (what user sees and how they interact with the command line) and the Controller (invoking of a command) aspects of a CLI application. In Web applications, Controller often manipulates the data (received from the Model) before displaying it to the user or passing it from the user to the Model.

Our Project’s MVC Architecture

Our software project uses the MVC architecture. The file inflammation-analysis.py is the Controller module that performs basic statistical analysis over patient data and provides the main entry point into the application. The View and Model modules are contained in the files views.py and models.py, respectively, and are conveniently named. Data underlying the Model is contained within the directory data - as we have seen already it contains several files with patients’ daily inflammation information.
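As a rough, hypothetical sketch of how such a Controller could tie the Model and View together (the function names load_csv, daily_mean and visualize are assumptions made for illustration; the project's actual code may differ):

# Hypothetical controller logic (function names are assumed, not taken from the lesson text).
from inflammation import models, views


def analyse(filename):
    data = models.load_csv(filename)                    # Model: read the patient data
    results = {'daily mean': models.daily_mean(data)}   # Model: compute statistics
    views.visualize(results)                            # View: display the results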

We will revisit the software architecture and MVC topics once again in later episodes when we talk in more detail about software’s business/user/solution requirements and software design. We now proceed to set up our virtual development environment and start working with the code using a more convenient graphical tool - IDE PyCharm.

Key Points

  • Programming interfaces define how individual modules within a software application interact among themselves or how the application itself interacts with its users.

  • MVC is a software design architecture which divides the application into three interconnected modules: Model (data), View (user interface), and Controller (input/output and data manipulation).

  • The software project we use throughout this course is an example of an MVC application that manipulates patients’ inflammation data and performs basic statistical analysis using Python.


Virtual Environments For Software Development

Overview

Teaching: 30 min
Exercises: 0 min
Questions
  • What are virtual environments in software development and why should you use them?

  • How can we manage Python virtual environments and external (third-party) libraries?

Objectives
  • Set up a Python virtual environment for our software project using venv and pip.

  • Run our software from the command line.

Introduction

So far we have cloned our software project from GitHub and inspected its contents and architecture a bit. We now want to run our code to see what it does - let's do that from the command line. For most of the course we will run our code and interact with Git from the command line. While we will develop and debug our code using the PyCharm IDE, and it is possible to use Git from PyCharm too, typing commands in the command line allows you to familiarise yourself with it and learn it well. A bonus is that this knowledge is transferable to running code in other programming languages and is independent of any IDE you may use in the future.

If you have a little peek into our code (e.g. run cat inflammation/views.py from the project root), you will see the following two lines somewhere at the top.

from matplotlib import pyplot as plt
import numpy as np

This means that our code requires two external libraries (also called third-party packages or dependencies) - numpy and matplotlib. Python applications often use external libraries that don't come as part of the standard Python distribution. This means that you will have to use a package manager tool to install them on your system. Applications will also sometimes need a specific version of an external library (e.g. because they were written to work with a feature, class or function that may have been updated in more recent versions), or a specific version of the Python interpreter. This means that each Python application you work with may require a different setup and a set of dependencies, so it is useful to be able to keep these configurations separate to avoid confusion between projects. The solution to this problem is to create a self-contained virtual environment per project, which contains a particular version of Python plus a number of additional external libraries.

Virtual environments are not just a feature of Python - most modern programming languages use them to isolate libraries for a specific project and make it easier to develop, run, test and share code with others. Even languages that don’t explicitly have virtual environments have other mechanisms that promote per-project library collections. In this episode, we learn how to set up a virtual environment to develop our code and manage our external dependencies.

Virtual Environments

So what exactly are virtual environments, and why use them?

A Python virtual environment helps us create an isolated working copy of a software project that uses a specific version of Python interpreter together with specific versions of a number of external libraries installed into that virtual environment. Python virtual environments are implemented as directories with a particular structure within software projects, containing links to specified dependencies allowing isolation from other software projects on your machine that may require different versions of Python or external libraries.

As more external libraries are added to your Python project over time, you can add them to its specific virtual environment and avoid a great deal of confusion by having separate (smaller) virtual environments for each project rather than one huge global environment with potential package version clashes. Another big motivator for using virtual environments is that they make sharing your code with others much easier (as we will see shortly). Here are some typical scenarios where the use of virtual environments is highly recommended (almost unavoidable):

Most of the time, you do not have to worry too much about the specific versions of the external libraries your project depends on. Virtual environments enable you to always use the latest available version without specifying it explicitly. They also enable you to use a specific older version of a package for your project, should you need to.

A Specific Python or Package Version is Only Ever Installed Once

Note that you will not have separate Python or package installations for each of your projects - they will only ever be installed once on your system but will be referenced from different virtual environments.

Managing Python Virtual Environments

There are several commonly used command line tools for managing Python virtual environments:

While there are pros and cons for using each of the above, all will do the job of managing Python virtual environments for you and it may be a matter of personal preference which one you go for. In this course, we will use venv to create and manage our virtual environment (which is the preferred way for Python 3.3+). The upside is that venv virtual environments created from the command line are also recognised and picked up automatically by PyCharm IDE, as we will see in the next episode.

Managing External Packages

Part of managing your (virtual) working environment involves installing, updating and removing external packages on your system. The Python package manager tool pip is most commonly used for this - it interacts and obtains the packages from the central repository called Python Package Index (PyPI). pip can now be used with all Python distributions (including Anaconda).

A Note on Anaconda and conda

Anaconda is an open source Python distribution commonly used for scientific programming - it conveniently installs Python, the package and environment manager conda, and a number of commonly used scientific computing packages, so you do not have to obtain them separately. conda is an independent command line tool (available separately from the Anaconda distribution too) with dual functionality: (1) it is a package manager that helps you find Python packages from remote package repositories and install them on your system, and (2) it is also a virtual environment manager. So, you can use conda for both tasks instead of using venv and pip.

Many Tools for the Job

Installing and managing Python distributions, external libraries and virtual environments is, well, complex. There is an abundance of tools for each task, each with its advantages and disadvantages, and there are different ways to achieve the same effect (and even different ways to install the same tool!). Note that each Python distribution comes with its own version of pip - and if you have several Python versions installed you have to be extra careful to use the correct pip to manage external packages for that Python version.

venv and pip are considered the de facto standards for virtual environment and package management for Python 3. However, the advantages of using Anaconda and conda are that you get (most of the) packages needed for scientific code development included with the distribution. If you are only collaborating with others who are also using Anaconda, you may find that conda satisfies all your needs. It is good, however, to be aware of all these tools, and use them accordingly. As you become more familiar with them you will realise that equivalent tools work in a similar way even though the command syntax may be different (and that there are equivalent tools for other programming languages too to which your knowledge can be ported).

Python environment hell XKCD comic

Python Environment Hell
From XKCD (Creative Commons Attribution-NonCommercial 2.5 License)

Let us have a look at how we can create and manage virtual environments from the command line using venv and manage packages using pip.

Creating Virtual Environments Using venv

Creating a virtual environment with venv is done by executing the following command:

$ python3 -m venv /path/to/new/virtual/environment

where /path/to/new/virtual/environment is a path to a directory where you want to place it - conventionally within your software project so they are co-located. This will create the target directory for the virtual environment (and any parent directories that don’t exist already).

For our project let’s create a virtual environment called “venv”. First, ensure you are within the project root directory, then:

$ python3 -m venv venv

If you list the contents of the newly created directory “venv”, on a Mac or Linux system (slightly different on Windows as explained below) you should see something like:

$ ls -l venv
total 8
drwxr-xr-x  12 alex  staff  384  5 Oct 11:47 bin
drwxr-xr-x   2 alex  staff   64  5 Oct 11:47 include
drwxr-xr-x   3 alex  staff   96  5 Oct 11:47 lib
-rw-r--r--   1 alex  staff   90  5 Oct 11:47 pyvenv.cfg

So, running the python3 -m venv venv command created the target directory called “venv” containing:

Naming Virtual Environments

What is a good name to use for a virtual environment? Using "venv" or ".venv" as the name for an environment and storing it within the project's directory seems to be the recommended way - this way, when you come across such a subdirectory within a software project, you know by convention that it contains its virtual environment details. A slight downside is that all the different virtual environments on your machine then share the same name, and the current one is determined by the context of the path you are currently located in. A (non-conventional) alternative is to use your project name for the name of the virtual environment, with the downside that there is nothing to indicate that such a directory contains a virtual environment. In our case, we have settled on the name "venv" instead of ".venv" since it is not a hidden directory and we want it to be displayed by the command line when listing directory contents (the "." in its name would, by convention, make it hidden). In the future, you will decide what naming convention works best for you. Here are some references for each of the naming conventions:

Once you’ve created a virtual environment, you will need to activate it.

On Mac or Linux, it is done as:

$ source venv/bin/activate
(venv) $

On Windows, recall that we have a Scripts directory instead of bin, and activating a virtual environment is done as:

$ source venv/Scripts/activate
(venv) $

Activating the virtual environment will change your command line’s prompt to show what virtual environment you are currently using (indicated by its name in round brackets at the start of the prompt), and modify the environment so that running Python will get you the particular version of Python configured in your virtual environment.

You can verify you are using your virtual environment’s version of Python by checking the path using the command which:

(venv) $ which python3
/home/alex/python-intermediate-inflammation/venv/bin/python3

When you’re done working on your project, you can exit the environment with:

(venv) $ deactivate

If you’ve just done the deactivate, ensure you reactivate the environment ready for the next part:

$ source venv/bin/activate
(venv) $

Python Within A Virtual Environment

Within a virtual environment, commands python and pip will refer to the version of Python you created the environment with. If you create a virtual environment with python3 -m venv venv, python will refer to python3 and pip will refer to pip3.

On some machines with Python 2 installed, the python command may refer to the copy of Python 2 installed outside of the virtual environment instead, which can cause confusion. You can always check which version of Python you are using in your virtual environment with the command which python to be absolutely sure. We continue using python3 and pip3 in this material to avoid confusion for those users, but the commands python and pip may work for you as expected.
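A quick way to double-check from within Python itself is to compare sys.prefix (the environment the interpreter is currently running from) with sys.base_prefix (the installation it was created from) - inside an active virtual environment the two differ:

import sys

# Inside an activated venv, sys.prefix points at the venv directory,
# while sys.base_prefix points at the Python installation it was created from.
print(sys.prefix)
print(sys.base_prefix)
print('In a virtual environment:', sys.prefix != sys.base_prefix)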

Note that, since our software project is being tracked by Git, the newly created virtual environment will show up in version control - we will see how to handle it using Git in one of the subsequent episodes.

Installing External Packages Using pip

We noticed earlier that our code depends on two external packages/libraries - numpy and matplotlib. In order for the code to run on your machine, you need to install these two dependencies into your virtual environment.

To install the latest version of a package with pip you use pip’s install command and specify the package’s name, e.g.:

(venv) $ pip3 install numpy
(venv) $ pip3 install matplotlib

or like this to install multiple packages at once for short:

(venv) $ pip3 install numpy matplotlib

How About python3 -m pip install?

Why are we not using pip as an argument to the python3 command, in the same way we did with venv (i.e. python3 -m venv)? python3 -m pip install should be used according to the official pip documentation; other official documentation still seems to have a mixture of usages. Core Python developer Brett Cannon offers a more detailed explanation of edge cases when the two options may produce different results and recommends python3 -m pip install. We kept the old-style command (pip3 install) as it seems more prevalent among developers at the moment - but it may be a convention that will soon change and it is certainly something you should consider.

If you run the pip3 install command on a package that is already installed, pip will notice this and do nothing.

To install a specific version of a Python package give the package name followed by == and the version number, e.g. pip3 install numpy==1.21.1.

To specify a minimum version of a Python package, you can do pip3 install 'numpy>=1.20' (quoting the requirement stops the shell from interpreting >= as output redirection).

To upgrade a package to the latest version, e.g. pip3 install --upgrade numpy.

To display information about a particular installed package do:

(venv) $ pip3 show numpy
Name: numpy
Version: 1.21.2
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://www.numpy.org
Author: Travis E. Oliphant et al.
Author-email: None
License: BSD
Location: /Users/alex/work/SSI/Carpentries/python-intermediate-inflammation/inflammation/lib/python3.9/site-packages
Requires:
Required-by: matplotlib

To list all packages installed with pip (in your current virtual environment):

(venv) $ pip3 list
Package         Version
--------------- -------
cycler          0.11.0
fonttools       4.28.1
kiwisolver      1.3.2
matplotlib      3.5.0
numpy           1.21.4
packaging       21.2
Pillow          8.4.0
pip             21.1.3
pyparsing       2.4.7
python-dateutil 2.8.2
setuptools      57.0.0
setuptools-scm  6.3.2
six             1.16.0
tomli           1.2.2

To uninstall a package installed in the virtual environment do: pip3 uninstall package-name. You can also supply a list of packages to uninstall at the same time.

Exporting/Importing Virtual Environments Using pip

You are collaborating on a project with a team so, naturally, you will want to share your environment with your collaborators so they can easily ‘clone’ your software project with all of its dependencies and everyone can replicate equivalent virtual environments on their machines. pip has a handy way of exporting, saving and sharing virtual environments.

To export your active environment, use the pip3 freeze command to produce a list of packages installed in the virtual environment. A common convention is to put this list in a requirements.txt file:

(venv) $ pip3 freeze > requirements.txt
(venv) $ cat requirements.txt
cycler==0.11.0
fonttools==4.28.1
kiwisolver==1.3.2
matplotlib==3.5.0
numpy==1.21.4
packaging==21.2
Pillow==8.4.0
pyparsing==2.4.7
python-dateutil==2.8.2
setuptools-scm==6.3.2
six==1.16.0
tomli==1.2.2

The first of the above commands will create a requirements.txt file in your current directory. Yours may look a little different, depending on the version of the packages you have installed, as well as any differences in the packages that they themselves use.

The requirements.txt file can then be committed to a version control system (we will see how to do this using Git in one of the following episodes) and get shipped as part of your software and shared with collaborators and/or users. They can then replicate your environment and install all the necessary packages from the project root as follows:

(venv) $ pip3 install -r requirements.txt

As your project grows, you may need to update your environment for a variety of reasons. For example, one of your project's dependencies has just released a new version (dependency version number update), you need an additional package for data analysis (adding a new dependency), or you have found a better package and no longer need the older one (adding a new and removing an old dependency). What you need to do in this case (apart from installing the new packages and removing the ones that are no longer needed from your virtual environment) is update the contents of the requirements.txt file accordingly by re-issuing the pip3 freeze command and propagate the updated requirements.txt file to your collaborators via your code sharing platform (e.g. GitHub).

Official Documentation

For a full list of options and commands, consult the official venv documentation and the Installing Python Modules with pip guide. Also check out the guide “Installing packages using pip and virtual environments”.

Running Python Scripts From Command Line

Congratulations! Your environment is now activated and set up to run our inflammation-analysis.py script from the command line.

You should already be located in the root of the python-intermediate-inflammation directory (if not, please navigate to it from the command line now). To run the script, type the following command:

(venv) $ python3 inflammation-analysis.py
usage: inflammation-analysis.py [-h] infiles [infiles ...]
inflammation-analysis.py: error: the following arguments are required: infiles

In the above command, we tell the command line two things:

  1. to find a Python interpreter (in this case, the one that was configured via the virtual environment), and
  2. to use it to run our script inflammation-analysis.py, which resides in the current directory.

As we can see, the Python interpreter ran our script, which reported an error - inflammation-analysis.py: error: the following arguments are required: infiles. It looks like the script expects a list of input files to process, so this is expected behaviour since we did not supply any. We will fix this error in a moment.
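The usage message suggests the script handles its command line arguments with Python's standard argparse module, roughly along these lines (a simplified sketch - the project's actual code may differ in detail):

import argparse

# A simplified sketch of argument handling that would produce a usage
# message like the one above.
parser = argparse.ArgumentParser(
    description='A basic patient inflammation data management system')
parser.add_argument(
    'infiles',
    nargs='+',
    help='Input CSV(s) containing inflammation series for each patient')
args = parser.parse_args()
print(args.infiles)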

Key Points

  • Virtual environments keep Python versions and dependencies required by different projects separate.

  • A virtual environment is itself a directory structure.

  • Use venv to create and manage Python virtual environments.

  • Use pip to install and manage Python external (third-party) libraries.

  • pip allows you to declare all dependencies for a project in a separate file (by convention called requirements.txt) which can be shared with collaborators/users and used to replicate a virtual environment.

  • Use pip3 freeze > requirements.txt to take a snapshot of your project’s dependencies.

  • Use pip3 install -r requirements.txt to replicate someone else’s virtual environment on your machine from the requirements.txt file.


Integrated Software Development Environments

Overview

Teaching: 25 min
Exercises: 15 min
Questions
  • What are Integrated Development Environments (IDEs)?

  • What are the advantages of using IDEs for software development?

Objectives
  • Set up a (virtual) development environment in PyCharm

  • Use PyCharm to run a Python script

Introduction

As we have seen in the previous episode - even a simple software project is typically split into smaller functional units and modules, which are kept in separate files and subdirectories. As your code starts to grow and becomes more complex, it will involve many different files and various external libraries. You will need an application to help you manage all the complexities of, and provide you with some useful (visual) facilities for, the software development process. Such clever and useful graphical software development applications are called Integrated Development Environments (IDEs).

Integrated Development Environments

An IDE normally consists of at least a source code editor, build automation tools and a debugger. The boundaries between modern IDEs and other aspects of the broader software development process are often blurred. Nowadays IDEs also offer version control support, tools to construct graphical user interfaces (GUI) and web browser integration for web app development, source code inspection for dependencies and many other useful functionalities. The following is a list of the most commonly seen IDE features:

IDEs are extremely useful and modern software development would be very hard without them. There are a number of IDEs available for Python development; a good overview is available from the Python Project Wiki. In addition to IDEs, there are also a number of code editors that have Python support. Code editors can be as simple as a text editor with syntax highlighting and code formatting capabilities (e.g. GNU EMACS, Vi/Vim). Most good code editors can also execute code and control a debugger, and some can also interact with a version control system. Compared to an IDE, a good dedicated code editor is usually smaller and quicker, but often less feature-rich. You will have to decide which one is the best for you - in this course we will learn how to use PyCharm, a free, open source Python IDE. Some popular alternatives include free and open source IDE Spyder and Microsoft’s free Visual Studio Code.

Using the PyCharm IDE

Let’s open our project in PyCharm now and familiarise ourselves with some commonly used features.

Opening a Software Project

If you don’t have PyCharm running yet, start it up now. You can skip the initial configuration steps which just go through selecting a theme and other aspects. You should be presented with a dialog box that asks you what you want to do, e.g. Create New Project, Open, or Check out from Version Control.

Select Open and find the software project directory python-intermediate-inflammation you cloned earlier. This directory is now the current working directory for PyCharm, so when we run scripts from PyCharm, this is the directory they will run from.

PyCharm will show you a ‘Tip of the Day’ window which you can safely ignore and close for now. You may also get a warning ‘No Python interpreter configured for the project’ - we will deal with this shortly after we familiarise ourselves with the PyCharm environment. You will notice the IDE shows you a project/file navigator window on the left hand side, to traverse and select the files (and any subdirectories) within the working directory, and an editor window on the right. At the bottom, you would typically have a panel for version control, terminal (the command line within PyCharm) and a TODO list.

View of an opened project in PyCharm

Select the inflammation-analysis.py file in the project navigator on the left so that its contents are displayed in the editor window. You may notice a warning about the missing Python interpreter at the top of the editor panel showing the inflammation-analysis.py file - this is one of the first things you will have to configure for your project before you can do any work.

Missing Python Interpreter Warning in PyCharm

You may take a shortcut and click on one of the options offered above, but we want to take you through the whole process of setting up your environment in PyCharm as this is important conceptually.

Configuring a Virtual Environment in PyCharm

Before you can run the code from PyCharm, you need to explicitly specify the path to the Python interpreter on your system. The same goes for any dependencies your code may have - you need to tell PyCharm where to find them - much like we did from the command line in the previous episode. Luckily for us, we have already set up a virtual environment for our project from the command line and PyCharm is clever enough to understand it.

Adding a Python Interpreter

  1. Select either PyCharm > Preferences (Mac) or File > Settings (Linux, Windows).
  2. In the preferences window that appears, select Project: python-intermediate-inflammation > Python Interpreter from the left. You’ll see a number of Python packages displayed as a list, and importantly above that, the current Python interpreter that is being used. These may be blank or set to <No interpreter>, or possibly the default version of Python installed on your system, e.g. Python 2.7 /usr/bin/python2.7, which we do not want to use in this instance.
  3. Select the cog-like button in the top right, then Add Local... (or Add... depending on your PyCharm version). An Add Python Interpreter window will appear.
  4. Select Virtualenv Environment from the list on the left and ensure that Existing environment checkbox is selected within the popup window. In the Interpreter field point to the Python 3 executable inside your virtual environment’s bin directory (make sure you navigate to it and select it from the file browser rather than just accept the default offered by PyCharm). Note that there is also an option to create a new virtual environment, but we are not using that option as we want to reuse the one we created from the command line in the previous episode. Configuring Python Interpreter in PyCharm
  5. Select Make available to all projects checkbox so we can also use this environment for other projects if we wish.
  6. Select OK in the Add Python Interpreter window. Back in the Preferences window, you should select “Python 3.9 (python-intermediate-inflammation)” or similar (that you’ve just added) from the Project Interpreter drop-down list.

Note that a number of external libraries have magically appeared under the “Python 3.9 (python-intermediate-inflammation)” interpreter, including numpy and matplotlib. PyCharm has recognised the virtual environment we created from the command line using venv and has added these libraries effectively replicating our virtual environment in PyCharm (referred to as “Python 3.9 (python-intermediate-inflammation)”).

Packages Currently Installed in a Virtual Environment in PyCharm

Also note that, although the names are not the same, this is one and the same virtual environment, and changes made to it in PyCharm will propagate to the command line and vice versa. Let’s see this in action through the following exercise.

Exercise: Compare External Libraries in the Command Line and PyCharm

Can you recall two places where information about our project’s dependencies can be found from the command line? Compare that information with the equivalent configuration in PyCharm.

Hint: We can use an argument to pip, or find the packages directly in a subdirectory of our virtual environment directory “venv”.

Solution

From the previous episode, you may remember that we can get the list of packages in the current virtual environment using the pip3 list command:

(venv) $ pip3 list
Package         Version
--------------- -------
cycler          0.11.0
fonttools       4.28.1
kiwisolver      1.3.2
matplotlib      3.5.0
numpy           1.21.4
packaging       21.2
Pillow          8.4.0
pip             21.1.3
pyparsing       2.4.7
python-dateutil 2.8.2
setuptools      57.0.0
setuptools-scm  6.3.2
six             1.16.0
tomli           1.2.2

However, pip3 list shows all the packages in the virtual environment - if we want to see only the list of packages that we installed, we can use the pip3 freeze command instead:

(venv) $ pip3 freeze
cycler==0.11.0
fonttools==4.28.1
kiwisolver==1.3.2
matplotlib==3.5.0
numpy==1.21.4
packaging==21.2
Pillow==8.4.0
pyparsing==2.4.7
python-dateutil==2.8.2
setuptools-scm==6.3.2
six==1.16.0
tomli==1.2.2

We see pip in pip3 list but not in pip3 freeze because, by default, pip3 freeze omits environment-management packages such as pip and setuptools - they are part of the environment's own tooling rather than dependencies we installed for our project. Remember that we use pip3 freeze to update our requirements.txt file, to keep a list of the packages our virtual environment includes. Python will not do this automatically; we have to manually update the file when our requirements change using:

pip3 freeze > requirements.txt

If we want, we can also see the list of packages directly in the following subdirectory of venv:

(venv) $ ls -l venv/lib/python3.9/site-packages
total 1088
drwxr-xr-x  103 alex  staff    3296 17 Nov 11:55 PIL
drwxr-xr-x    9 alex  staff     288 17 Nov 11:55 Pillow-8.4.0.dist-info
drwxr-xr-x    6 alex  staff     192 17 Nov 11:55 __pycache__
drwxr-xr-x    5 alex  staff     160 17 Nov 11:53 _distutils_hack
drwxr-xr-x    8 alex  staff     256 17 Nov 11:55 cycler-0.11.0.dist-info
-rw-r--r--    1 alex  staff   14519 17 Nov 11:55 cycler.py
drwxr-xr-x   14 alex  staff     448 17 Nov 11:55 dateutil
-rw-r--r--    1 alex  staff     152 17 Nov 11:53 distutils-precedence.pth
drwxr-xr-x   31 alex  staff     992 17 Nov 11:55 fontTools
drwxr-xr-x    9 alex  staff     288 17 Nov 11:55 fonttools-4.28.1.dist-info
drwxr-xr-x    8 alex  staff     256 17 Nov 11:55 kiwisolver-1.3.2.dist-info
-rwxr-xr-x    1 alex  staff  216968 17 Nov 11:55 kiwisolver.cpython-39-darwin.so
drwxr-xr-x   92 alex  staff    2944 17 Nov 11:55 matplotlib
-rw-r--r--    1 alex  staff     569 17 Nov 11:55 matplotlib-3.5.0-py3.9-nspkg.pth
drwxr-xr-x   20 alex  staff     640 17 Nov 11:55 matplotlib-3.5.0.dist-info
drwxr-xr-x    7 alex  staff     224 17 Nov 11:55 mpl_toolkits
drwxr-xr-x   39 alex  staff    1248 17 Nov 11:55 numpy
drwxr-xr-x   11 alex  staff     352 17 Nov 11:55 numpy-1.21.4.dist-info
drwxr-xr-x   15 alex  staff     480 17 Nov 11:55 packaging
drwxr-xr-x   10 alex  staff     320 17 Nov 11:55 packaging-21.2.dist-info
drwxr-xr-x    8 alex  staff     256 17 Nov 11:53 pip
drwxr-xr-x   10 alex  staff     320 17 Nov 11:53 pip-21.1.3.dist-info
drwxr-xr-x    7 alex  staff     224 17 Nov 11:53 pkg_resources
-rw-r--r--    1 alex  staff      90 17 Nov 11:55 pylab.py
drwxr-xr-x    8 alex  staff     256 17 Nov 11:55 pyparsing-2.4.7.dist-info
-rw-r--r--    1 alex  staff  273365 17 Nov 11:55 pyparsing.py
drwxr-xr-x    9 alex  staff     288 17 Nov 11:55 python_dateutil-2.8.2.dist-info
drwxr-xr-x   41 alex  staff    1312 17 Nov 11:53 setuptools
drwxr-xr-x   11 alex  staff     352 17 Nov 11:53 setuptools-57.0.0.dist-info
drwxr-xr-x   19 alex  staff     608 17 Nov 11:55 setuptools_scm
drwxr-xr-x   10 alex  staff     320 17 Nov 11:55 setuptools_scm-6.3.2.dist-info
drwxr-xr-x    8 alex  staff     256 17 Nov 11:55 six-1.16.0.dist-info
-rw-r--r--    1 alex  staff   34549 17 Nov 11:55 six.py
drwxr-xr-x    8 alex  staff     256 17 Nov 11:55 tomli
drwxr-xr-x    7 alex  staff     224 17 Nov 11:55 tomli-1.2.2.dist-info

Finally, if you look at both the contents of venv/lib/python3.9/site-packages and requirements.txt and compare that with the packages shown in PyCharm’s Python Interpreter Configuration - you will see that they all contain equivalent information.

Adding an External Library

We have already added the packages numpy and matplotlib to our virtual environment from the command line in the previous episode, so we are up to date with all the external libraries we require at the moment. However, we will soon need the pytest library to implement tests for our code. We will use this opportunity to install it from PyCharm in order to see an alternative way of doing this and how it propagates to the command line.

  1. Select either PyCharm > Preferences (Mac) or File > Settings (Linux, Windows).
  2. In the preferences window that appears, select Project: python-intermediate-inflammation > Project Interpreter from the left.
  3. Select the + icon at the top of the window. In the window that appears, search for the name of the library (pytest), select it from the list, then select Install Package. Once it finishes installing, you can close that window. Installing a package in PyCharm
  4. Select OK in the Preferences/Settings window.

It may take a few minutes for PyCharm to install it. After it is done, the pytest library is added to our virtual environment. You can also verify this from the command line by listing the venv/lib/python3.9/site-packages subdirectory. Note, however, that requirements.txt is not updated - as we mentioned earlier this is something you have to do manually. Let’s do this as an exercise.
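Before we do, if you want an optional quick sanity check that pytest is now visible to our virtual environment, a minimal sketch using the standard library’s importlib.metadata module (run it with the interpreter from “venv”) would be:

from importlib.metadata import version

print(version("pytest"))  # prints the installed version, e.g. something like 6.2.5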

Exercise: Update requirements.txt After Adding a New Dependency

Export the newly updated virtual environment into requirements.txt file.

Solution

Let’s verify first that the newly installed library pytest is appearing in our virtual environment but not in requirements.txt. First, let’s check the list of installed packages:

(venv) $ pip3 list
Package         Version
--------------- -------
attrs           21.4.0
cycler          0.11.0
fonttools       4.28.5
iniconfig       1.1.1
kiwisolver      1.3.2
matplotlib      3.5.1
numpy           1.22.0
packaging       21.3
Pillow          9.0.0
pip             20.0.2
pluggy          1.0.0
py              1.11.0
pyparsing       3.0.7
pytest          6.2.5
python-dateutil 2.8.2
setuptools      44.0.0
six             1.16.0
toml            0.10.2
tomli           2.0.0

We can see the pytest library appearing in the listing above. However, if we do:

(venv) $ cat requirements.txt
cycler==0.11.0
fonttools==4.28.1
kiwisolver==1.3.2
matplotlib==3.5.0
numpy==1.21.4
packaging==21.2
Pillow==8.4.0
pyparsing==2.4.7
python-dateutil==2.8.2
setuptools-scm==6.3.2
six==1.16.0
tomli==1.2.2

pytest is missing from requirements.txt. To add it, we need to update the file by repeating the command:

(venv) $ pip3 freeze > requirements.txt

pytest is now present in requirements.txt:

attrs==21.2.0
cycler==0.11.0
fonttools==4.28.1
iniconfig==1.1.1
kiwisolver==1.3.2
matplotlib==3.5.0
numpy==1.21.4
packaging==21.2
Pillow==8.4.0
pluggy==1.0.0
py==1.11.0
pyparsing==2.4.7
pytest==6.2.5
python-dateutil==2.8.2
setuptools-scm==6.3.2
six==1.16.0
toml==0.10.2
tomli==1.2.2

Adding a Run Configuration for Our Project

Having configured a virtual environment, we now need to tell PyCharm to use it when running our project. This is done by adding a Run Configuration to the project:

  1. To add a new configuration for a project - select Run > Edit Configurations... from the top menu.
  2. Select Add new run configuration... then Python. Adding a Run Configuration in PyCharm
  3. In the new popup window, in the Script path field select the folder button and find and select inflammation-analysis.py. This tells PyCharm which script to run (i.e. what the main entry point to our application is). Run Configuration Popup in PyCharm
  4. In the same window, select “Python 3.9 (python-intermediate-inflammation)” (i.e. the virtual environment and interpreter you configured earlier in this episode) in the Python interpreter field.
  5. You can give this run configuration a name at the top of the window if you like - e.g. let’s name it inflammation analysis.
  6. You can optionally configure run parameters and environment variables in the same window - we do not need this at the moment.
  7. Select Apply to confirm these settings.

Virtual Environments & Run Configurations in PyCharm

We configured the Python interpreter to use for our project by pointing PyCharm to the virtual environment we created from the command line (which also includes external libraries our code needs to run). Recall that you can create several virtual environments based on the same Python interpreter but with different external libraries - this is helpful when you need to develop different types of applications. For example, you can create one virtual environment based on Python 3.9 to develop Django Web applications and another virtual environment based on the same Python 3.9 to work with scientific libraries.

Run Configurations in PyCharm are named sets of startup properties that define what to execute and what parameters (i.e. what additional configuration options) to use on top of virtual environments. You can vary these configurations each time your code is executed, which is particularly useful for running, debugging and testing your code.

Now you know how to configure and manipulate your environment in both tools (command line and PyCharm), which is a useful parallel to be aware of. Let’s have a look at some other features afforded to us by PyCharm.

Syntax Highlighting

The first thing you may notice is that code is displayed using different colours. Syntax highlighting is a feature that displays source code terms in different colours and fonts according to the syntax category the highlighted term belongs to. It also makes syntax errors visually distinct. Highlighting does not affect the meaning of the code itself - it’s intended only for humans to make reading code and finding errors easier.

Syntax Highlighting Functionality in PyCharm

Code Completion

As you start typing code, PyCharm will offer to complete some of the code for you in the form of an auto completion popup. This is a context-aware code completion feature that speeds up the process of coding (e.g. reducing typos and other common mistakes) by offering available variable names, functions from available packages, parameters of functions, hints related to syntax errors, etc.

Code Completion Functionality in PyCharm

Code Definition & Documentation References

You will often need code reference information to help you code. PyCharm shows this useful information, such as definitions of symbols (e.g. functions, parameters, classes, fields, and methods) and documentation references by means of quick popups and inline tooltips.

For a selected piece of code, you can access various code reference information from the View menu (or via various keyboard shortcuts), including:

Code References Functionality in PyCharm

Code Search

You can search for a text string within a project, use different scopes to narrow your search, use regular expressions for complex searches, include/exclude certain files from your search, and find usages and occurrences. To find a search string in the whole project:

  1. From the main menu, select Edit | Find | Find in Path ... (or Edit | Find | Find in Files... depending on your version of PyCharm).
  2. Type your search string in the search field of the popup. Alternatively, in the editor, highlight the string you want to find and press Command-Shift-F (on Mac) or Control-Shift-F (on Windows). PyCharm places the highlighted string into the search field of the popup.

    Code Search Functionality in PyCharm

    If needed, specify additional options in the popup. PyCharm will list the search strings and all the files that contain them.

  3. Check the results in the preview area of the dialog where you can replace the search string or select another string, or press Command-Shift-F (on Mac) or Control-Shift-F (on Windows) again to start a new search.
  4. To see the list of occurrences in a separate panel, click the Open in Find Window button in the bottom right corner. The find panel will appear at the bottom of the main window; use this panel and its options to group the results, preview them, and work with them further.

    Code Search Functionality in PyCharm

Version Control

PyCharm supports a directory-based versioning model, which means that each project directory can be associated with a different version control system. Our project was already under Git version control and PyCharm recognised it. It is also possible to add an unversioned project directory to version control directly from PyCharm.

During this course, we will do all our version control commands from the command line but it is worth noting that PyCharm supports a comprehensive subset of Git commands (i.e. it is possible to perform a set of common Git commands from PyCharm but not all). A very useful version control feature in PyCharm is graphically comparing changes you made locally to a file with the version of the file in a repository, a different commit version or a version in a different branch - this is something that cannot be done equally well from the text-based command line.

You can find full documentation on PyCharm’s built-in version control support online.

Version Control Functionality in PyCharm

Running Scripts in PyCharm

We have configured our environment and explored some of the most commonly used PyCharm features and are now ready to run our script from PyCharm! To do so, right-click the inflammation-analysis.py file in the PyCharm project/file navigator on the left, and select Run 'inflammation analysis' (i.e. the Run Configuration we created earlier).

Running a script from PyCharm

The script will run in a terminal window at the bottom of the IDE window and display something like:

/Users/alex/work/python-intermediate-inflammation/venv/bin/python /Users/alex/work/python-intermediate-inflammation/inflammation-analysis.py
usage: inflammation-analysis.py [-h] infiles [infiles ...]
inflammation-analysis.py: error: the following arguments are required: infiles

Process finished with exit code 2

This is the same error we got when running the script from the command line. We will get back to this error shortly - for now, the good thing is that we managed to set up our project for development both from the command line and PyCharm and are getting the same outputs. Before we move on to fixing errors and writing more code, let’s have a look at the last set of tools for collaborative code development which we will be using in this course - Git and GitHub.

Key Points

  • An IDE is an application that provides a comprehensive set of facilities for software development, including syntax highlighting, code search and completion, version control, testing and debugging.

  • PyCharm recognises virtual environments configured from the command line using venv and pip.


Collaborative Software Development Using Git and GitHub

Overview

Teaching: 35 min
Exercises: 0 min
Questions
  • What are Git branches and why are they useful for code development?

  • What are some best practices when developing software collaboratively using Git?

Objectives
  • Commit changes in a software project to a local repository and publish them in a remote repository on GitHub

  • Create branches for managing different threads of code development

  • Learn to use feature branch workflow to effectively collaborate with a team on a software project

Introduction

So far we have checked out our software project from GitHub and used command line tools to configure a virtual environment for our project and run our code. We have also familiarised ourselves with PyCharm - a graphical tool we will use for code development, testing and debugging. We are now going to start using another set of tools from the collaborative code development toolbox - namely, the version control system Git and code sharing platform GitHub. These two will enable us to track changes to our code and share it with others.

You may recall that we have already made some changes to our project locally - we created a virtual environment in the directory called “venv” and exported it to the requirements.txt file. We should now decide which of those changes we want to check in and share with others in our team. This is a typical software development workflow - you work locally on code, test it to make sure it works correctly and as expected, then record your changes using version control and share your work with others via a shared and centrally backed-up repository.

Firstly, let’s remind ourselves how to work with Git from the command line.

Git Refresher

Git is a version control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source code management in software development but it can be used to track changes in files in general - it is particularly effective for tracking text-based files (e.g. source code files, CSV, Markdown, HTML, CSS, Tex, etc. files).

Git has several important characteristics:

The diagram below shows a typical software development lifecycle with Git and the commonly used commands to interact with different parts of Git infrastructure, such as:

Development lifecycle with Git

Software development lifecycle with Git from PNGWing

Checking-in Changes to Our Project

Let’s check in the changes we have made to our project so far. The first thing to do upon navigating into our software project’s directory root is to check the current status of our local working directory and repository.

$ git status
On branch main
Your branch is up to date with 'origin/main'.

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	requirements.txt
	venv/

nothing added to commit but untracked files present (use "git add" to track)

As expected, Git is telling us that we have some untracked files - requirements.txt and the directory “venv” - present in our working directory which we have neither staged nor committed to our local repository yet. You do not want to commit the newly created directory “venv” and share it with others because this directory is specific to your machine and setup only (i.e. it contains local paths to libraries on your system that most likely would not work on any other machine). You do, however, want to share requirements.txt with your team as this file can be used to replicate the virtual environment on your collaborators’ systems.

To tell Git to intentionally ignore and not track certain files and directories, you need to specify them in the .gitignore text file in the project root. Our project already has .gitignore, but in cases where you do not have it - you can simply create it yourself. In our case, we want to tell Git to ignore the “venv” directory (and “.venv” as another naming convention for directories containing virtual environments) and stop notifying us about it. Edit your .gitignore file in PyCharm and add a line containing “venv/” and another one containing “.venv/”. It does not matter much in this case where within the file you add these lines, so let’s do it at the end. Your .gitignore should look something like this:

# IDEs
.vscode/
.idea/

# Intermediate Coverage file
.coverage

# Output files
*.png

# Python runtime
*.pyc
*.egg-info
.pytest_cache

# Virtual environments
venv/
.venv/

You may notice that we are already not tracking certain files and directories, with useful comments about what exactly we are ignoring. You may also notice that each line in .gitignore is actually a pattern, so you can ignore multiple files that match a pattern (e.g. “*.png” will ignore all PNG files in the project, since a pattern without a leading slash applies in every directory).

If you run the git status command now, you will notice that Git has cleverly understood that you want to ignore changes to the “venv” directory so it is not warning us about it any more. However, it has now detected a change to .gitignore file that needs to be committed.

$ git status
On branch main
Your branch is up to date with 'origin/main'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   .gitignore

Untracked files:
  (use "git add <file>..." to include in what will be committed)
	requirements.txt

no changes added to commit (use "git add" and/or "git commit -a")

To commit the changes to .gitignore and requirements.txt to the local repository, we first have to add these files to the staging area to prepare them for committing. We can do that for both files at once:

$ git add .gitignore requirements.txt

Now we can commit them to the local repository with:

$ git commit -m "Initial commit of requirements.txt. Ignoring virtual env. folder."

Remember to use meaningful messages for your commits.

So far we have been working in isolation - all the changes we have done are still only stored locally on our individual machines. In order to share our work with others, we should push our changes to the remote repository on GitHub. Before we push our changes however, we should first do a git pull. This is considered best practice, since any changes made to the repository - notably by other people - may impact the changes we are about to push. This could occur, for example, by two collaborators making different changes to the same lines in a file. By pulling first, we are made aware of any changes made by others, in particular if there are any conflicts between their changes and ours.

$ git pull

Now that we have ensured our repository is synchronised with the remote one, we can push our changes. GitHub has recently strengthened authentication requirements for Git operations accessing GitHub from the command line over HTTPS. This means you cannot use passwords for authentication over HTTPS any more - you either need to set up and use a personal access token for additional security if you want to continue to use HTTPS, or switch to using a private and public key pair over SSH, before you can push the changes you made locally to the remote repository. So, when you run the command below:

$ git push origin main

Authentication Errors

If you get a warning that HTTPS access is deprecated, or a token is required, then you accidentally cloned the repository using HTTPS and not SSH. You can fix this from the command line by resetting the remote repository URL setting on your local repo:

$ git remote set-url origin git@github.com:<YOUR_GITHUB_USERNAME>/python-intermediate-inflammation.git

In the above command, origin is an alias for the remote repository you used when cloning the project locally (it is called that by convention and set up automatically by Git when you run git clone remote_url command to replicate a remote repository locally); main is the name of our main (and currently only) development branch.

Git Remotes

Note that systems like Git allow us to synchronise work between any two or more copies of the same repository - the ones that are not located on your machine are “Git remotes” for you. In practice, though, it is easiest to agree with your collaborators to use one copy as a central hub (such as GitHub or GitLab), where everyone pushes their changes to. This also avoids the risks associated with keeping the “central copy” on someone’s laptop. You can have more than one remote configured for your local repository, each of which generally is either read-only or read/write for you. Collaborating with others involves managing these remote repositories and pushing and pulling information to and from them when you need to share work.

git-distributed

Git - distributed version control system
From W3Docs (freely available)

Git Branches

When we do git status, Git also tells us that we are currently on the main branch of the project. A branch is one version of your project (the files in your repository) that can contain its own set of commits. We can create a new branch, make changes to the code which we then commit to the branch, and, once we are happy with those changes, merge them back to the main branch. To see what other branches are available, do:

$ git branch
* main

At the moment, there’s only one branch (main) and hence only one version of the code available. When you create a Git repository for the first time, by default you only get one version (i.e. branch) - main. Let’s have a look at why having different branches might be useful.

Feature Branch Software Development Workflow

While it is technically OK to commit your changes directly to main branch, and you may often find yourself doing so for some minor changes, the best practice is to use a new branch for each separate and self-contained unit/piece of work you want to add to the project. This unit of work is also often called a feature and the branch where you develop it is called a feature branch. Each feature branch should have its own meaningful name - indicating its purpose (e.g. “issue23-fix”). If we keep making changes and pushing them directly to main branch on GitHub, then anyone who downloads our software from there will get all of our work in progress - whether or not it’s ready to use! So, working on a separate branch for each feature you are adding is good for several reasons:

Branches are commonly used as part of a feature-branch workflow, shown in the diagram below.

Git feature branch workflow diagram

Git feature branches
Adapted from Git Tutorial by sillevl (Creative Commons Attribution 4.0 International License)

In the software development workflow, we typically have a main branch which is the version of the code that is tested, stable and reliable. Then, we normally have a development branch (called develop or dev by convention) that we use for work-in-progress code. As we work on adding new features to the code, we create new feature branches that first get merged into develop after a thorough testing process. After even more testing - develop branch will get merged into main. The points when feature branches are merged to develop, and develop to main depend entirely on the practice/strategy established in the team. For example, for smaller projects (e.g. if you are working alone on a project or in a very small team), feature branches sometimes get directly merged into main upon testing, skipping the develop branch step. In other projects, the merge into main happens only at the point of making a new software release. Whichever is the case for you, a good rule of thumb is - nothing that is broken should be in main.

Creating Branches

Let’s create a develop branch to work on:

$ git branch develop

This command does not give any output, but if we run git branch again, without giving it a new branch name, we can see the list of branches we have - including the new one we have just made.

$ git branch
  develop
* main

The * indicates the currently active branch. So how do we switch to our new branch? We use the git checkout command with the name of the branch:

$ git checkout develop
Switched to branch 'develop'

Create and Switch to Branch Shortcut

A shortcut to create a new branch and immediately switch to it:

$ git checkout -b develop

Updating Branches

If we start updating and committing files now, the commits will happen on the develop branch and will not affect the version of the code in main. We add and commit things to develop branch in the same way as we do to main.

Let’s make a small modification to inflammation/models.py in PyCharm, and, say, change the spelling of “2d” to “2D” in docstrings for functions daily_mean(), daily_max() and daily_min().

If we do:

$ git status
On branch develop
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   inflammation/models.py

no changes added to commit (use "git add" and/or "git commit -a")

Git is telling us that we are on branch develop and which tracked files have been modified in our working directory.

We can now add and commit the changes in the usual way.

$ git add inflammation/models.py
$ git commit -m "Spelling fix"

Currently Active Branch

Remember, add and commit commands always act on the currently active branch. You have to be careful and aware of which branch you are working with at any given moment. git status can help with that, and you will find yourself invoking it very often.

Pushing New Branch Remotely

We push the contents of the develop branch to GitHub in the same way as we pushed the main branch. However, as we have just created this branch locally, it still does not exist in our remote repository. You can check that in GitHub by listing all branches.

Software project's main branch

To push a new local branch remotely for the first time, you could use the -u switch and the name of the branch you are creating and pushing to:

$ git push -u origin develop

Git Push With -u Switch

Using the -u switch with the git push command is a handy shortcut for: (1) creating the new remote branch and (2) setting your local branch to automatically track the remote one at the same time. You need to use the -u switch only once to set up that association between your branch and the remote one explicitly. After that you can simply use git push without specifying the remote repository and branch, if you wish. We still prefer to state this information explicitly in commands.

Let’s confirm that the new branch develop now exists remotely on GitHub too. From the < > Code tab in your repository in GitHub, click the branch dropdown menu (currently showing the default branch main). You should see your develop branch in the list too.

Software project's develop branch

Now the others can check out the develop branch too and continue to develop code on it.

After the initial push of the new branch, each subsequent push to it is done in the usual manner (i.e. without the -u switch):

$ git push origin develop

What is the Relationship Between Originating and New Branches?

It’s natural to think that new branches have a parent/child relationship with their originating branch, but in actual Git terms, branches themselves do not have parents but single commits do. Any commit can have zero parents (a root, or initial, commit), one parent (a regular commit), or multiple parents (a merge commit), and using this structure, we can build a ‘view’ of branches from a set of commits and their relationships. A common way to look at it is that Git branches are really only lightweight, movable pointers to commits. So as a new commit is added to a branch, the branch pointer is moved to the new commit.

What this means is that when you accomplish a merge between two branches, Git is able to determine the common ‘commit ancestor’ through the commits in a ‘branch’, and use that common ancestor to determine which commits need to be merged onto the destination branch. It also means that, in theory, you could merge any branch with any other at any time… although it may not make sense to do so!

Merging Into Main Branch

Once you have tested your changes on the develop branch, you will want to merge them onto the main branch. To do so, make sure you have all your changes committed and switch to main:

$ git checkout main
Switched to branch 'main'
Your branch is up to date with 'origin/main'.

To merge the develop branch on top of main do:

$ git merge develop
Updating 05e1ffb..be60389
Fast-forward
 inflammation/models.py | 6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

If there are no conflicts, Git will merge the branches without complaining and replay all commits from develop on top of the last commit from main. If there are merge conflicts (e.g. a team collaborator modified the same portion of the same file you are working on and checked in their changes before you), the particular files with conflicts will be marked and you will need to resolve those conflicts and commit the changes before attempting to merge again. Since we have no conflicts, we can now push the main branch to the remote repository:

$ git push origin main

All Branches Are Equal

In Git, all branches are equal - there is nothing special about the main branch. It is called that by convention and is created by default, but it can also be called something else. A good example is gh-pages branch which is often the source branch for website projects hosted on GitHub (rather than main).

Keeping Main Branch Stable

Good software development practice is to keep the main branch stable while you and the team develop and test new functionalities on feature branches (which can be done in parallel and independently by different team members). The next step is to merge feature branches onto the develop branch, where more testing can occur to verify that the new features work well with the rest of the code (and not just in isolation). We talk more about different types of code testing in one of the following episodes.

Key Points

  • A branch is one version of your project that can contain its own set of commits.

  • Feature branches enable us to develop / explore / test new code features without affecting the stable main code.


Python Code Style Conventions

Overview

Teaching: 20 min
Exercises: 15 min
Questions
  • Why should you follow software code style conventions?

  • Who is setting code style conventions?

  • What code style conventions exist for Python?

Objectives
  • Understand the benefits of following community coding conventions

Introduction

We now have all the tools we need for software development and are raring to go. But before you dive into writing some more code and sharing it with others, ask yourself what kind of code should you be writing and publishing? It may be worth spending some time learning a bit about Python coding style conventions to make sure that your code is consistently formatted and readable by yourself and others.

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” - Martin Fowler, British software engineer, author and international speaker on software development

Python Coding Style Guide

One of the most important things we can do to make sure our code is readable by others (and ourselves a few months down the line) is to make sure that it is descriptive, cleanly and consistently formatted, and uses sensible, descriptive names for variables, functions and modules. In order to help us format our code, we generally follow guidelines known as a style guide. A style guide is a set of conventions that we agree upon with our colleagues or community, to ensure that everyone contributing to the same project is producing code which looks similar in style. While a group of developers may choose to write and agree upon a new style guide unique to each project, in practice many programming languages have a single style guide which is adopted almost universally by the communities around the world. In Python, although we do have a choice of style guides available, the PEP 8 style guide is most commonly used. PEP here stands for Python Enhancement Proposals; PEPs are design documents for the Python community, typically specifications or conventions for how to do something in Python, a description of a new feature in Python, etc.

Style consistency

One of the key insights from Guido van Rossum, one of the PEP 8 authors, is that code is read much more often than it is written. Style guidelines are intended to improve the readability of code and make it consistent across the wide spectrum of Python code. Consistency with the style guide is important. Consistency within a project is more important. Consistency within one module or function is the most important. However, know when to be inconsistent - sometimes style guide recommendations are just not applicable. When in doubt, use your best judgment. Look at other examples and decide what looks best. And don’t hesitate to ask!

As we have already covered in the episode on PyCharm IDE, PyCharm highlights the language constructs (reserved words) and syntax errors to help us with coding. PyCharm also gives us recommendations for formatting the code - these recommendations are mostly taken from the PEP 8 style guide.

A full list of style guidelines for this style is available from the PEP 8 website; here we highlight a few.

Indentation

Python uses indentation as a way of grouping statements that belong to a particular block of code. Spaces are the recommended indentation method in Python code. The guideline is to use 4 spaces per indentation level - so 4 spaces on level one, 8 spaces on level two and so on (see the short example below). Many people prefer tabs to spaces for indenting code for various reasons (e.g. spaces require additional typing, it is easy to introduce an error by missing a single space character, and tabs can be more accessible for individuals using screen readers) and do not follow this guideline. Whether you decide to follow this guideline or not, be consistent and follow the style already used in the project.
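For illustration, here is a small, made-up function showing the 4-spaces-per-level rule:

def print_positive_measurements(measurements):
    for value in measurements:     # level one: indented by 4 spaces
        if value > 0:              # level two: indented by 8 spaces
            print(value)           # level three: indented by 12 spaces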

Indentation in Python 2 vs Python 3

Python 2 allowed code indented with a mixture of tabs and spaces. Python 3 disallows mixing the use of tabs and spaces for indentation. Whichever you choose, be consistent throughout the project.

PyCharm has built-in support for converting tab indentation to spaces “under the hood” for Python code in order to conform to PEP8. So, you can type a tab character and PyCharm will automatically convert it to 4 spaces. You can control the amount of spaces that PyCharm uses to replace one tab character or you can decide to keep the tab character altogether and prevent automatic conversion. You can modify these settings in PyCharm’s Preferences>Editor>Code Style>Python (MacOS/Linux) or Settings>Editor>Code Style>Python (Windows).

Python code indentation settings in PyCharm

You can also tell the editor to show non-printable characters if you are ever unsure what character exactly is being used by selecting View>Active Editor>Show whitespace.

Python code whitespace settings in PyCharm

There are more complex rules on indenting single units of code that continue over several lines, e.g. function, list or dictionary definitions can all take more than one line. The preferred way of wrapping such long lines is by using Python’s implied line continuation inside delimiters such as parentheses (()), brackets ([]) and braces ({}), or a hanging indent.

# Add an extra level of indentation (extra 4 spaces) to distinguish arguments from the rest of the code that follows
def long_function_name(
        var_one, var_two, var_three,
        var_four):
    print(var_one)


# Aligned with opening delimiter
foo = long_function_name(var_one, var_two,
                         var_three, var_four)

# Use hanging indents to add an indentation level like paragraphs of text where all the lines in a paragraph are
# indented except the first one
foo = long_function_name(
    var_one, var_two,
    var_three, var_four)

# Using hanging indent again, but closing bracket aligned with the first non-blank character of the previous line
a_long_list = [
    [[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[0.33, 0.66, 1], [0.66, 0.83, 1], [0.77, 0.88, 1]]
    ]

# Using hanging indent again, but closing bracket aligned with the start of the multiline construct
a_long_list2 = [
    1,
    2,
    3,
    # ...
    79
]

More details on good and bad practices for continuation lines can be found in PEP 8 guideline on indentation.

Maximum Line Length

All lines should be up to 80 characters long; for lines containing comments or docstrings (to be covered later) the line length limit should be 73 - see this discussion for the reasoning behind these numbers. Some teams strongly prefer a longer line length and seem to have settled on a length of 100. Long lines of code can be broken over multiple lines by wrapping expressions in delimiters, as mentioned above (preferred method), or using a backslash (\) at the end of the line to indicate line continuation (slightly less preferred method).

# Using delimiters ( ) to wrap a multi-line expression
if (a == True and
    b == False):

# Using a backslash (\) for line continuation
if a == True and \
    b == False:

Should a Line Break Before or After a Binary Operator?

Lines should break before binary operators so that the operators do not get scattered across different columns on the screen. In the example below, the eye does not have to do the extra work to tell which items are added and which are subtracted:

# PEP 8 compliant - easy to match operators with operands
income = (gross_wages
          + taxable_interest
          + (dividends - qualified_dividends)
          - ira_deduction
          - student_loan_interest)
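For contrast, here is the same calculation (using the variables from the example above) with the lines broken after the binary operators - still valid Python, but harder to scan because each operator ends up far from its second operand:

# Harder to read - operators are scattered at the ends of lines
income = (gross_wages +
          taxable_interest +
          (dividends - qualified_dividends) -
          ira_deduction -
          student_loan_interest)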

Blank Lines

Top-level function and class definitions should be surrounded with two blank lines. Method definitions inside a class should be surrounded by a single blank line. You can use blank lines in functions, sparingly, to indicate logical sections.
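As a short illustrative sketch (the function, class and method names below are made up purely to demonstrate the convention):

def first_top_level_function():
    return 1


# Two blank lines separate top-level definitions
def second_top_level_function():
    return 2


class ExampleClass:
    """A made-up class illustrating blank line conventions."""

    def first_method(self):
        return 1

    # A single blank line separates methods within a class
    def second_method(self):
        return 2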

Whitespace in Expressions and Statements

Avoid extraneous whitespace in the following situations:

String Quotes

In Python, single-quoted strings and double-quoted strings are the same. PEP8 does not make a recommendation for this apart from picking one rule and consistently sticking to it. When a string contains single or double quote characters, use the other one to avoid backslashes in the string as it improves readability.
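For example (an illustrative snippet, not part of our project code):

# Single and double quotes are equivalent - pick one style and stick to it
patient_name = 'Alex'
study_name = "inflammation"

# Use the other quote style to avoid backslashes inside the string
message = "It's easier to read this way"   # rather than 'It\'s easier to read this way'
reply = 'She said "hello" to the patient'  # rather than "She said \"hello\" to the patient"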

Naming Conventions

There are a lot of different naming styles in use, including:

As with other style guide recommendations - consistency is key. Follow the one already established in the project, if there is one. If there isn’t, follow any standard language style (such as PEP8 for Python). Failing that, just pick one, document it and stick to it.

Some things to be wary of when naming things in the code:

Function, Variable, Class, Module, Package Naming in Python

  • Function and variable names should use lower_case_with_underscores
  • Avoid single character names in almost all instances.
  • Variable names should tell you what they store, and not just the type (e.g. name_of_patient is better than string)
  • Function names should tell you what the function does.
  • Class names should use the CapitalisedWords convention.
  • Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability.
  • Packages should also have short, all-lowercase names, although the use of underscores is discouraged.

A more detailed guide on naming functions, modules, classes and variables is available from PEP8.
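Putting a few of these conventions together (all names below are hypothetical, chosen only for illustration):

class InflammationStudy:               # class name uses CapitalisedWords
    ...


def load_patient_data(file_name):      # function name uses lower_case_with_underscores and says what it does
    ...


name_of_patient = 'Alex'               # variable name describes what it stores, not just its type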

Comments

Comments allow us to provide the reader with additional information on what the code does - reading and understanding source code is slow, laborious and can lead to misinterpretation, plus it is always a good idea to keep others in mind when writing code. A good rule of thumb is to assume that someone will always read your code at a later date, and this includes a future version of yourself. It can be easy to forget why you did something a particular way in six months’ time. Write comments as complete sentences and in English unless you are 100% sure the code will never be read by people who don’t speak your language.

The Good, the Bad, and the Ugly Comments

As a side reading, check out the ‘Putting comments in code: the good, the bad, and the ugly’ blogpost. Remember - a comment should answer the ‘why’ question, and occasionally the ‘what’ question. The ‘how’ question should be answered by the code itself.

Block comments generally apply to some (or all) code that follows them, and are indented to the same level as that code. Each line of a block comment starts with a # and a single space (unless it is indented text inside the comment).

def fahr_to_cels(fahr):
    # Block comment example: convert temperature in Fahrenheit to Celsius
    cels = (fahr - 32) * (5 / 9)
    return cels

An inline comment is a comment on the same line as a statement. Inline comments should be separated by at least two spaces from the statement. They should start with a # and a single space and should be used sparingly.

def fahr_to_cels(fahr):
    cels = (fahr - 32) * (5 / 9)  # Inline comment example: convert temperature in Fahrenheit to Celsius
    return cels

Python doesn’t have any multi-line comments, like you may have seen in other languages like C++ or Java. However, there are ways to do it using docstrings as we’ll see in a moment.

The reader should be able to understand a single function or method from its code and its comments, and should not have to look elsewhere in the code for clarification. The kind of things that need to be commented are:

However, there are some restrictions. Comments that simply restate what the code does are redundant, and comments must be accurate and updated with the code, because an incorrect comment causes more confusion than no comment at all.
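For example, compare a comment that merely restates the code with one that records the reasoning behind it (a made-up snippet for illustration):

# Redundant - simply restates what the code already says
day = day + 1  # add 1 to day

# Useful - explains why the code does what it does
day = day + 1  # the data files number days from 1, so shift our 0-based index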

Exercise: Improve Code Style of Our Project

Let’s look at improving the coding style of our project. First create a new feature branch called style-fixes off our develop branch and switch to it (from the project root):

$ git checkout develop
$ git checkout -b style-fixes

Next look at the inflammation-analysis.py file in PyCharm and identify where the above guidelines have not been followed. Fix the discovered inconsistencies and commit them to the feature branch.

Solution

Modify inflammation-analysis.py from PyCharm, which helpfully marks inconsistencies with coding guidelines by underlining them. There are a few things to fix in inflammation-analysis.py, for example:

  1. Line 24 in inflammation-analysis.py is too long and not very readable. A better style would be to use multiple lines and a hanging indent, with the closing brace '}' aligned either with the first non-whitespace character of the last line of the list, with the first character of the line that starts the multiline construct, or simply moved to the end of the previous line. All three acceptable modifications are shown below.

     # Using hanging indent, with the closing '}' aligned with the first non-blank character of the previous line
     view_data = {
         'average': models.daily_mean(inflammation_data),
         'max': models.daily_max(inflammation_data),
         'min': models.daily_min(inflammation_data)
         }
    
     # Using hanging indent, with the closing '}' aligned with the start of the multiline construct
     view_data = {
         'average': models.daily_mean(inflammation_data),
         'max': models.daily_max(inflammation_data),
         'min': models.daily_min(inflammation_data)
     }
    
     # Using hanging indent where all the lines of the multiline construct are indented except the first one
     view_data = {
         'average': models.daily_mean(inflammation_data),
         'max': models.daily_max(inflammation_data),
         'min': models.daily_min(inflammation_data)}
    
  2. Variable ‘InFiles’ in inflammation-analysis.py uses CapitalisedWords naming convention which is recommended for class names but not variable names. By convention, variable names should be in lowercase with optional underscores so you should rename the variable ‘InFiles’ to, e.g., ‘infiles’ or ‘in_files’.

  3. There is an extra blank line on line 20 in inflammation-analysis.py. Normally, you should not use blank lines in the middle of the code unless you want to separate logical units - in which case only one blank line is used. Note how PyCharm is warning us by underlining the whole line.

  4. There is only one blank line between the end of the definition of the function main and the rest of the code on line 30 in inflammation-analysis.py - there should be two blank lines. Note how PyCharm is warning us by underlining the whole line.

Finally, let’s add and commit our changes to the feature branch. We will check the status of our working directory first.

$ git status
On branch style-fixes
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified:   inflammation-analysis.py

no changes added to commit (use "git add" and/or "git commit -a")

Git tells us we are on branch style-fixes and that we have unstaged and uncommitted changes to inflammation-analysis.py. Let’s commit them to the local repository.

$ git add inflammation-analysis.py
$ git commit -m "Code style fixes."

Optional Exercise: Improve Code Style of Your Other Python Projects

If you have another Python project, check to which extent it conforms to PEP8 coding style.

Documentation Strings aka Docstrings

If the first thing in a function is a string that is not assigned to a variable, that string is attached to the function as its documentation. Consider the following code implementing a function for calculating the nth Fibonacci number:

def fibonacci(n):
    """Calculate the nth Fibonacci number.

    A recursive implementation of Fibonacci array elements.

    :param n: integer
    :raises ValueError: raised if n is less than zero
    :returns: Fibonacci number
    """
    if n < 0:
        raise ValueError('Fibonacci is not defined for N < 0')
    if n == 0:
        return 0
    if n == 1:
        return 1

    return fibonacci(n - 1) + fibonacci(n - 2)

Note here we are explicitly documenting our input variables, what is returned by the function, and also when the ValueError exception is raised. Along with a helpful description of what the function does, this information can act as a contract for readers to understand what to expect in terms of behaviour when using the function, as well as how to use it.

A special comment string like this is called a docstring. We do not need to use triple quotes when writing one, but if we do, we can break the text across multiple lines. Docstrings can also be used at the start of a Python module (a file containing a number of Python functions) or at the start of a Python class (containing a number of methods) to list their contents as a reference. You should not confuse docstrings with comments though - docstrings are context-dependent and should only be used in specific locations (e.g. at the top of a module and immediately after class and def keywords as mentioned). Using triple quoted strings in locations where they will not be interpreted as docstrings or using triple quotes as a way to ‘quickly’ comment out an entire block of code is considered bad practice.

In our example case, we used the Sphinx/ReadTheDocs docstring style formatting for the param, raises and returns fields - other docstring formats exist as well.

Python PEP 257 - Recommendations for Docstrings

PEP 257 is another one of Python Enhancement Proposals and this one deals with docstring conventions to standardise how they are used. For example, on the subject of module-level docstrings, PEP 257 says:

The docstring for a module should generally list
the classes,
exceptions
and functions
(and any other objects)
that are exported by the module, with a one-line summary of each.
(These summaries generally give less detail than the summary line in the object's docstring.)
The docstring for a package
(i.e., the docstring of the package's `__init__.py` module)
should also list the modules and subpackages exported by the package.

Note that the __init__.py file used to be a required part of a package (pre Python 3.3), where a package was typically implemented as a directory containing an __init__.py file which got implicitly executed when the package was imported.

So, at the beginning of a module file we can just add a docstring explaining the nature of a module. For example, if fibonacci() was included in a module with other functions, our module could have at the start of it:

"""A module for generating numerical sequences of numbers that occur in nature.

Functions:
  fibonacci - returns the Fibonacci number for a given integer
  golden_ratio - returns the golden ratio number to a given Fibonacci iteration
  ...
"""
...

The docstring for a function or a module is returned when calling the help function and passing its name - for example from the interactive Python console/terminal available from the command line or when rendering code documentation online (e.g. see Python documentation). PyCharm also displays the docstring for a function/module in a little help popup window when using tab-completion.

help(fibonacci)
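The same text is also available programmatically through the function’s __doc__ attribute, which is what help() reads:

print(fibonacci.__doc__)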

Exercise: Fix the Docstrings

Look into models.py in PyCharm and improve the docstrings for the functions daily_mean, daily_min and daily_max. Commit those changes to the feature branch style-fixes.

Solution

For example, the improved docstrings for the above functions would contain explanations for parameters and return values.

def daily_mean(data):
    """Calculate the daily mean of a 2D inflammation data array for each day.

    :param data: A 2D data array with inflammation data (each row contains measurements for a single patient across all days).
    :returns: An array of mean values of measurements for each day.
    """
    return np.mean(data, axis=0)


def daily_max(data):
    """Calculate the daily maximum of a 2D inflammation data array for each day.

    :param data: A 2D data array with inflammation data (each row contains measurements for a single patient across all days).
    :returns: An array of max values of measurements for each day.
    """
    return np.max(data, axis=0)


def daily_min(data):
    """Calculate the daily minimum of a 2D inflammation data array for each day.

    :param data: A 2D data array with inflammation data (each row contains measurements for a single patient across all days).
    :returns: An array of minimum values of measurements for each day.
    """
    return np.min(data, axis=0)

Once we are happy with our modifications, and before staging and committing our changes as usual, we check the status of our working directory:

$ git status
On branch style-fixes
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified:   inflammation/models.py

no changes added to commit (use "git add" and/or "git commit -a")

As expected, Git tells us we are on branch style-fixes and that we have unstaged and uncommitted changes to inflammation/models.py. Let’s commit them to the local repository.

$ git add inflammation/models.py
$ git commit -m "Docstring improvements."

In the previous exercises, we made some code improvements on the feature branch style-fixes. We have committed our changes locally but have not pushed this branch remotely for others to have a look at our code before we merge it onto the develop branch. Let’s do that now, namely: push the style-fixes branch remotely, merge it onto develop and push develop, then merge develop onto main and push main.

Here is a set of commands that will achieve the above set of actions (remember to use git status often in between other Git commands to double check which branch you are on and its status):

$ git push -u origin style-fixes
$ git checkout develop
$ git merge style-fixes
$ git push origin develop
$ git checkout main
$ git merge develop
$ git push origin main

Typical Code Development Cycle

What you’ve done in the exercises in this episode mimics a typical software development workflow - you work locally on code on a feature branch, test it to make sure it works correctly and as expected, then record your changes using version control and share your work with others via a centrally backed-up repository. Other team members work on their feature branches in parallel and similarly share their work with colleagues for discussions. Different feature branches from around the team get merged onto the development branch, often in small and quick development cycles. After further testing and verifying that no code has been broken by the new features - the development branch gets merged onto the stable main branch, where new features finally resurface to end-users in bigger “software release” cycles.

Key Points

  • Always assume that someone else will read your code at a later date, including yourself.

  • Community coding conventions help you create more readable software projects that are easier to contribute to.

  • Python Enhancement Proposals (or PEPs) describe a recommended convention or specification for how to do something in Python.

  • Style checking to ensure code conforms to coding conventions is often part of IDEs.

  • Consistency with the style guide is important - whichever style you choose.


Verifying Code Style Using Linters

Overview

Teaching: 15 min
Exercises: 10 min
Questions
  • What tools can help with maintaining a consistent code style?

  • How can we automate code style checking?

Objectives
  • Use code linting tools to verify a program’s adherence to a Python coding style convention.

Verifying Code Style Using Linters

We’ve seen how we can use PyCharm to help us format our Python code in a consistent style. This aids reusability, since consistent-looking code is easier to read, understand and therefore modify. We can also use tools called code linters to identify consistency issues in a report-style output. Linters analyse source code to identify and report on stylistic and even programming errors. Let’s look at one of the most widely used of these, called Pylint.

First, let’s ensure we are on the style-fixes branch once again.

$ git checkout style-fixes

Pylint is just a Python package so we can install it in our virtual environment using:

$ pip3 install pylint
$ pylint --version

We should see the version of Pylint, something like:

pylint 2.13.3
...

We should also update our requirements.txt with this new addition:

$ pip3 freeze > requirements.txt

Pylint is a command-line tool that can help our code in many ways:

Pylint can also identify code smells.

How Does Code Smell?

There are many ways that code can exhibit bad design whilst not breaking any rules and working correctly. A code smell is a characteristic that indicates that there is an underlying problem with source code, e.g. large classes or methods, methods with too many parameters, duplicated statements in both if and else blocks of conditionals, etc. They aren’t functional errors in the code, but rather are certain structures that violate principles of good design and impact design quality. They can also indicate that code is in need of maintenance and refactoring.
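As a small, made-up illustration of one such smell - duplicated statements in both branches of a conditional - consider:

def scale_measurement(value, apply_offset):
    # Code smell: the print statements are duplicated in both branches
    # and could simply be moved outside the conditional
    if apply_offset:
        print('Scaling measurement')
        result = value * 2 + 1
        print('Done')
    else:
        print('Scaling measurement')
        result = value * 2
        print('Done')
    return result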

The phrase has its origins in Chapter 3 “Bad smells in code” by Kent Beck and Martin Fowler in Fowler, Martin (1999). Refactoring. Improving the Design of Existing Code. Addison-Wesley. ISBN 0-201-48567-2.

Pylint recommendations are given as warnings or errors, and Pylint also scores the code with an overall mark. We can look at a specific file (e.g. inflammation-analysis.py), or a package (e.g. inflammation). Let’s look at our inflammation package and code inside it (namely models.py and views.py). From the project root do:

$ pylint inflammation

You should see an output similar to the following:

************* Module inflammation.models
inflammation/models.py:5:82: C0303: Trailing whitespace (trailing-whitespace)
inflammation/models.py:6:66: C0303: Trailing whitespace (trailing-whitespace)
inflammation/models.py:34:0: C0305: Trailing newlines (trailing-newlines)
************* Module inflammation.views
inflammation/views.py:4:0: W0611: Unused numpy imported as np (unused-import)

------------------------------------------------------------------
Your code has been rated at 8.00/10 (previous run: 8.00/10, +0.00)

Your own outputs of the above commands may vary depending on how you have implemented and fixed the code in previous exercises and the coding style you have used.

The five-character codes, such as C0303, are unique identifiers for warnings, with the first character indicating the type of warning. There are five different types of warnings that Pylint looks for, and you can get a summary of them by doing:

$ pylint --long-help

Near the end you’ll see:

  Output:
    Using the default text output, the message format is :
    MESSAGE_TYPE: LINE_NUM:[OBJECT:] MESSAGE
    There are 5 kind of message types :
    * (C) convention, for programming standard violation
    * (R) refactor, for bad code smell
    * (W) warning, for python specific problems
    * (E) error, for probable bugs in the code
    * (F) fatal, if an error occurred which prevented pylint from doing
    further processing.

So for an example of a Pylint Python-specific warning, see the “W0611: Unused numpy imported as np (unused-import)” warning.

It is important to note that while tools such as Pylint are great at giving you a starting point to consider how to improve your code, they won’t find everything that may be wrong with it.
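
Conversely, Pylint will sometimes flag something you have deliberately chosen to keep. Rather than ignoring the whole report, individual messages can be suppressed - for example with an inline pragma comment. This is just a minimal sketch; see pylint --long-help and the Pylint documentation for the full range of options, such as the --disable command-line flag and a .pylintrc configuration file:

# Suppress a single message on one line, using either its name or its code (e.g. W0611):
import numpy as np  # pylint: disable=unused-import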

How Does Pylint Calculate the Score?

The Python formula used is (with the variables representing numbers of each type of infraction and statement indicating the total number of statements):

10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)

For example, with a total of 31 statements of models.py and views.py, with a count of the errors shown above, we get a score of 8.00. Note whilst there is a maximum score of 10, given the formula, there is no minimum score - it’s quite possible to get a negative score!
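
As a quick sanity check of the formula, here is a minimal sketch with made-up counts (not the counts from the run above), showing how heavily errors are weighted:

error, warning, refactor, convention = 2, 3, 1, 4  # hypothetical message counts
statement = 50                                     # hypothetical number of statements

score = 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
print(score)  # 6.4 - each error costs five times as much as any other message type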

Exercise: Further Improve Code Style of Our Project

Select and fix a few of the issues with our code that Pylint detected. Make sure you do not break the rest of the code in the process and that the code still runs. After making any changes, run Pylint again to verify you’ve resolved these issues.

Make sure you commit and push requirements.txt and any file with further code style improvements you did and merge onto your development and main branches.

$ git add requirements.txt
$ git commit -m "Added Pylint library"
$ git push origin style-fixes
$ git checkout develop
$ git merge style-fixes
$ git push origin develop
$ git checkout main
$ git merge develop
$ git push origin main

Optional Exercise: Improve Code Style of Your Other Python Projects

If you have a Python project you are working on or you worked on in the past, run it past Pylint to see what issues with your code are detected, if any.

It is possible to automate these kinds of code checks with GitHub’s Continuous Integration service GitHub Actions - we will come back to automated linting in the episode on “Diagnosing Issues and Improving Robustness”.

Key Points

  • Use linting tools on the command line (or via continuous integration) to automatically check your code style.


Section 2: Ensuring Correctness of Software at Scale

Overview

Teaching: 5 min
Exercises: 0 min
Questions
  • What should we do to ensure our code is correct?

Objectives
  • Introduce the testing tools, techniques, and infrastructure that will be used in this section.

We’ve just set up a suitable environment for the development of our software project and are ready to start coding. However, we want to make sure that the new code we contribute to the project is actually correct and is not breaking any of the existing code. So, in this section, we’ll look at testing approaches that can help us ensure that the software we write is behaving as intended, and how we can diagnose and fix issues once faults are found. Using such approaches requires us to change our practice of development. This can take time, but potentially saves us considerable time in the medium to long term by allowing us to more comprehensively and rapidly find such faults, as well as giving us greater confidence in the correctness of our code - so we should try and employ such practices early on. We will also make use of techniques and infrastructure that allow us to do this in a scalable, automated and more performant way as our codebase grows.

Tools for scaled software testing

In this section we will:

Key Points

  • Using testing requires us to change our practice of code development, but saves time in the long run by allowing us to more comprehensively and rapidly find faults in code, as well as giving us greater confidence in the correctness of our code.

  • The use of test techniques and infrastructures such as parameterisation and Continuous Integration can help scale and further automate our testing process.


Automatically Testing Software

Overview

Teaching: 30 min
Exercises: 20 min
Questions
  • Does the code we develop work the way it should do?

  • Can we (and others) verify these assertions for themselves?

  • To what extent are we confident of the accuracy of results that appear in publications?

Objectives
  • Explain the reasons why testing is important

  • Describe the three main types of tests and what each are used for

  • Implement and run unit tests to verify the correct behaviour of program functions

Introduction

Being able to demonstrate that a process generates the right results is important in any field of research, whether it’s software generating those results or not. So when writing software we need to ask ourselves some key questions:

If we are unable to demonstrate that our software fulfills these criteria, why would anyone use it? Having well-defined tests for our software is useful for this, but manually testing software can prove an expensive process.

Automation can help, and automation where possible is a good thing - it enables us to define a potentially complex process in a repeatable way that is far less prone to error than manual approaches. Once defined, automation can also save us a lot of effort, particularly in the long run. In this episode we’ll look into techniques of automated testing to improve the predictability of a software change, make development more productive, and help us produce code that works as expected and produces desired results.

What Is Software Testing?

For the sake of argument, if each line we write has a 99% chance of being right, then a 70-line program will be wrong more than half the time. We need to do better than that, which means we need to test our software to catch these mistakes.
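
That “more than half the time” figure comes straight from multiplying the per-line probabilities together - a quick back-of-the-envelope check:

p_all_lines_correct = 0.99 ** 70
print(p_all_lines_correct)      # ~0.495 - chance the whole program is right
print(1 - p_all_lines_correct)  # ~0.505 - chance at least one line is wrong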

We can and should extensively test our software manually, and manual testing is well-suited to testing aspects such as graphical user interfaces and reconciling visual outputs against inputs. However, even with a good test plan, manual testing is very time consuming and prone to error. Another style of testing is automated testing, where we write code that tests the functions of our software. Since computers are very good and efficient at automating repetitive tasks, we should take advantage of this wherever possible.

There are three main types of automated tests:

For the purposes of this course, we’ll focus on unit tests. But the principles and practices we’ll talk about can be built on and applied to the other types of tests too.

Set Up a New Feature Branch for Writing Tests

We’re going to look at how to run some existing tests and also write some new ones, so let’s ensure we’re initially on the develop branch we created earlier. Then we’ll create a new feature branch called test-suite off the develop branch (a test suite being a common term for a set of tests), which we’ll use for our test writing work:

$ git checkout develop
$ git branch test-suite
$ git checkout test-suite

Good practice is to write our tests around the same time we write our code on a feature branch. But since the code already exists, we’re creating a feature branch for just these extra tests. Git branches are designed to be lightweight, and where necessary, transient, and use of branches for even small bits of work is encouraged.

Later on, once we’ve finished writing these tests and are convinced they work properly, we’ll merge our test-suite branch back into develop.

Inflammation Data Analysis

Let’s go back to our patient inflammation software project. Recall that it is based on a clinical trial of inflammation in patients who have been given a new treatment for arthritis. There are a number of datasets in the data directory recording inflammation information in patients (each file representing a different trial), each stored in comma-separated values (CSV) format: each row holds information for a single patient, and the columns represent successive days when inflammation was measured.

Let’s take a quick look at the data now from within the Python command line console. Change directory to the repository root (which should be in your home directory ~/python-intermediate-inflammation), ensure you have your virtual environment activated in your command line terminal (particularly if opening a new one), and then start the Python console by invoking the Python interpreter without any parameters, e.g.:

$ cd ~/python-intermediate-inflammation
$ source venv/bin/activate
$ python3

The last command will start the Python console within your shell, which enables us to execute Python commands interactively. Inside the console enter the following:

import numpy as np
data = np.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
data.shape
(60, 40)

The data in this case is two-dimensional - it has 60 rows (one for each patient) and 40 columns (one for each day). Each cell in the data represents an inflammation reading on a given day for a patient.

Our patient inflammation application has a number of statistical functions held in inflammation/models.py: daily_mean(), daily_max() and daily_min(), for calculating the mean average, the maximum, and the minimum values for a given number of rows in our data. For example, the daily_mean() function looks like this:

def daily_mean(data):
    """Calculate the daily mean of a 2D inflammation data array for each day.

    :param data: A 2D data array with inflammation data (each row contains measurements for a single patient across all days).
    :returns: An array of mean values of measurements for each day.
    """
    return np.mean(data, axis=0)

Here, we use NumPy’s np.mean() function to calculate the mean vertically across the data (denoted by axis=0), which is then returned from the function. So, if data was a NumPy array of three rows like…

[[1, 2],
 [3, 4],
 [5, 6]]

…the function would return a 1D NumPy array of [3, 4] - each value representing the mean of each column (which are, coincidentally, the same values as the second row in the above data array).

To show this working with our patient data, we can use the function like this, passing the first four patient rows to the function in the Python console:

from inflammation.models import daily_mean

daily_mean(data[0:4])

Note we use a different form of import here - only importing the daily_mean function from our models instead of everything. This also has the effect that we can refer to the function using only its name, without needing to include the module name too (i.e. inflammation.models.daily_mean()).

The above code will return the mean inflammation for each day column across the first four patients (as a 1D NumPy array of shape (40,)):

array([ 0.  ,  0.5 ,  1.5 ,  1.75,  2.5 ,  1.75,  3.75,  3.  ,  5.25,
        6.25,  7.  ,  7.  ,  7.  ,  8.  ,  5.75,  7.75,  8.5 , 11.  ,
        9.75, 10.25, 15.  ,  8.75,  9.75, 10.  ,  8.  , 10.25,  8.  ,
        5.5 ,  8.  ,  6.  ,  5.  ,  4.75,  4.75,  4.  ,  3.25,  4.  ,
        1.75,  2.25,  0.75,  0.75])

The other statistical functions are similar. Note that in real situations functions we write are often likely to be more complicated than these, but simplicity here allows us to reason about what’s happening - and what we need to test - more easily.

Let’s now look into how we can test each of our application’s statistical functions to ensure they are functioning correctly.

Writing Tests to Verify Correct Behaviour

One Way to Do It?

One way to test our functions would be to write a series of checks or tests, each executing a function we want to test with known inputs against known valid results, and throw an error if we encounter a result that is incorrect. So, referring back to our simple daily_mean() example above, we could use [[1, 2], [3, 4], [5, 6]] as an input to that function and check whether the result equals [3, 4]:

import numpy.testing as npt

test_input = np.array([[1, 2], [3, 4], [5, 6]])
test_result = np.array([3, 4])
npt.assert_array_equal(daily_mean(test_input), test_result)

So we use the assert_array_equal() function - part of NumPy’s testing library - to test that our calculated result is the same as our expected result. This function explicitly checks the array’s shape and elements are the same, and throws an AssertionError if they are not. In particular, note that we can’t just use == or other Python equality methods, since these won’t work properly with NumPy arrays in all cases.
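
To see why plain == is not enough here: comparing two NumPy arrays with == gives an element-wise result (another array), which Python cannot interpret as a single True/False value. A quick illustration:

import numpy as np

a = np.array([3, 4])
b = np.array([3, 4])

print(a == b)  # [ True  True] - an array of results, not a single boolean
# assert a == b would raise:
# ValueError: The truth value of an array with more than one element is ambiguous.
assert (a == b).all()  # works, but gives far less helpful failure messages than assert_array_equal()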

We could then add to this with other tests that use and test against other values, and end up with something like:

test_input = np.array([[2, 0], [4, 0]])
test_result = np.array([2, 0])
npt.assert_array_equal(daily_mean(test_input), test_result)

test_input = np.array([[0, 0], [0, 0], [0, 0]])
test_result = np.array([0, 0])
npt.assert_array_equal(daily_mean(test_input), test_result)

test_input = np.array([[1, 2], [3, 4], [5, 6]])
test_result = np.array([3, 4])
npt.assert_array_equal(daily_mean(test_input), test_result)

However, if we were to enter these in this order, we would get the following after the first test:

...
AssertionError:
Arrays are not equal

Mismatched elements: 1 / 2 (50%)
Max absolute difference: 1.
Max relative difference: 0.5
 x: array([3., 0.])
 y: array([2, 0])

This tells us that one element between our generated and expected arrays doesn’t match, and shows us the different arrays.

We could put these tests in a separate script to automate the running of these tests. But a Python script halts at the first failed assertion, so the second and third tests aren’t run at all. It would be more helpful if we could get data from all of our tests every time they’re run, since the more information we have, the faster we’re likely to be able to track down bugs. It would also be helpful to have some kind of summary report: if our set of tests - known as a test suite - includes thirty or forty tests (as it well might for a complex function or library that’s widely used), we’d like to know how many passed or failed.

Going back to our failed first test, what was the issue? As it turns out, the test itself was incorrect, and should have read:

test_input = np.array([[2, 0], [4, 0]])
test_result = np.array([3, 0])
npt.assert_array_equal(daily_mean(test_input), test_result)

Which highlights an important point: as well as making sure our code is returning correct answers, we also need to ensure the tests themselves are correct. Otherwise, we may go on to “fix” our code to match an incorrect test, producing results that only appear to be correct. So a good rule is to keep tests simple enough to understand, so we can reason about the correctness of our tests as well as our code. Otherwise, our tests hold little value.

Using a Testing Framework

Keeping these things in mind, here’s a different approach that builds on the ideas we’ve seen so far but uses a unit testing framework. In such a framework we define the tests we want to run as functions, and the framework automatically runs each of these functions in turn, summarising the outputs. And unlike our previous approach, it will run every test regardless of any encountered test failures.

Most people don’t enjoy writing tests, so if we want them to actually do it, it must be easy to:

Test results must also be reliable. If a testing tool says that code is working when it’s not, or reports problems when there actually aren’t any, people will lose faith in it and stop using it.

Look at tests/test_models.py:

"""Tests for statistics functions within the Model layer."""

import numpy as np
import numpy.testing as npt


def test_daily_mean_zeros():
    """Test that mean function works for an array of zeros."""
    from inflammation.models import daily_mean

    test_input = np.array([[0, 0],
                           [0, 0],
                           [0, 0]])
    test_result = np.array([0, 0])

    # Need to use NumPy testing functions to compare arrays
    npt.assert_array_equal(daily_mean(test_input), test_result)


def test_daily_mean_integers():
    """Test that mean function works for an array of positive integers."""
    from inflammation.models import daily_mean

    test_input = np.array([[1, 2],
                           [3, 4],
                           [5, 6]])
    test_result = np.array([3, 4])

    # Need to use NumPy testing functions to compare arrays
    npt.assert_array_equal(daily_mean(test_input), test_result)
...

Here, although we have specified two of our previous manual tests as separate functions, they run the same assertions. Each of these test functions, in a general sense, is called a test case - a specification of:

Also, we’re defining each of these things for a test case that can be run independently and requires no manual intervention.

Going back to our list of requirements, how easy is it to run these tests? We can do this using a Python package called pytest. Pytest is a testing framework that allows you to write test cases using Python. You can use it to test things like Python functions, database operations, or even things like service APIs - essentially anything that has inputs and expected outputs. We’ll be using Pytest to write unit tests, but what you learn can scale to more complex functional testing for applications or libraries.

What About Unit Testing in Other Languages?

Other unit testing frameworks exist for Python, including Nose2 and Unittest, and the approach to unit testing can be translated to other languages as well, e.g. pFUnit for Fortran, JUnit for Java (the original unit testing framework), Catch or gtest for C++, etc.

Why Use pytest over unittest?

We could alternatively use another Python unit test framework, unittest, which has the advantage of being installed by default as part of Python. This is certainly a solid and established option, however pytest has many distinct advantages, particularly for learning, including:

  • unittest requires additional knowledge of object-oriented frameworks (covered later in the course) to write unit tests, whereas in pytest tests are written as simpler functions, so it is easier to learn
  • Being written using simpler functions, pytest’s scripts are more concise and contain less boilerplate, and thus are easier to read
  • pytest output, particularly in regard to test failure output, is generally considered more helpful and readable
  • pytest has a vast ecosystem of plugins available if ever you need additional testing functionality
  • unittest-style unit tests can be run from pytest out of the box!

A common challenge, particularly at the intermediate level, is the selection of a suitable tool from many alternatives for a given task. Once you’ve become accustomed to object-oriented programming you may find unittest a better fit for a particular project or team, so you may want to revisit it at a later date!

Installing Pytest

If you have already installed the pytest package in your virtual environment, you can skip this step. Otherwise, as we have seen, we have a couple of options for installing external libraries:

  1. via PyCharm (see “Adding an External Library” section in “Integrated Software Development Environments” episode), or
  2. via the command line (see the “Installing External Libraries in an Environment With pip” section in the “Virtual Environments For Software Development” episode).

To do it via the command line - exit the Python console first (either with Ctrl-D or by typing exit()), then do:

$ pip3 install pytest

Whether we do this via PyCharm or the command line, the results are exactly the same: our virtual environment will now have the pytest package installed for use.

Running Tests

Now we can run these tests using pytest:

$ python -m pytest tests/test_models.py

Here, we use -m to invoke the installed pytest module, and specify the tests/test_models.py file to run the tests in that file explicitly.

Why Run Pytest Using python -m and Not pytest ?

Another way to run pytest is via its own command, so we could try to use pytest tests/test_models.py on the command line instead, but this would lead to a ModuleNotFoundError: No module named 'inflammation'. This is because using the python -m pytest method adds the current directory to its list of directories to search for modules, whilst using pytest does not - the inflammation subdirectory’s contents are not ‘seen’, hence the ModuleNotFoundError. There are ways to get around this with various methods, but we’ve used python -m for simplicity.

============================================== test session starts =====================================================
platform darwin -- Python 3.9.6, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/alex/python-intermediate-inflammation
plugins: anyio-3.3.4
collected 2 items

tests/test_models.py ..                                                                                           [100%]

=============================================== 2 passed in 0.79s ======================================================

Pytest looks for files whose names start with ‘test_’ and, within them, functions whose names start with ‘test_’, and runs each one. Notice the .. after our test script:

So if we have many tests, we essentially get a report indicating which tests succeeded or failed. Going back to our list of requirements (the bullets under Using a Testing Framework), do we think these results are easy to understand?

Exercise: Write Some Unit Tests

We already have a couple of test cases in tests/test_models.py that test the daily_mean() function. Looking at inflammation/models.py, write at least two new test cases that test the daily_max() and daily_min() functions, adding them to tests/test_models.py. Here are some hints:

  • You could choose to format your functions very similarly to daily_mean(), defining test input and expected result arrays followed by the equality assertion.
  • Try to choose cases that are suitably different, and remember that these functions take a 2D array and return a 1D array with each element the result of analysing each column of the data.

Once added, run all the tests again with python -m pytest tests/test_models.py, and you should also see your new tests pass.

Solution

...
def test_daily_max():
    """Test that max function works for an array of positive integers."""
    from inflammation.models import daily_max

    test_input = np.array([[4, 2, 5],
                           [1, 6, 2],
                           [4, 1, 9]])
    test_result = np.array([4, 6, 9])

    npt.assert_array_equal(daily_max(test_input), test_result)


def test_daily_min():
    """Test that min function works for an array of positive and negative integers."""
    from inflammation.models import daily_min

    test_input = np.array([[ 4, -2, 5],
                           [ 1, -6, 2],
                           [-4, -1, 9]])
    test_result = np.array([-4, -6, 2])

    npt.assert_array_equal(daily_min(test_input), test_result)
...

The big advantage is that as our code develops we can update our test cases and commit them back, ensuring that we (and others) always have a set of tests to verify our code at each step of development. This way, when we implement a new feature, we can check a) that the feature works using a test we write for it, and b) that developing the new feature doesn’t break any existing functionality.

What About Testing for Errors?

There are some cases where seeing an error is actually the correct behaviour, and Python allows us to test for exceptions. Add this test in tests/test_models.py:

import pytest
...
def test_daily_min_string():
    """Test for TypeError when passing strings"""
    from inflammation.models import daily_min

    with pytest.raises(TypeError):
        error_expected = daily_min([['Hello', 'there'], ['General', 'Kenobi']])

Note that you need to import the pytest library at the top of the test_models.py file with import pytest so that you can use pytest’s raises() function.

Run all your tests as before.

Since we’ve installed pytest to our environment, we should also regenerate our requirements.txt:

$ pip3 freeze > requirements.txt

Finally, let’s commit our new test_models.py file, requirements.txt file, and test cases to our test-suite branch, and push this new branch and all its commits to GitHub:

$ git add requirements.txt tests/test_models.py
$ git commit -m "Add initial test cases for daily_max() and daily_min()"
$ git push -u origin test-suite

Why Should We Test Invalid Input Data?

Testing the behaviour of our code against both valid and invalid inputs is a really good idea and is known as data validation. Even if you are developing command line software that cannot be exploited by malicious data entry, testing behaviour against invalid inputs prevents the generation of erroneous results that could lead to serious misinterpretation (as well as saving time and compute cycles, which may be expensive for longer-running applications). It is generally best not to assume your users’ inputs will always be rational.

Key Points

  • The three main types of automated tests are unit tests, functional tests and regression tests.

  • We can write unit tests to verify that functions generate expected output given a set of specific inputs.

  • It should be easy to add or change tests, understand and run them, and understand their results.

  • We can use a unit testing framework like Pytest to structure and simplify the writing of tests in Python.

  • We should test for expected errors in our code.

  • Testing program behaviour against both valid and invalid inputs is important and is known as data validation.


Scaling Up Unit Testing

Overview

Teaching: 10 min
Exercises: 5 min
Questions
  • How can we make it easier to write lots of tests?

  • How can we know how much of our code is being tested?

Objectives
  • Use parameterisation to automatically run tests over a set of inputs

  • Use code coverage to understand how much of our code is being tested using unit tests

Introduction

We’re starting to build up a number of tests that test the same function, but just with different parameters. However, continuing to write a new function for every single test case isn’t likely to scale well as our development progresses. How can we make our job of writing tests more efficient? And importantly, as the number of tests increases, how can we determine how much of our code base is actually being tested?

Parameterising Our Unit Tests

So far, we’ve been writing a single function for every new test we need. But when we simply want to use the same test code but with different data for another test, it would be great to be able to specify multiple sets of data to use with the same test code. Test parameterisation gives us this.

So instead of writing a separate function for each different test, we can parameterise the tests with multiple test inputs. For example, in tests/test_models.py let us rewrite the test_daily_mean_zeros() and test_daily_mean_integers() into a single test function:

@pytest.mark.parametrize(
    "test, expected",
    [
        ([ [0, 0], [0, 0], [0, 0] ], [0, 0]),
        ([ [1, 2], [3, 4], [5, 6] ], [3, 4]),
    ])
def test_daily_mean(test, expected):
    """Test mean function works for array of zeroes and positive integers."""
    from inflammation.models import daily_mean
    npt.assert_array_equal(daily_mean(np.array(test)), np.array(expected))

Here, we use Pytest’s mark capability to add metadata to this specific test - in this case, marking that it’s a parameterised test. The parametrize() function is actually a Python decorator. A decorator, when applied to a function, adds some functionality to it when it is called; here, it lets us specify multiple sets of input and expected output, so the test function is called over each of these inputs automatically when this test is run.

We specify these as arguments to the parametrize() decorator: firstly, the names of the arguments that will be passed to the function (test, expected), and secondly, the actual values that correspond to each of these names - the input data (the test argument) and the expected result (the expected argument). In this case, we are passing in two tests to test_daily_mean(), which will be run sequentially.

So our first test will run daily_mean() on [ [0, 0], [0, 0], [0, 0] ] (our test argument), and check to see if it equals [0, 0] (our expected argument). Similarly, our second test will run daily_mean() with [ [1, 2], [3, 4], [5, 6] ] and check it produces [3, 4].

The big plus here is that we don’t need to write separate functions for each of the tests - our test code can remain compact and readable as we write more tests, and adding further test cases scales better as our code becomes more complex.

Exercise: Write Parameterised Unit Tests

Rewrite your test functions for daily_max() and daily_min() to be parameterised, adding in new test cases for each of them.

Solution

...
@pytest.mark.parametrize(
    "test, expected",
    [
        ([ [0, 0, 0], [0, 0, 0], [0, 0, 0] ], [0, 0, 0]),
        ([ [4, 2, 5], [1, 6, 2], [4, 1, 9] ], [4, 6, 9]),
        ([ [4, -2, 5], [1, -6, 2], [-4, -1, 9] ], [4, -1, 9]),
    ])
def test_daily_max(test, expected):
    """Test max function works for zeroes, positive integers, mix of positive/negative integers."""
    from inflammation.models import daily_max
    npt.assert_array_equal(daily_max(np.array(test)), np.array(expected))


@pytest.mark.parametrize(
    "test, expected",
    [
        ([ [0, 0, 0], [0, 0, 0], [0, 0, 0] ], [0, 0, 0]),
        ([ [4, 2, 5], [1, 6, 2], [4, 1, 9] ], [1, 1, 2]),
        ([ [4, -2, 5], [1, -6, 2], [-4, -1, 9] ], [-4, -6, 2]),
    ])
def test_daily_min(test, expected):
    """Test min function works for zeroes, positive integers, mix of positive/negative integers."""
    from inflammation.models import daily_min
    npt.assert_array_equal(daily_min(np.array(test)), np.array(expected))
...

Try them out!

Let’s commit our revised test_models.py file and test cases to our test-suite branch (but don’t push them to the remote repository just yet!):

$ git add tests/test_models.py
$ git commit -m "Add parameterisation mean, min, max test cases"

Code Coverage - How Much of Our Code is Tested?

Pytest can’t think of test cases for us. We still have to decide what to test and how many tests to run. Our best guide here is economics: we want the tests that are most likely to give us useful information that we don’t already have. For example, if daily_mean(np.array([[2, 0], [4, 0]])) works, there’s probably not much point testing daily_mean(np.array([[3, 0], [4, 0]])), since it’s hard to think of a bug that would show up in one case but not in the other.

Now, we should try to choose tests that are as different from each other as possible, so that we force the code we’re testing to execute in all the different ways it can - to ensure our tests have a high degree of code coverage.

A simple way to check the code coverage for a set of tests is to ask pytest to tell us how many statements in our code are being tested. We can find this out by installing an additional Python package, pytest-cov, into our virtual environment, which Pytest then uses:

$ pip3 install pytest-cov
$ python -m pytest --cov=inflammation.models tests/test_models.py

So here, we pass the additional named argument --cov to pytest to specify the code to analyse for test coverage.

============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/alex/python-intermediate-inflammation
plugins: anyio-3.3.4, cov-3.0.0
collected 9 items

tests/test_models.py .........                                            [100%]

---------- coverage: platform darwin, python 3.9.6-final-0 -----------
Name                     Stmts   Miss  Cover
--------------------------------------------
inflammation/models.py       9      1    89%
--------------------------------------------
TOTAL                        9      1    89%

============================== 9 passed in 0.26s ===============================

Here we can see that our tests are doing very well - 89% of statements in inflammation/models.py have been executed. But which statements are not being tested? The additional argument --cov-report term-missing can tell us:

$ python -m pytest --cov=inflammation.models --cov-report term-missing tests/test_models.py
...
Name                     Stmts   Miss  Cover   Missing
------------------------------------------------------
inflammation/models.py       9      1    89%   18
------------------------------------------------------
TOTAL                        9      1    89%
...

So there’s still one statement not being tested at line 18, and it turns out it’s in the function load_csv(). Here we should consider whether or not to write a test for this function, and, in general, any other functions that may not be tested. Of course, if there are hundreds or thousands of lines that are not covered it may not be feasible to write tests for them all. But we should prioritise the ones for which we write tests, considering how often they’re used, how complex they are, and importantly, the extent to which they affect our program’s results.
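
If we did decide to cover it, one option is to use pytest’s built-in tmp_path fixture to create a small CSV file on the fly. The sketch below assumes load_csv() simply takes a filename and returns the parsed data as a NumPy array - check its actual signature in inflammation/models.py before relying on this:

import numpy as np
import numpy.testing as npt


def test_load_csv(tmp_path):
    """Test that loading a tiny CSV file returns the expected array."""
    from inflammation.models import load_csv

    csv_file = tmp_path / 'mini.csv'  # tmp_path is a temporary directory provided by pytest
    csv_file.write_text('1,2\n3,4\n')

    npt.assert_array_equal(load_csv(str(csv_file)), np.array([[1, 2], [3, 4]]))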

Again, we should also update our requirements.txt file with our latest package environment, which now also includes pytest-cov, and commit it:

$ pip3 freeze > requirements.txt
$ cat requirements.txt

You’ll notice pytest-cov and coverage have been added. Let’s commit this file and push our new branch to GitHub:

$ git add requirements.txt
$ git commit -m "Add coverage support"
$ git push origin test-suite

What about Testing Against Indeterminate Output?

What if your implementation depends on a degree of random behaviour? This can be desired within a number of applications, particularly in simulations (for example, molecular simulations) or other stochastic behavioural models of complex systems. So how can you test against such systems if the outputs are different when given the same inputs?

One way is to remove the randomness during testing. For those portions of your code that use a language feature or library to generate a random number, you can instead produce a known sequence of numbers when testing, to make the results deterministic and hence easier to test against. You could encapsulate this different behaviour in separate functions, methods, or classes and call the appropriate one depending on whether you are testing or not. This is essentially a type of mocking, where you are creating a “mock” version that mimics some behaviour for the purposes of testing.
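
As a small illustration of this idea using Python’s standard unittest.mock module, we can temporarily replace the random number generator with a mock that returns values of our choosing (noisy_measurement() is a made-up function purely for this example):

import random
from unittest import mock


def noisy_measurement():
    """Hypothetical function whose result depends on randomness."""
    return 10 + random.random()


def test_noisy_measurement():
    # Replace random.random with a mock that returns a fixed sequence of values,
    # making the 'random' behaviour deterministic for the duration of the test.
    with mock.patch('random.random', side_effect=[0.0, 0.5]):
        assert noisy_measurement() == 10.0
        assert noisy_measurement() == 10.5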

Another way is to control the randomness during testing to provide results that are deterministic - the same each time. Implementations of randomness in computing languages, including Python, are actually never truly random - they are pseudorandom: the sequence of ‘random’ numbers is typically generated using a mathematical algorithm. A seed value is used to initialise an implementation’s random number generator, and from that point, the sequence of numbers is actually deterministic. Many implementations just use the system time as the default seed, but you can set your own. By doing so, the generated sequence of numbers is the same, e.g. using Python’s random library to randomly select a sample of ten numbers from a sequence between 0-99:

import random

random.seed(1)
print(random.sample(range(0, 100), 10))
random.seed(1)
print(random.sample(range(0, 100), 10))

Will produce:

[17, 72, 97, 8, 32, 15, 63, 57, 60, 83]
[17, 72, 97, 8, 32, 15, 63, 57, 60, 83]

So since your program’s randomness is essentially eliminated, your tests can be written to test against the known output. The trick, of course, is to ensure that the output being tested against is definitively correct!

The other thing you can do, while keeping the random behaviour, is to test the output data against expected constraints of that output. For example, if you know that all data should be within particular ranges, or within a particular statistical distribution type (e.g. normal distribution over time), you can test against that, conducting multiple test runs that take advantage of the randomness to fill the known “space” of expected results. Note that this isn’t as precise or complete, and bear in mind this could mean you need to run a lot of tests which may take considerable time.
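
For instance, with a hypothetical stochastic model we may not know the exact output, but we do know the range it must fall within, so we can run it many times and assert that constraint holds:

import random


def simulated_daily_reading():
    """Hypothetical stochastic model: an inflammation reading between 0 and 20."""
    return random.uniform(0, 20)


def test_simulated_readings_stay_in_range():
    readings = [simulated_daily_reading() for _ in range(1000)]
    assert all(0 <= reading <= 20 for reading in readings)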

Test Driven Development

In the previous episode we learnt how to create unit tests to make sure our code is behaving as we intended. Test Driven Development (TDD) is an extension of this. If we can define a set of tests for everything our code needs to do, then why not treat those tests as the specification?

When doing Test Driven Development, we write our tests first and only write enough code to make the tests pass. We tend to do this at the level of individual features - define the feature, write the tests, write the code. The main advantages are:

You may also see this process called Red, Green, Refactor: ‘Red’ for the failing tests, ‘Green’ for the code that makes them pass, then ‘Refactor’ (tidy up) the result.

For the challenges from here on, try to first convert the specification into a unit test, then try writing the code to pass the test.
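
As a toy sketch of the Red-Green-Refactor cycle (daily_range() is a hypothetical function invented just for this illustration, not part of our project), we would first write the failing test that captures the specification:

# 'Red': this test fails because daily_range() does not exist yet.
import numpy as np
import numpy.testing as npt


def test_daily_range():
    """Specification: daily_range() returns max minus min inflammation for each day."""
    from inflammation.models import daily_range

    test_input = np.array([[1, 5],
                           [3, 9]])
    npt.assert_array_equal(daily_range(test_input), np.array([2, 4]))

Only then would we add the simplest implementation to inflammation/models.py that turns the test ‘Green’ (e.g. return np.max(data, axis=0) - np.min(data, axis=0)), before tidying up (‘Refactor’).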

Limits to Testing

Like any other piece of experimental apparatus, a complex program requires a much higher investment in testing than a simple one. Putting it another way, a small script that is only going to be used once, to produce one figure, probably doesn’t need separate testing: its output is either correct or not. A linear algebra library that will be used by thousands of people in twice that number of applications over the course of a decade, on the other hand, definitely does. The key is to identify and prioritise testing against what will most affect the code’s ability to generate accurate results.

It’s also important to remember that unit testing cannot catch every bug in an application, no matter how many tests you write. To mitigate this, manual testing is also important. Also remember to test using as much input data as you can, since very often code is developed and tested against the same small sets of data. Increasing the amount of data you test against - from numerous sources - gives you greater confidence that the results are correct.

Our software will inevitably increase in complexity as it develops. Using automated testing where appropriate can save us considerable time, especially in the long term, and allows others to verify against correct behaviour.

Key Points

  • We can assign multiple inputs to tests using parameterisation.

  • It’s important to understand the coverage of our tests across our code.

  • Writing unit tests takes time, so apply them where it makes the most sense.


Continuous Integration for Automated Testing

Overview

Teaching: 45 min
Exercises: 0 min
Questions
  • How can I automate the testing of my repository’s code in a way that scales well?

  • What can I do to make testing across multiple platforms easier?

Objectives
  • Describe the benefits of using Continuous Integration for further automation of testing

  • Enable GitHub Actions Continuous Integration for public open source repositories

  • Use continuous integration to automatically run unit tests and code coverage when changes are committed to a version control repository

  • Use a build matrix to specify combinations of operating systems and Python versions to run tests over

Introduction

So far we’ve been manually running our tests as we require. Once we’ve made a change, or added a new feature with accompanying tests, we can re-run our tests, giving ourselves (and others who wish to run them) increased confidence that everything is working as expected. Now we’re going to take further advantage of automation in a way that helps testing scale across a development team with very little overhead, using Continuous Integration.

What is Continuous Integration?

The automated testing we’ve done so far only takes into account the state of the repository we have on our own machines. In a software project involving multiple developers working and pushing changes on a repository, it would be great to know holistically how all these changes are affecting our codebase without everyone having to pull down all the changes and test them. If we also take into account the testing required on different target user platforms for our software and the changes being made to many repository branches, the effort required to conduct testing at this scale can quickly become intractable for a research project to sustain.

Continuous Integration (CI) aims to reduce this burden by further automation, and automation - wherever possible - helps us to reduce errors and makes predictable processes more efficient. The idea is that when a new change is committed to a repository, CI clones the repository, builds it if necessary, and runs any tests. Once complete, it presents a report to let you see what happened.

There are many CI infrastructures and services, free and paid for, and subject to change as they evolve their features. We’ll be looking at GitHub Actions - which unsurprisingly is available as part of GitHub.

Continuous Integration with GitHub Actions

A Quick Look at YAML

YAML is a text format used by GitHub Actions workflow files. It is also increasingly used for configuration files and storing other types of data, so it’s worth taking a bit of time looking into this file format.

YAML (a recursive acronym which stands for “YAML Ain’t Markup Language”) is a language designed to be human readable. A few basic things you need to know about YAML to get started with GitHub Actions are key-value pairs, arrays, maps and multi-line strings.

So firstly, YAML files are essentially made up of key-value pairs, in the form key: value, for example:

name: Kilimanjaro
height_metres: 5892
first_scaled_by: Hans Meyer

In general, you don’t need quotes for strings, but you can use them when you want to explicitly distinguish between numbers and strings, e.g. height_metres: "5892" would be a string, but in the above example it is an integer. It turns out Hans Meyer isn’t the only first ascender of Kilimanjaro, so one way to add this person as another value to this key is by using YAML arrays, like this:

first_scaled_by:
- Hans Meyer
- Ludwig Purtscheller

An alternative to this format for arrays is the following, which would have the same meaning:

first_scaled_by: [Hans Meyer, Ludwig Purtscheller]

If we wanted to express more information for one of these values we could use a feature known as maps (dictionaries/hashes), which allow us to define nested, hierarchical data structures, e.g.

...
height:
  value: 5892
  unit: metres
  measured:
    year: 2008
    by: Kilimanjaro 2008 Precise Height Measurement Expedition
...

So here, height itself is made up of three keys - value, unit, and measured - with the last of these being a nested map containing the keys year and by. Note the convention of using two spaces for indentation, instead of Python’s four.

We can also combine maps and arrays to describe more complex data. Let’s say we want to add more detail to our list of initial ascenders:

...
first_scaled_by:
- name: Hans Meyer
  date_of_birth: 22-03-1858
  nationality: German
- name: Ludwig Purtscheller
  date_of_birth: 22-03-1858
  nationality: Austrian

So here we have a YAML array of our two mountaineers, each with additional keys offering more information.

GitHub Actions also makes use of the | symbol to indicate a multi-line string that preserves new lines. For example:

shakespeare_couplet: |
  Good night, good night. Parting is such sweet sorrow
  That I shall say good night till it be morrow.

The key shakespeare_couplet would hold the full two-line string, preserving the new line after sorrow.

As we’ll see shortly, GitHub Actions workflows will use all of these.

Defining Our Workflow

With a GitHub repository there’s a way we can set up CI to run our tests automatically when we commit changes. Let’s do this now by adding a new file to our repository whilst on the test-suite branch. First, create the new directories .github/workflows:

$ mkdir -p .github/workflows

This directory is used specifically for GitHub Actions, allowing us to specify any number of workflows that can be run under a variety of conditions, also written using YAML. So let’s add a new YAML file called main.yml (note its extension is .yml, without the a) within the new .github/workflows directory:

name: CI

# We can specify which Github events will trigger a CI build
on: push

# now define a single job 'build' (but could define more)
jobs:

  build:

    # we can also specify the OS to run tests on
    runs-on: ubuntu-latest

    # a job is a seq of steps
    steps:

    # Next we need to check out our repository, and set up Python
    # A 'name' is just an optional label shown in the log - helpful to clarify progress - and can be anything
    - name: Checkout repository
      uses: actions/checkout@v2

    - name: Set up Python 3.9
      uses: actions/setup-python@v2
      with:
        python-version: "3.9"

    - name: Install Python dependencies
      run: |
        python3 -m pip install --upgrade pip
        pip3 install -r requirements.txt

    - name: Test with PyTest
      run: |
        python -m pytest --cov=inflammation.models tests/test_models.py

Note: be sure to create this file as main.yml within the newly created .github/workflows directory, or it won’t work!

So as well as giving our workflow a name - CI - we indicate with on that we want this workflow to run when we push commits to our repository. The workflow itself is made of a single job named build, and we could define any number of jobs after this one if we wanted, and each one would run in parallel.

Next, we define what our build job will do. With runs-on we first state which operating systems we want to use, in this case just Ubuntu for now. We’ll be looking at ways we can scale this up to testing on more systems later.

Lastly, we define the steps that our job will undertake in turn, to set up the job’s environment and run our tests. You can think of the job’s environment initially as a blank slate: much like a freshly installed machine (albeit virtual) with very little installed on it, we need to prepare it with what it needs to be able to run our tests. The steps are as follows:

What about other Actions?

Our workflow here uses standard GitHub Actions (indicated by actions/*). Beyond the standard set of actions, others are available via the GitHub Marketplace. It contains many third-party actions (as well as apps) that you can use with GitHub for many tasks across many programming languages, particularly for setting up environments for running tests, code analysis and other tools, setting up and using infrastructure (for things like Docker or Amazon’s AWS cloud), or even managing repository issues. You can even contribute your own.

Triggering a Build on GitHub Actions

Now if we commit and push this change a CI run will be triggered:

$ git add .github
$ git commit -m "Add GitHub Actions configuration"
$ git push

Since we are only committing the GitHub Actions configuration file to the test-suite branch for the moment, only the contents of this branch will be used for CI. We can pass this file upstream into other branches (i.e. via merges) when we’re happy it works, which will then allow the process to run automatically on these other branches. This again highlights the usefulness of the feature-branch model - we can work in isolation on a feature until it’s ready to be passed upstream without disrupting development on other branches, and in the case of CI, we’re starting to see its scaling benefits across a larger scale development team working across potentially many branches.

Checking Build Progress and Reports

Handily, we can see the progress of the build from our repository on GitHub by selecting the test-suite branch from the dropdown menu (which currently says main), and then selecting commits (located just above the code directory listing on the right, alongside the last commit message and a small image of a timer).

Continuous Integration with GitHub Actions - Initial Build

You’ll see a list of commits for this branch, and likely see an orange marker next to the latest commit (clicking on it yields Some checks haven’t completed yet) meaning the build is still in progress. This is a useful view, as over time, it will give you a history of commits, who did them, and whether the commit resulted in a successful build or not.

Hopefully after a while, the marker will turn into a green tick indicating a successful build. Clicking it gives you even more information about the build, and selecting the Details link takes you to a complete log of the build and its output.

Continuous Integration with GitHub Actions - Build Log

The logs are actually truncated; selecting the arrows next to the entries - which are the name labels we specified in the main.yml file - will expand them with more detail, including the output from the actions performed.

Continuous Integration with GitHub Actions - Build Details

GitHub Actions offers these continuous integration features as a completely free service for public repositories, and supplies 2000 build minutes a month across as many private repositories as you like. Paid plans are available too.

Scaling Up Testing Using Build Matrices

Now we have our CI configured and building, we can use a feature called build matrices which really shows the value of using CI to test at scale.

Suppose the intended users of our software use either Ubuntu, Mac OS, or Windows, and either have Python version 3.8, 3.9 or 3.10 installed, and we want to support all of these. Assuming we have a suitable test suite, it would take a considerable amount of time to set up testing platforms to run our tests across all these platform combinations. Fortunately, CI can do the hard work for us very easily.

Using a build matrix we can specify testing environments and parameters (such as operating system, Python version, etc.) and new jobs will be created that run our tests for each permutation of these.

Let’s see how this is done using GitHub Actions. To support this, we define a strategy as a matrix of operating systems and Python versions within build. We then use matrix.os and matrix.python-version to reference these configuration possibilities instead of using hardcoded values - replacing the runs-on and python-version parameters to refer to the values from the matrix. So, our .github/workflows/main.yml should look like the following:

...
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.8", "3.9", "3.10"]

    runs-on: ${{ matrix.os }}

...

    # a job is a seq of steps
    steps:

    # Next we need to check out our repository, and set up Python
    # A 'name' is just an optional label shown in the log - helpful to clarify progress - and can be anything
    - name: Checkout repository
      uses: actions/checkout@v2

    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}
...

The ${{ }} are used as a means to reference configuration values from the matrix. This way, every possible permutation of Python versions 3.8, 3.9, and 3.10 with the latest versions of Ubuntu, Mac OS and Windows operating systems will be tested and we can expect 9 build jobs in total.

Let’s commit and push this change and see what happens:

$ git add .github/workflows/main.yml
$ git commit -m "Add GA build matrix for os and Python version"
$ git push

If we go to our GitHub build now, we can see that a new job has been created for each permutation.

Continuous Integration with GitHub Actions - Build Matrix

Note all jobs running in parallel (up to the limit allowed by our account) which potentially saves us a lot of time waiting for testing results. Overall, this approach allows us to massively scale our automated testing across platforms we wish to test.

Merging Back to develop Branch

Now that we’re happy with our test suite, we can merge this work (which currently only exists on our test-suite branch) with our parent develop branch. Again, this reflects us working with impunity on a logical unit of work, involving multiple commits, on a separate feature branch until it’s ready to be escalated to the develop branch:

$ git checkout develop
$ git merge test-suite

Then, assuming no conflicts we can push these changes back to the remote repository as we’ve done before:

$ git push origin develop

Now these changes have migrated to our parent develop branch, develop will also inherit the configuration to run CI builds, so these will run automatically on this branch as well.

This highlights a big benefit of CI when you perform merges (and apply pull requests). As new branch code is merged into upstream branches like develop and main these newly integrated code changes are automatically tested together with existing code - which of course may also have changed in the meantime!

Key Points

  • Continuous Integration can run tests automatically to verify changes as code develops in our repository.

  • CI builds are typically triggered by commits pushed to a repository.

  • We need to write a configuration file to inform a CI service what to do for a build.

  • We can specify a build matrix to specify multiple platforms and programming language versions to test against

  • Builds can be enabled and configured separately for each branch.

  • We can run - and get reports from - different CI infrastructure builds simultaneously.


Diagnosing Issues and Improving Robustness

Overview

Teaching: 30 min
Exercises: 20 min
Questions
  • Once we know our program has errors, how can we locate them in the code?

  • How can we make our programs more resilient to failure?

Objectives
  • Use a debugger to explore behaviour of a running program

  • Describe and identify edge and corner test cases and explain why they are important

  • Apply error handling and defensive programming techniques to improve robustness of a program

  • Integrate linting tool style checking into a continuous integration job

Introduction

Unit testing can tell us something is wrong in our code and give a rough idea of where the error is by which test(s) are failing. But it does not tell us exactly where the problem is (i.e. what line of code), or how it came about. To give us a better idea of what is going on, we can:

But such approaches are often time consuming and sometimes not enough to fully pinpoint the issue. In complex programs, like simulation codes, we often need to get inside the code while it is running and explore. This is where using a debugger can be useful.

Setting the Scene

Let us add a new function called patient_normalise() to our inflammation example to normalise a given inflammation data array so that all entries fall between 0 and 1. (Make sure you create a new feature branch for this work off your develop branch.) To normalise each patient’s inflammation data we need to divide it by the maximum inflammation experienced by that patient. To do so, we can add the following code to inflammation/models.py:

def patient_normalise(data):
    """Normalise patient data from a 2D inflammation data array."""
    max = np.max(data, axis=0)
    return data / max[:, np.newaxis]

Note: there are intentional mistakes in the above code, which will be detected by further testing and code style checking below so bear with us for the moment!

In the code above, we first go row by row and find the maximum inflammation value for each patient and store these values in a 1-dimensional NumPy array max. We then want to use NumPy’s element-wise division, to divide each value in every row of inflammation data (belonging to the same patient) by the maximum value for that patient stored in the 1D array max. However, we cannot do that division automatically as data is a 2D array (of shape (60, 40)) and max is a 1D array (of shape (60, )), which means that their shapes are not compatible.

NumPy arrays of incompatible shapes

Hence, to make sure that we can perform this division and get the expected result, we need to convert max to be a 2D array by using the newaxis index operator to insert a new axis into max, making it a 2D array of shape (60, 1).

NumPy arrays' shapes after adding a new_axis

Now the division will give us the expected result. Even though the shapes are not identical, NumPy’s automatic broadcasting (adjustment of shapes) will make sure that the shape of the 2D max array is now “stretched” (“broadcast”) to match that of data - i.e. (60, 40), and element-wise division can be performed.

NumPy arrays' shapes after broadcasting

Broadcasting

The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Be careful, though, to understand how the arrays get stretched to avoid getting unexpected results.
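
As a quick standalone illustration (using a small 3×3 array rather than our real inflammation data), the following snippet shows how inserting a new axis changes which way the division is broadcast:

import numpy as np

data = np.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])      # shape (3, 3)
divisor = np.array([10, 20, 30])  # shape (3,)

# data / divisor divides each column by the matching element of divisor
print(data / divisor)

# divisor[:, np.newaxis] has shape (3, 1), so it is broadcast across the
# columns instead and each row is divided by its own element of divisor
print(data / divisor[:, np.newaxis])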

Note there is an assumption in this calculation that the minimum value we want is always zero. This is a sensible assumption for this particular application, since the zero value is a special case indicating that a patient experienced no inflammation on a particular day.

Let us now add a new test in tests/test_models.py to check that the normalisation function is correct for some test data.

@pytest.mark.parametrize(
    "test, expected",
    [
        ([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]])
    ])
def test_patient_normalise(test, expected):
    """Test normalisation works for arrays of one and positive integers.
       Assumption that test accuracy of two decimal places is sufficient."""
    from inflammation.models import patient_normalise
    npt.assert_almost_equal(patient_normalise(np.array(test)), np.array(expected), decimal=2)

Note that we are using the assert_almost_equal() NumPy testing function instead of assert_array_equal(), since it allows us to test against values that are almost equal. This is very useful when we have numbers with arbitrary decimal places and are only concerned with a certain degree of precision, like the test case above, where we make the assumption that a test accuracy of two decimal places is sufficient.
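
If you want to see the difference for yourself, here is a tiny standalone comparison (not part of our test suite):

import numpy.testing as npt

# Passes: the values agree to two decimal places
npt.assert_almost_equal([0.333, 0.667], [0.33, 0.67], decimal=2)

# Would raise an AssertionError, because the values are not exactly equal
# npt.assert_array_equal([0.333, 0.667], [0.33, 0.67])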

Run the tests again using python -m pytest tests/test_models.py and you will note that the new test is failing, with an error message that does not give many clues as to what went wrong.

E       AssertionError:
E       Arrays are not almost equal to 2 decimals
E
E       Mismatched elements: 6 / 9 (66.7%)
E       Max absolute difference: 0.57142857
E       Max relative difference: 1.345
E        x: array([[0.14, 0.29, 0.43],
E              [0.5 , 0.62, 0.75],
E              [0.78, 0.89, 1.  ]])
E        y: array([[0.33, 0.67, 1.  ],
E              [0.67, 0.83, 1.  ],
E              [0.78, 0.89, 1.  ]])

tests/test_models.py:53: AssertionError

Let us use a debugger at this point to see what is going on and why the function failed.

Debugging in PyCharm

Think of debugging like performing exploratory surgery - on code! Debuggers allow us to peer at the internal workings of a program, such as variables and other state, as it performs its functions.

Running Tests Within PyCharm

Firstly, to make it easier to track what’s going on, we can set up PyCharm to run and debug our tests instead of running them from the command line. If you have not done so already, you will first need to enable the Pytest framework in PyCharm. You can do this by:

  1. Select either PyCharm > Preferences (Mac) or File > Settings (Linux, Windows).
  2. Then, in the preferences window that appears, select Tools > Python Integrated Tools from the left.
  3. Under Testing, for Default test runner select pytest.
  4. Select OK.

Setting up test framework in PyCharm

We can now run pytest over our tests in PyCharm, similarly to how we ran our inflammation-analysis.py script before. Right-click the test_models.py file under the tests directory in the file navigation window on the left, and select Run 'pytest in test_model...'. You’ll see the results of the tests appear in PyCharm in a bottom panel. If you scroll down in that panel you should see the failed test_patient_normalise() test result looking something like the following:

Running pytest in PyCharm

We can also run our test functions individually. First, let’s check that our PyCharm running and testing configurations are correct. Select Run > Edit Configurations... from the PyCharm menu, and you should see something like the following:

Ensuring testing configurations in PyCharm are correct

PyCharm allows us to configure multiple ways of running our code. Looking at the figure above, the first of these - inflammation-analysis under Python - was configured when we set up how to run our script from within PyCharm. The second - pytest in test_models.py under Python tests - is our recent test configuration. If you see just these, you’re good to go. We don’t need any others, so select any others you see and click the - button at the top to remove them. This will avoid any confusion when running our tests separately. Click OK when done.

Buffered Output

Whenever a Python program prints text to the terminal or to a file, it first stores this text in an output buffer. When the buffer becomes full or is flushed, the contents of the buffer are written to the terminal / file in one go and the buffer is cleared. This is usually done to increase performance by effectively converting multiple output operations into just one. Printing text to the terminal is a relatively slow operation, so in some cases this can make quite a big difference to the total execution time of a program.

However, using buffered output can make debugging more difficult, as we can no longer be quite sure when a log message will be displayed. In order to make debugging simpler, PyCharm automatically adds the environment variable PYTHONUNBUFFERED we see in the screenshot above, which disables output buffering.
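
If you ever need the same behaviour outside PyCharm, there are a few standard ways to avoid buffered output getting in the way of debugging - for example, setting the PYTHONUNBUFFERED environment variable yourself, running Python with the -u option, or flushing output explicitly in the code. A minimal sketch of the latter:

import sys

# Write this particular message out immediately instead of waiting for the buffer
print("about to normalise the data", flush=True)

# Or flush everything queued up on standard output so far
sys.stdout.flush()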

Now, if you select the green arrow next to a test function in our test_models.py script in PyCharm, and select Run 'pytest in test_model...', we can run just that test:

Running a single test in PyCharm

Click on the “run” button next to test_patient_normalise, and you will be able to see that PyCharm runs just that test function, and we see the same AssertionError that we saw before.

Running the Debugger

Now we want to use the debugger to investigate what is happening inside the patient_normalise function. To do this we will add a breakpoint in the code. A breakpoint will pause execution at that point allowing us to explore the state of the program.

To set a breakpoint, navigate to the models.py file and move your mouse to the return statement of the patient_normalise function. Click just to the right of the line number for that line and a small red dot will appear, indicating that you have placed a breakpoint on that line.

Setting a breakpoint in PyCharm

Now if you select the green arrow next to the test_patient_normalise function and instead select Debug 'pytest in test_model...', you will notice that execution will be paused at the return statement of patient_normalise. In the debug panel that appears below, we can now investigate the exact state of the program prior to it executing this line of code.

In the debug panel below, in the Debugger tab you will be able to see two sections that look something like the following:

Debugging in PyCharm

We also have the ability to run any Python code we wish at this point to explore the state of the program even further! This is useful if you want to view a particular combination of variables, or perhaps a single element or slice of an array, to see what went wrong. Select the Console tab in the panel (next to the Debugger tab), and you’ll be presented with a Python prompt. Try entering the expression max[:, np.newaxis] into the console, and you will be able to see the column vector that we are dividing data by in the return line of the function.

Debugging in PyCharm

Now, looking at the max variable, we can see that something looks wrong, as the maximum values for each patient do not correspond to the data array. Recall that the input data array we are using for the function is

  [[1, 2, 3],
   [4, 5, 6],
   [7, 8, 9]]

So the maximum inflammation for each patient should be [3, 6, 9], whereas the debugger shows [7, 8, 9]. You can see that the latter corresponds exactly to the last column of data, and we can immediately conclude that we took the maximum along the wrong axis of data. Now we have our answer, stop the debugging process by selecting the red square at the top right of the main PyCharm window.

So to fix the patient_normalise function in models.py, change axis=0 in the first line of the function to axis=1. With this fix in place, running all the tests again should result in all tests passing. Navigate back to test_models.py in PyCharm, right click test_models.py and select Run 'pytest in test_model...'. You should be rewarded with:

All tests in PyCharm are successful
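
PyCharm’s debugger is only one option - if you prefer the command line, Python’s built-in pdb debugger supports the same core workflow. As a minimal sketch (assuming Python 3.7+ for the built-in breakpoint() function), you could temporarily pause execution inside the function:

# in inflammation/models.py - a temporary change, for debugging only
def patient_normalise(data):
    """Normalise patient data from a 2D inflammation data array."""
    max = np.max(data, axis=1)
    breakpoint()  # pauses here and drops into the interactive (Pdb) prompt
    return data / max[:, np.newaxis]

Run the tests from the command line as before (python -m pytest tests/test_models.py) and execution should pause at the (Pdb) prompt, where p max prints a variable, n steps to the next line and c continues execution. Remember to remove the breakpoint() call when you are done.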

NumPy Axis

Getting the axes right in NumPy is not trivial - the following tutorial offers a good explanation on how axes work when applying NumPy functions to arrays.

Corner or Edge Cases

The test case that we have currently written for patient_normalise is parameterised with a fairly standard data array. However, when writing your test cases, it is important to consider parameterising them by unusual or extreme values, in order to test all the edge or corner cases that your code could be exposed to in practice. Generally speaking, it is at these extreme cases that you will find your code failing, so it’s beneficial to test them beforehand.

What is considered an “edge case” for a given component depends on what that component is meant to do. In the case of the patient_normalise function, the goal is to normalise an array of numerical values. For numerical values, extreme cases could be zeros, very large or small values, not-a-number (NaN) or infinity values. Since we are specifically considering an array of values, an edge case could be that all the numbers of the array are equal.

For all the given edge cases you might come up with, you should also consider their likelihood of occurrence. It is often too much effort to exhaustively test a given function against every possible input, so you should prioritise edge cases that are likely to occur. For our patient_normalise function, some common edge cases might be the occurrence of zeros, and the case where all the values of the array are the same.

When you are considering edge cases to test for, try also to think about what might break your code. For patient_normalise we can see that there is a division by the maximum inflammation value for each patient, so this will clearly break if we are dividing by zero here, resulting in NaN values in the normalised array.
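
You can see this failure mode for yourself in a standalone snippet (independent of our code):

import numpy as np

zeros = np.zeros((3, 3))
row_max = np.max(zeros, axis=1)        # array([0., 0., 0.])
print(zeros / row_max[:, np.newaxis])  # 0/0 produces nan (and a RuntimeWarning)
# [[nan nan nan]
#  [nan nan nan]
#  [nan nan nan]]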

With all this in mind, let us add a few edge cases to our parametrisation of test_patient_normalise. We will add two extra tests, corresponding to an input array of all 0, and an input array of all 1.

@pytest.mark.parametrize(
    "test, expected",
    [
        ([[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]),
        ([[1, 1, 1], [1, 1, 1], [1, 1, 1]], [[1, 1, 1], [1, 1, 1], [1, 1, 1]]),
        ([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]]),
    ])

Running the tests now from the command line results in the following assertion error, due to the division by zero as we predicted.

E           AssertionError:
E           Arrays are not almost equal to 2 decimals
E
E           x and y nan location mismatch:
E            x: array([[nan, nan, nan],
E                  [nan, nan, nan],
E                  [nan, nan, nan]])
E            y: array([[0, 0, 0],
E                  [0, 0, 0],
E                  [0, 0, 0]])

tests/test_models.py:88: AssertionError

How can we fix this? Luckily, there is a NumPy function that is useful here, np.isnan(), which we can use to replace all the NaNs with our desired result, which is 0. We can also silence the run-time warning using np.errstate:

...
def patient_normalise(data):
    """
    Normalise patient data from a 2D inflammation data array.

    NaN values are ignored, and normalised to 0.

    Negative values are rounded to 0.
    """
    maxima = np.nanmax(data, axis=1)
    with np.errstate(invalid='ignore', divide='ignore'):
        normalised = data / maxima[:, np.newaxis]
    normalised[np.isnan(normalised)] = 0
    normalised[normalised < 0] = 0
    return normalised
...

Exercise: Exploring Tests for Edge Cases

Think of some more suitable edge cases to test our patient_normalise() function and add them to the parametrised tests. After you have finished, remember to commit your changes.

Possible Solution

@pytest.mark.parametrize(
    "test, expected",
    [
        (
            [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
            [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
        ),
        (
            [[1, 1, 1], [1, 1, 1], [1, 1, 1]],
            [[1, 1, 1], [1, 1, 1], [1, 1, 1]],
        ),
        (
            [[float('nan'), 1, 1], [1, 1, 1], [1, 1, 1]],
            [[0, 1, 1], [1, 1, 1], [1, 1, 1]],
        ),
        (
            [[1, 2, 3], [4, 5, float('nan')], [7, 8, 9]],
            [[0.33, 0.67, 1], [0.8, 1, 0], [0.78, 0.89, 1]],
        ),
        (
            [[-1, 2, 3], [4, 5, 6], [7, 8, 9]],
            [[0, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
        ),
        (
            [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
            [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
        )
    ])
def test_patient_normalise(test, expected):
    """Test normalisation works for arrays of one and positive integers."""
    from inflammation.models import patient_normalise
    npt.assert_almost_equal(patient_normalise(np.array(test)), np.array(expected), decimal=2)
...

You could also, for example, test and handle the case of a whole row of NaNs.

Defensive Programming

In the previous section, we made a few design choices for our patient_normalise function:

  1. We are implicitly converting any NaN and negative values to 0,
  2. Normalising a constant 0 array of inflammation results in an identical array of 0s,
  3. We don’t warn the user of any of these situations.

This could have been handled differently. We might decide that we do not want to silently make these changes to the data, but instead to explicitly check that the input data satisfies a given set of assumptions (e.g. no negative values) and raise an error if this is not the case. Then we can proceed with the normalisation, confident that our normalisation function will work correctly.

Checking that input to a function is valid via a set of preconditions is one of the simplest forms of defensive programming which is used as a way of avoiding potential errors. Preconditions are checked at the beginning of the function to make sure that all assumptions are satisfied. These assumptions are often based on the value of the arguments, like we have already discussed. However, in a dynamic language like Python one of the more common preconditions is to check that the arguments of a function are of the correct type. Currently there is nothing stopping someone from calling patient_normalise with a string, a dictionary, or another object that is not an ndarray.

As an example, let us change the behaviour of the patient_normalise() function to raise an error on negative inflammation values. Edit the inflammation/models.py file, and add a precondition check to the beginning of the patient_normalise() function like so:

...
    if np.any(data < 0):
        raise ValueError('Inflammation values should not be negative')
...

We can then modify our test function in tests/test_models.py to check that the function raises the correct exception - a ValueError - when input to the test contains negative values (i.e. input case [[-1, 2, 3], [4, 5, 6], [7, 8, 9]]). The ValueError exception is part of the standard Python library and is used to indicate that the function received an argument of the right type, but of an inappropriate value.

@pytest.mark.parametrize(
    "test, expected, expect_raises",
    [
        ... # previous test cases here, with None for expect_raises, except for the next one - add ValueError
        ... # as an expected exception (since it has a negative input value)
        (
            [[-1, 2, 3], [4, 5, 6], [7, 8, 9]],
            [[0, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
            ValueError,
        ),
        (
            [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
            [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
            None,
        ),
    ])
def test_patient_normalise(test, expected, expect_raises):
    """Test normalisation works for arrays of one and positive integers."""
    from inflammation.models import patient_normalise
    if expect_raises is not None:
        with pytest.raises(expect_raises):
            npt.assert_almost_equal(patient_normalise(np.array(test)), np.array(expected), decimal=2)
    else:
        npt.assert_almost_equal(patient_normalise(np.array(test)), np.array(expected), decimal=2)

Be sure to commit your changes so far and push them to GitHub.

Optional Exercise: Add a Precondition to Check the Correct Type and Shape of Data

Add preconditions to check that data is an ndarray object and that it is of the correct shape. Add corresponding tests to check that the function raises the correct exception. You will find the Python function isinstance useful here, as well as the Python exception TypeError. Once you are done, commit your new files, and push the new commits to your remote repository on GitHub.

Solution

In inflammation/models.py:

...
def patient_normalise(data):
    """
    Normalise patient data from a 2D inflammation data array so values fall between 0 and 1.

    Any NaN values are ignored, and normalised to 0

    :param data: 2D array of inflammation data
    :type data: ndarray

    """
    if not isinstance(data, np.ndarray):
        raise TypeError('data input should be ndarray')
    if len(data.shape) != 2:
        raise ValueError('inflammation array should be 2-dimensional')
    if np.any(data < 0):
        raise ValueError('inflammation values should be non-negative')
    max = np.nanmax(data, axis=1)
    with np.errstate(invalid='ignore', divide='ignore'):
        normalised = data / max[:, np.newaxis]
    normalised[np.isnan(normalised)] = 0
    return normalised
...

In test/test_models.py:

...
@pytest.mark.parametrize(
    "test, expected, expect_raises",
    [
        ...
        (
            'hello',
            None,
            TypeError,
        ),
        (
            3,
            None,
            TypeError,
        ),
        (
            [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
            [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
            None,
        )
    ])
def test_patient_normalise(test, expected, expect_raises):
    """Test normalisation works for arrays of one and positive integers."""
    from inflammation.models import patient_normalise
    if isinstance(test, list):
        test = np.array(test)
    if expect_raises is not None:
        with pytest.raises(expect_raises):
            npt.assert_almost_equal(patient_normalise(test), np.array(expected), decimal=2)
    else:
        npt.assert_almost_equal(patient_normalise(test), np.array(expected), decimal=2)
...

Note the conversion from list to np.array has been moved out of the call to npt.assert_almost_equal() within the test function, and is now only applied to list inputs (rather than all inputs). This allows for greater flexibility with our test inputs, since the conversion would not work for the test case that uses a string.

If you do the challenge, again, be sure to commit your changes and push them to GitHub.

You should not take it too far by trying to code preconditions for every conceivable eventuality. You should aim to strike a balance between making sure you secure your function against incorrect use, and writing an overly complicated and expensive function that handles cases that are likely never going to occur. For example, it would be sensible to validate the shape of your inflammation data array when it is actually read from the CSV file (in load_csv), and therefore there is no reason to test this again in patient_normalise. You can also decide against adding explicit preconditions in your code, and instead state the assumptions and limitations of your code for users in the docstring and rely on them to invoke your code correctly. This approach is useful when explicitly checking the precondition is too costly.
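
For example, a hypothetical helper (not part of our codebase) might simply document its assumptions rather than check them:

import numpy as np

def count_high_inflammation_days(data, threshold):
    """Count the number of days each patient's inflammation exceeded a threshold.

    :param data: 2D inflammation array (one row per patient). Assumed to contain
        no negative or NaN values - this is not checked here, since the data is
        validated when it is loaded from file.
    :param threshold: inflammation value above which a day is counted
    :returns: 1D array with one count per patient
    """
    return np.sum(data > threshold, axis=1)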

Improving Robustness with Automated Code Style Checks

Let’s re-run Pylint over our project after having added some more code to it. From the project root do:

$ pylint inflammation

You may see something like the following in Pylint’s output:

************* Module inflammation.models
...
inflammation/models.py:60:4: W0622: Redefining built-in 'max' (redefined-builtin)
...

The above output indicates that by using the local variable called max in the patient_normalise function, we have redefined (shadowed) the built-in Python function called max. This isn’t a good idea and may have undesired effects (e.g. if you redefine a built-in name in a global scope you may cause yourself some trouble which can be difficult to trace).
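
As a quick standalone illustration of the kind of trouble this can cause (hypothetical snippet, not from our codebase):

max = 10               # shadows the built-in max() function in this scope
print(max([1, 5, 3]))  # TypeError: 'int' object is not callable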

Exercise: Fix Code Style Errors

Rename our local variable max to something else (e.g. call it max_data), then rerun your tests, commit these latest changes and push them to GitHub using our usual feature branch workflow. Make sure your develop and main branches are up to date.

It may be hard to remember to run linter tools every now and then. Luckily, we can now add this Pylint execution to our continuous integration builds as one of the extra tasks. Since we’re adding an extra feature to our CI workflow, let’s start this from a new feature branch from the develop branch:

$ git checkout develop
$ git branch pylint-ci
$ git checkout pylint-ci

Then to add Pylint to our CI workflow, we can add the following step to our steps in .github/workflows/main.yml:

...
    - name: Check style with Pylint
      run: |
        python3 -m pylint --fail-under=0 --reports=y inflammation
...

Note we need to add --fail-under=0 otherwise the builds will fail if we don’t get a ‘perfect’ score of 10! This seems unlikely, so let’s be more pessimistic. We’ve also added --reports=y which will give us a more detailed report of the code analysis.

Then we can just add this to our repo and trigger a build:

$ git add .github/workflows/main.yml
$ git commit -m "Add Pylint run to build"
$ git push

Then once complete, under the build(s) reports you should see an entry with the output from Pylint as before, but with an extended breakdown of the infractions by category as well as other metrics for the code, such as the number and line percentages of code, docstrings, comments, and empty lines.

So we have specified a minimum score of 0, which is very low. If we decide as a team on a suitable minimum score for our codebase, we can specify this instead. There are also ways to specify particular style rules that must not be broken and that will cause Pylint to fail, which could be even more useful if we want to mandate a consistent style.

We can specify overrides to Pylint’s rules in a file called .pylintrc which Pylint can helpfully generate for us. In our repository root directory:

$ pylint --generate-rcfile > .pylintrc

Looking at this file, you’ll see it’s already pre-populated. No behaviour is currently changed from the default by generating this file, but we can amend it to suit our team’s coding style. For example, a typical rule to customise - favoured by many projects - is the one involving line length. You’ll see it’s set to 100, so let’s set that to a more reasonable 120. While we’re at it, let’s also set our fail-under in this file:

...
# Specify a score threshold to be exceeded before program exits with error.
fail-under=0
...
# Maximum number of characters on a single line.
max-line-length=120
...

Don’t forget to remove the --fail-under argument to Pylint in our GitHub Actions configuration file too, since we don’t need it anymore.

Now when we run Pylint we won’t be penalised for having a reasonable line length. For some further hints and tips on how to approach using Pylint for a project, see this article.

Before moving on, be sure to commit all your changes and then merge to the develop and main branches in the usual manner, and push them all to GitHub.

Key Points

  • Unit testing can show us what does not work, but does not help us locate problems in code.

  • Use a debugger to help you locate problems in code.

  • A debugger allows us to pause code execution and examine its state by adding breakpoints to lines in code.

  • Use preconditions to ensure correct behaviour of code.

  • Ensure that unit tests check for edge and corner cases too.

  • Using linting tools to automatically flag suspicious programming language constructs and stylistic errors can help improve code robustness.


Section 3: Software Development as a Process

Overview

Teaching: 5 min
Exercises: 0 min
Questions
  • How can we design and write ‘good’ software that meets its goals and requirements?

Objectives
  • Describe the differences between writing code and engineering software.

  • Define the fundamental stages in a software development process.

  • List the benefits of following a process of software development.

In this section, we will take a step back from coding development practices and tools and look at the bigger picture of software as a process of development.

“If you fail to plan, you are planning to fail.” - Benjamin Franklin

Software design and architecture

Writing Code vs Engineering Software

Traditionally in academia, software - and the process of writing it - is often seen as a necessary but throwaway artefact in research. For example, there may be research questions for a given research project, code is created to answer those questions, the code is run over some data and analysed, and finally a publication is written based on those results. These steps are often taken informally.

The terms programming (or even coding) and software engineering are often used interchangeably. They are not the same. Programmers or coders tend to focus on one part of software development - implementation - more than any other. In academic research, they are often writing software for themselves, where they are their own stakeholders. And ideally, they write software from a design that fulfils a research goal, in order to publish research papers.

Someone who is engineering software takes a wider view:

The Software Development Process

The typical stages of a software development process can be categorised as follows:

The process of following these stages, particularly when undertaken in this order, is referred to as the waterfall model of software development: each stage’s outputs flow into the next stage sequentially.

Whether projects or people that develop software are aware of them or not, these stages are followed implicitly or explicitly in every software project. What is required for a project (during requirements gathering) is always considered, for example, even if it isn’t explored sufficiently or well understood.

Following a process of development offers some major benefits:

In this section we will place the actual writing of software (implementation) within the context of the typical software development process:

Key Points

  • Software engineering takes a wider view of software development beyond programming (or coding).

  • Ensuring requirements are sufficiently captured is critical to the success of any project.

  • Following a process makes development predictable, can save time, and helps ensure each stage of development is given sufficient consideration before proceeding to the next.


Software Requirements

Overview

Teaching: 15 min
Exercises: 30 min
Questions
  • Where do we start when beginning a new software project?

  • How can we capture and organise what is required for software to function as intended?

Objectives
  • Describe the different types of software requirements.

  • Explain the difference between functional and non-functional requirements.

  • Describe some of the different kinds of software and explain how the environment in which software is used constrains its design.

  • Derive new user and solution requirements from business requirements.

The requirements of our software are the basis on which the whole project rests - if we get the requirements wrong, we’ll build the wrong software. However, it’s unlikely that we’ll be able to determine all of the requirements upfront. Especially when working in a research context, requirements are flexible and may change as we develop our software.

Types of Requirements

Requirements can be categorised in many ways, but at a high level a useful way to split them is into business requirements, user requirements, and solution requirements. Let’s take a look at these now.

Business Requirements

Business requirements describe what is needed from the perspective of the organisation, and define the strategic path of the project, e.g. to increase profit margin or market share, or embark on a new research area or collaborative partnership. These are captured in something like a Business Requirements Specification.

For adapting our inflammation software project, example business requirements could include:

Exercise: New Business Requirements

Think of a new hypothetical business-level requirement for this software. This can be anything you like, but be sure to keep it at the high level of the business itself.

Solution

One hypothetical new business requirement (BR3) could be extending our clinical trial system to keep track of the doctors involved in the project.

Another hypothetical new business requirement (BR4) may be adding a new parameter to the treatment and checking if it improves the effect of the drug being tested - e.g. taking it in conjunction with omega-3 fatty acids and/or increasing physical activity while taking the drug therapy.

User (or Stakeholder) Requirements

These define what particular stakeholder groups each expect from an eventual solution, essentially acting as a bridge between the higher-level business requirements and specific solution requirements. These are typically captured in a User Requirements Specification.

For our inflammation project, they could include things for trial managers such as (building on the business requirements):

Exercise: New User Requirements

Break down your new business requirements from the previous exercise into a number of logical user requirements, ensuring they stay above the level and detail of implementation.

Solution

For our business requirement BR3 from the previous exercise, the new user/stakeholder requirements may be the ability to see all the patients a doctor is responsible for (UR3.1), and to find out which doctor is looking after any individual patient (UR3.2).

For our business requirement BR4 from the previous exercise, the new user/stakeholder requirements may be the ability to see the effect of the drug with and without the additional parameters in all reports and graphs (UR4.1).

Solution Requirements

Solution (or product) requirements describe characteristics that software must have to satisfy the stakeholder requirements. They fall into two key categories: functional requirements and non-functional requirements.

Labelling Requirements

Note that the naming scheme we used for labelling our requirements is quite arbitrary - you should reference them in a way that is consistent and makes sense within your project and team.

The Importance of Non-functional Requirements

When considering software requirements, it’s very tempting to just think about the features users need. However, many design choices in a software project quite rightly depend on the users themselves and the environment in which the software is expected to run, and these aspects should be considered as part of the software’s non-functional requirements.

Exercise: Types of Software

Think about some software you are familiar with (it could be software you have written yourself or software written by someone else) and how the environment it is used in has affected its design or development. Here are some examples of questions you can use to get started:

  • What environment does the software run in?
  • How do people interact with it?
  • Why do people use it?
  • What features of the software have been affected by these factors?
  • If the software needed to be used in a different environment, what difficulties might there be?

Some examples of design / development choices constrained by environment might be:

  • Mobile Apps
    • Must have graphical interface suitable for a touch display
    • Usually distributed via a controlled app store
    • Users will not (usually) modify / compile the software themselves
    • Should work on a range of hardware specifications with a range of Operating System (OS) versions
      • But OS is unlikely to be anything other than Android or iOS
    • Documentation probably in the software itself or on a Web page
    • Typically written in one of the platform preferred languages (e.g. Java, Kotlin, Swift)
  • Embedded Software
    • May have no user interface - user interface may be physical buttons
    • Usually distributed pre-installed on a physical device
    • Often runs on low power device with limited memory and CPU performance - must take care to use these resources efficiently
    • Exact specification of hardware is known - often not necessary to support multiple devices
    • Documentation probably in a technical manual with a separate user manual
    • May need to run continuously for the lifetime of the device
    • Typically written in a lower-level language (e.g. C) for better control of resources

Some More Examples

  • Desktop Application
    • Has a graphical interface for use with mouse and keyboard
    • May need to work on multiple, very different operating systems
    • May be intended for users to modify / compile themselves
    • Should work on a wide range of hardware configurations
    • Documentation probably either in a manual or in the software itself
  • Command-line Application - UNIX Tool
    • User interface is text based, probably via command-line arguments
    • Intended to be modified / compiled by users - though most will choose not to
    • Documentation has standard formats - also accessible from the command line
    • Should be usable as part of a pipeline
  • Command-line Application - High Performance Computing
    • Similar to a UNIX Tool
    • Usually supports running across multiple networked machines simultaneously
    • Usually operated via a scheduler - interface should be scriptable
    • May need to run on a wide range of hardware (e.g. different CPU architectures)
    • May need to process large amounts of data
    • Often entirely or partially written in a lower-level language for performance (e.g. C, C++, Fortran)
  • Web Application
    • Usually has components which run on server and components which run on the user’s device
    • Graphical interface should usually support both Desktop and Mobile devices
    • Client-side component should run on a range of browsers and operating systems
    • Documentation probably part of the software itself
    • Client-side component typically written in JavaScript

Exercise: New Solution Requirements

Now break down your new user requirements from the earlier exercise into a number of logical solution requirements (functional and non-functional), that address the detail required to be able to implement them in the software.

Solution

For our new hypothetical business requirement BR3, new functional solution requirements could be extending the clinical trial system to keep track of:

  • the names of all patients (SR3.1.1) and doctors (SR3.1.2) involved in the trial
  • the name of the doctor for a particular patient (SR3.1.3)
  • a group of patients being administered by a particular doctor (SR3.2.1).

Optional Exercise: Requirements for Your Software Project

Think back to a piece of code or software (either small or large) you’ve written, or which you have experience using. First, try to formulate a few of its key business requirements, then derive these into user and then solution requirements (in a similar fashion to the ones above in Types of Requirements).

Long- or Short-Lived Code?

Along with requirements, here’s something to consider early on. You, perhaps with others, may be developing open-source software with the intent that it will live on after your project completes. It could be important to you that your software is adopted and used by other projects as this may help you get future funding. It can make your software more attractive to potential users if they have the confidence that they can fix bugs that arise or add new features they need, and if they can be assured that the evolution of the software is not dependent upon the lifetime of your project. The intended longevity and post-project role of software should be reflected in its requirements - particularly within its non-functional requirements - so be sure to consider these aspects.

On the other hand, you might want to knock together some code to prove a concept or to perform a quick calculation and then just discard it. But can you be sure you’ll never want to use it again? Maybe a few months from now you’ll realise you need it after all, or you’ll have a colleague say “I wish I had a…” and realise you’ve already made one. A little effort now could save you a lot in the future.

From Requirements to Implementation, via Design

In practice, these different types of requirements are sometimes confused and conflated when different classes of stakeholder are discussing them, which is understandable: each group of stakeholders has a different view of what is required from a project. The key is to understand the stakeholder’s perspective as to how their requirements should be classified and interpreted, and for that to be made explicit. A related misconception is that each of these types are simply requirements specified at different levels of detail. At each level, not only are the perspectives different, but so are the nature of the objectives and the language used to describe them, since they each reflect the perspective and language of their stakeholder group.

It’s often tempting to go right ahead and implement requirements within existing software, but this neglects a crucial step: do these new requirements fit within our existing design, or does our design need to be revisited? It may not need any changes at all, but if it doesn’t fit logically our design will need a bigger rethink so the new requirement can be implemented in a sensible way. We’ll look at this a bit later in this section, but simply adding new code without considering how the design and implementation need to change at a high level can make our software increasingly messy and difficult to change in the future.

Key Points

  • When writing software used for research, requirements will almost always change.

  • Consider non-functional requirements (how the software will behave) as well as functional requirements (what the software is supposed to do).

  • The environment in which users run our software has an effect on many design choices we might make.

  • Consider the expected longevity of any code before you write it.

  • The perspective and language of a particular requirement stakeholder group should be reflected in requirements for that group.


Software Architecture and Design

Overview

Teaching: 25 min
Exercises: 20 min
Questions
  • What should we consider when designing software?

  • What goals should we have when structuring our code?

  • What is refactoring?

Objectives
  • Know what goals we have when architecting and designing software.

  • Understand what an abstraction is, and when you should use one.

  • Understand what refactoring is.

Introduction

Typically when we start writing code, we write small scripts that we intend to use ourselves. We probably don’t imagine we will need to change the code in the future. We almost certainly don’t expect other people will need to understand and modify the code in the future. However, as projects grow in complexity and the number of people involved grows, it becomes important to think about how to structure code. Software architecture and design are all about finding ways to keep code maintainable as projects grow.

Maintainable code is:

Writing code that meets these requirements is hard and takes practice. Furthermore, in most contexts you will already have a piece of code that breaks some (or maybe all!) of these principles.

Group Exercise: Think about examples of good and bad code

Try to come up with examples of code that has been hard to understand - why?

Try to come up with examples of code that was easy to understand and modify - why?

In this episode we will explore techniques and processes that can help you continuously improve the quality of your code so that, over time, it tends towards being more maintainable.

We will look at:

Cognitive Load

When we are trying to understand a piece of code, in our heads we are storing what the different variables mean and what the lines of code will do. Cognitive load is a way of thinking about how much information we have to store in our heads to understand a piece of code.

The higher the cognitive load, the harder it is to understand the code. If it is too high, we might have to create diagrams to help us hold it all in our head or we might just decide we can’t understand it.

There are lots of ways to keep cognitive load down:

Abstractions

An abstraction, at its most basic level, is a technique to hide the details of one part of a system from another part of the system. We deal with abstractions all the time - when you press the brake pedal in a car, you do not need to know how it manages both slowing down the engine and applying pressure to the brakes. The advantage of this abstraction is that when something changes, for example the introduction of anti-lock braking or an electric engine, the driver does not need to do anything differently - the detail of how the car brakes is abstracted away from them.

Abstractions are a fundamental part of software. For example, when you write Python code, you are dealing with an abstraction of the computer. You don’t need to understand how RAM functions. Instead, you just need to understand how variables work in Python.

In large projects it is vital to come up with good abstractions. A good abstraction makes code easier to read, as the reader doesn’t need to understand all the details of the project to understand one part. An abstraction lowers the cognitive load of a bit of code, as there is less to understand at once.

A good abstraction makes code easier to test, as it can be tested in isolation from everything else.

Finally, a good abstraction makes code easier to adapt, as the details of how a subsystem used to work are hidden from the user, so when they change, the user doesn’t need to know.

In this episode we are going to look at some code and introduce various different kinds of abstraction. However, fundamentally any abstraction should be serving these goals.

Refactoring

Often we are not working on brand new projects, but instead maintaining an existing piece of software. Often, this piece of software will be hard to maintain, perhaps because it is hard to understand or doesn’t have any tests. In this situation, we want to adapt the code to make it more maintainable. This will give us greater confidence in the code, as well as making future development easier.

Refactoring is a process where some code is modified, such that its external behaviour remains unchanged, but the code itself is easier to read, test and extend.

When faced with an old piece of code that is hard to work with but that you need to modify, a good process to follow is:

  1. Have tests that verify the current behaviour
  2. Refactor the code in such a way that the new change will slot in cleanly.
  3. Make the desired change, which now fits in easily.

Notice, after step 2, the behaviour of the code should be totally identical. This allows you to test rigorously that the refactoring hasn’t changed/broken anything before making the intended change.

In this episode, we will be making some changes to an existing bit of code that is in need of refactoring.

The code for this episode

The code itself is a new feature added to the inflammation tool we’ve been working on.

In it, if the user adds --full-data-analysis then the program will scan the directory of one of the provided files, compare standard deviations across the data by day and plot a graph.

The main body of it exists in inflammation/compute_data.py in a function called analyse_data.

We are going to be refactoring and extending this over the remainder of this episode.

Group Exercise: What is bad about this code?

In what ways does this code not live up to the ideal properties of maintainable code? Think about ways in which you find it hard to understand. Think about the kinds of changes you might want to make to it, and what would make making those changes challenging.

Solution

You may have found others, but here are some of the things that make the code hard to read, test and maintain:

  • Hard to read: Everything is in a single function - reading it you have to understand how the file loading works at the same time as the analysis itself.
  • Hard to modify: If you want to use the data without using the graph you’d have to change it
  • Hard to modify or test: It is always analysing a fixed set of data stored on the disk
  • Hard to modify: It doesn’t have any tests meaning changes might break something

Keep the list you have created. At the end of this section we will revisit this and check that we have learnt ways to address the problems we found.

Key Points

  • How code is structured is important for helping future people understand and update it

  • By breaking down our software into components with a single responsibility, we avoid having to rewrite it all when requirements change. Such components can be as small as a single function, or be a software package in their own right.

  • These smaller components can be understood individually without having to understand the entire codebase at once.

  • When writing software used for research, requirements will almost always change.

  • ‘Good code is written so that it is readable, understandable, covered by automated tests, not over complicated and does well what is intended to do.’


Refactoring functions to do just one thing

Overview

Teaching: 30 min
Exercises: 20 min
Questions
  • How do you refactor code without breaking it?

  • How do you write code that is easy to test?

  • What is functional programming?

  • Which situations/problems is functional programming well suited for?

Objectives
  • Understand how to refactor functions to be easier to test

  • Be able to write regressions tests to avoid breaking existing code

  • Understand what a pure function is.

Introduction

In this episode we will take some code and refactor it in a way that is going to make it easier to test. By having more tests, we can be more confident that future changes will have their intended effect. The change we will make will also end up making the code easier to understand.

Writing tests before refactoring

The process we are going to be following is:

  1. Write some tests that test the behaviour as it is now
  2. Refactor the code to be more testable
  3. Ensure that the original tests still pass

By writing the tests before we refactor, we can be confident we haven’t broken existing behaviour through the refactoring.

There is a bit of a chicken-and-egg problem here, however. If the refactoring is meant to make it easier to write tests, how can we write tests before doing the refactoring?

The tricks to get around this trap are:

The best tests are ones that test single bits of code rigorously. However, with this code it isn’t possible to do that.

Instead we will make minimal changes to the code to make it a little more testable, for example returning the data instead of visualising it.

We will make the asserts verify whatever the outcome is currently, rather than worrying whether that is correct. These tests are to verify the behaviour doesn’t change rather than to check the current behaviour is correct. This kind of testing is called regression testing as we are testing for regressions in existing behaviour.

As with everything in this episode, there isn’t a hard and fast rule. Refactoring doesn’t change behaviour, but sometimes to make it possible to verify you’re not changing the important behaviour you have to make some small tweaks to write the tests at all.

Exercise: Write regression tests before refactoring

Add a new test file called test_compute_data.py in the tests folder. Add and complete this regression test to verify the current output of analyse_data is unchanged by the refactorings we are going to do:

from pathlib import Path

def test_analyse_data():
    from inflammation.compute_data import analyse_data
    path = Path.cwd() / "../data"
    result = analyse_data(path)

    # TODO: add an assert for the value of result

Use assert_array_almost_equal from the numpy.testing library to compare arrays of floating point numbers.

You will need to modify analyse_data to not create a graph and instead return the data.

Hint

You might find it helpful to assert the results equal some made up array, observe the test failing and copy and paste the correct result into the test.

Solution

One approach we can take is to:

  • comment out the call to visualize (as this will cause our test to hang)
  • return the data instead, so we can write asserts on the data
  • see what the calculated value is, and assert that it is the same

Putting this together, you can write a test that looks something like:

import numpy.testing as npt
from pathlib import Path

def test_analyse_data():
    from inflammation.compute_data import analyse_data
    path = Path.cwd() / "../data"
    result = analyse_data(path)
    expected_output = [0.,0.22510286,0.18157299,0.1264423,0.9495481,0.27118211,
                       0.25104719,0.22330897,0.89680503,0.21573875,1.24235548,0.63042094,
                       1.57511696,2.18850242,0.3729574,0.69395538,2.52365162,0.3179312,
                       1.22850657,1.63149639,2.45861227,1.55556052,2.8214853,0.92117578,
                       0.76176979,2.18346188,0.55368435,1.78441632,0.26549221,1.43938417,
                       0.78959769,0.64913879,1.16078544,0.42417995,0.36019114,0.80801707,
                       0.50323031,0.47574665,0.45197398,0.22070227]
    npt.assert_array_almost_equal(result, expected_output)

Note - this isn’t a good test:

  • It isn’t at all obvious why these numbers are correct.
  • It doesn’t test edge cases.
  • If the files change, the test will start failing.

However, it allows us to guarantee we don’t accidentally change the analysis output.

Pure functions

A pure function is a function that works like a mathematical function. That is, it takes in some inputs as parameters, and it produces an output. That output should always be the same for the same input. That is, it does not depend on any information not present in the inputs (such as global variables, databases, the time of day etc.) Further, it should not cause any side effects, such as writing to a file or changing a global variable.

You should aim to put as much of the complex, analytical and mathematical code as possible into pure functions.

By eliminating dependency on external things such as global state, we reduce the cognitive load to understand the function. The reader only needs to concern themselves with the input parameters of the function and the code itself, rather than the overall context the function is operating in.

Similarly, a function that calls a pure function is also easier to understand. Since the function won’t have any side effects, the reader needs to only understand what the function returns, which will probably be clear from the context in which the function is called.

This property also makes them easier to reuse, as the caller only needs to understand what parameters to provide, rather than anything else that might need to be configured beforehand, or any side effects of calling it at a time different from the one the original author intended.

Some parts of a program are inevitably impure. Programs need to read input from the user, or write to a database. Well designed programs separate complex logic from the necessary impure “glue” code that interacts with users and systems. This way, you have easy-to-test, easy-to-read code that contains the complex logic. And you have really simple code that just reads data from a file, or gathers user input etc, that is maybe harder to test, but is so simple that it only needs a handful of tests anyway.
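
As a small illustration (hypothetical functions, not from our codebase), compare a pure function with an impure one that computes the same result:

import numpy as np

def standardise(data):
    """Pure: the result depends only on the argument, and nothing else is touched."""
    return (data - np.mean(data)) / np.std(data)

results_log = []

def standardise_and_log(data):
    """Impure: same result, but calling it also modifies global state."""
    result = (data - np.mean(data)) / np.std(data)
    results_log.append(result)  # side effect - appends to a global list
    return result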

Exercise: Refactor the function into a pure function

Refactor the analyse_data function into a pure function with the logic, and an impure function that handles the input and output. The pure function should take in the data, and return the analysis results:

def compute_standard_deviation_by_day(data):
  # TODO
  return daily_standard_deviation

The “glue” function should maintain the behaviour of the original analyse_data but delegate all the calculations to the new pure function.

Solution

You can move all of the code that does the analysis into a separate function that might look something like this:

def compute_standard_deviation_by_day(data):
    means_by_day = map(models.daily_mean, data)
    means_by_day_matrix = np.stack(list(means_by_day))

    daily_standard_deviation = np.std(means_by_day_matrix, axis=0)
    return daily_standard_deviation

Then the glue function can use this function, whilst keeping all the logic for reading the file and processing the data for showing in a graph:

def analyse_data(data_dir):
   """Calculate the standard deviation by day between datasets
   Gets all the inflammation csvs within a directory, works out the mean
   inflammation value for each day across all datasets, then graphs the
   standard deviation of these means."""
   data_file_paths = glob.glob(os.path.join(data_dir, 'inflammation*.csv'))
   if len(data_file_paths) == 0:
       raise ValueError(f"No inflammation csv's found in path {data_dir}")
   data = map(models.load_csv, data_file_paths)
   daily_standard_deviation = compute_standard_deviation_by_day(data)

   graph_data = {
       'standard deviation by day': daily_standard_deviation,
   }
   # views.visualize(graph_data)
   return daily_standard_deviation

Ensure you re-run our regression test to check this refactoring has not changed the output of analyse_data.

Testing Pure Functions

Now we have a pure function for the analysis, we can write tests that cover all the things we would like tests to cover without depending on the data existing in CSVs.

This is another advantage of pure functions - they are very well suited to automated testing.

They are easier to write - we construct input and assert the output without having to think about making sure the global state is correct before or after.

Perhaps more importantly, they are easier to read - the reader will not have to open up a CSV file to understand why the test is correct.

It will also make the tests easier to maintain. If at some point the data format is changed from CSV to JSON, the bulk of the tests won’t need to be updated.

Exercise: Write some tests for the pure function

Now that we have refactored out a pure function, we can more easily write comprehensive tests. Add tests that check for when there is only one file with multiple rows, multiple files with one row, and any other cases you can think of that should be tested.

Solution

You might have thought of more tests, but we can easily extend the test by parametrizing with more inputs and expected outputs:

import math

import pytest

@pytest.mark.parametrize('data,expected_output', [
   ([[[0, 1, 0], [0, 2, 0]]], [0, 0, 0]),
   ([[[0, 2, 0]], [[0, 1, 0]]], [0, math.sqrt(0.25), 0]),
   ([[[0, 1, 0], [0, 2, 0]], [[0, 1, 0], [0, 2, 0]]], [0, 0, 0])
],
ids=['Two patients in same file', 'Two patients in different files', 'Two identical patients in two different files'])
def test_compute_standard_deviation_by_day(data, expected_output):
   from inflammation.compute_data import compute_standard_deviation_by_day

   result = compute_standard_deviation_by_day(data)
   npt.assert_array_almost_equal(result, expected_output)

Functional Programming

Pure functions are a concept from functional programming. Functional programming is a style of programming that encourages building programs out of pure functions, chained together. Some programming languages, such as Haskell or Lisp, are geared primarily towards writing functional code, but it is more common for languages to support both the functional and the imperative style (imperative being the style you have probably been writing thus far, where you instruct the computer step by step what to do). Python, Java, C++ and many other languages allow you to mix these two styles.

In Python, you can use the built-in functions map and filter, together with functools.reduce, to chain pure functions together into pipelines.

In the original code, we used map to “map” the file paths into the loaded data. Extending this idea, you could then “map” the results of that through another process.

You can read more about using these language features here. Other programming languages will have similar features, and searching “functional style” + your programming language of choice will help you find the features available.
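
As a small, self-contained sketch (generic numbers rather than the inflammation data), here is how map, filter and functools.reduce can be chained into a pipeline of pure functions:

from functools import reduce

numbers = [1, 2, 3, 4, 5]

# Keep the even numbers, square them, then sum the results.
squares_of_evens = map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers))
total = reduce(lambda accumulator, n: accumulator + n, squares_of_evens, 0)

print(total)  # 4 + 16 = 20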

There are no hard and fast rules in software design but making your complex logic out of composed pure functions is a great place to start when trying to make code readable, testable and maintainable. This tends to be possible when:

Key Points

  • Refactoring code into pure functions that act on data makes the code easier to test.

  • Writing tests before you refactor gives you confidence that your refactoring hasn't broken anything.

  • Functional programming is a programming paradigm where programs are constructed by applying and composing smaller and simple functions into more complex ones (which describe the flow of data within a program as a sequence of data transformations).


Using classes to de-couple code

Overview

Teaching: 30 min
Exercises: 45 min
Questions
  • What is de-coupled code?

  • When is it useful to use classes to structure code?

  • How can we make sure the components of our software are reusable?

Objectives
  • Understand the object-oriented principle of polymorphism and interfaces.

  • Be able to introduce appropriate abstractions to simplify code.

  • Understand what decoupled code is, and why you would want it.

  • Be able to use mocks to replace a class in test code.

Introduction

When we’re thinking about units of code, one important thing to consider is whether the code is decoupled (as opposed to coupled). Two units of code can be considered decoupled if changes in one don’t necessitate changes in the other. While two connected units can’t be totally decoupled, loose coupling allows for more maintainable code:

Introducing abstractions is a way to decouple code. If one part of the code only uses another part through an appropriate abstraction then it becomes easier for these parts to change independently.

Exercise: Decouple the file loading from the computation

Currently the analyse_data function is hard-coded to load all the files in a directory. Decouple the file loading into a separate function that returns the loaded data ready for analysis.

Solution

You should have written a new function that reads all the data into the format needed for the analysis:

def load_inflammation_data(dir_path):
  data_file_paths = glob.glob(os.path.join(dir_path, 'inflammation*.csv'))
  if len(data_file_paths) == 0:
      raise ValueError(f"No inflammation csv's found in path {dir_path}")
  data = map(models.load_csv, data_file_paths)
  return list(data)

This can then be used in the analysis.

def analyse_data(data_dir):
  data = load_inflammation_data(data_dir)
  daily_standard_deviation = compute_standard_deviation_by_day(data)
...

This is now easier to understand, as we don't need to understand the file loading to read the statistical analysis, and we don't have to understand the statistical analysis when reading the data loading. Ensure you re-run our regression test to check this refactoring has not changed the output of analyse_data.

Even with this change, the file loading is coupled with the data analysis. For example, if we want to support reading JSON files as well as CSV files, we would have to pass into analyse_data some kind of flag indicating what we want.

Instead, we would like to decouple the consideration of what data to load from the analyse_data function entirely.

One way we can do this is to use a language feature called a class.

Using Python Classes

A class is a way of grouping together data with some specific methods. In Python, you can declare a class as follows:

class Circle:
  pass

Classes are typically named using UpperCamelCase.

You can then construct an instance of the class elsewhere in your code by doing the following:

my_circle = Circle()

When you construct a class in this way, the class's constructor is called. It is possible to pass values in to the constructor to configure the class:

class Circle:
  def __init__(self, radius):
    self.radius = radius

my_circle = Circle(10)

The constructor has the special name __init__ (one of the so-called "dunder methods"). Notice it also has a special first parameter, which by convention is named self. This parameter gives the method access to the instance of the object being created.

A class can be thought of as a cookie cutter template, and the instances are the cookies themselves. That is, one class can have many instances.

Classes can also have methods defined on them. Like constructors, they have a special self parameter that must come first.

import math

class Circle:
  ...
  def get_area(self):
    return math.pi * self.radius * self.radius
...
print(my_circle.get_area())

Here the instance of the class, my_circle, will automatically be passed in as the first parameter (self) when calling get_area. The method can then access the member variable radius on that instance.

Exercise: Use a class to configure loading

Put the load_inflammation_data function we wrote in the last exercise as a member method of a new class called CSVDataSource. Put the configuration of where to load the files from in the class's constructor. Once this is done, you can construct this class outside the statistical analysis and pass the instance in to analyse_data.

Hint

When we have completed the refactoring, the code in the analyse_data function should look like:

def analyse_data(data_source):
  data = data_source.load_inflammation_data()
  daily_standard_deviation = compute_standard_deviation_by_day(data)
  ...

The controller code should look like:

data_source = CSVDataSource(os.path.dirname(InFiles[0]))
analyse_data(data_source)

Solution

You should have created a class that looks something like this:

class CSVDataSource:
  """
  Loads all the inflammation csvs within a specified folder.
  """
  def __init__(self, dir_path):
    self.dir_path = dir_path

  def load_inflammation_data(self):
    data_file_paths = glob.glob(os.path.join(self.dir_path, 'inflammation*.csv'))
    if len(data_file_paths) == 0:
      raise ValueError(f"No inflammation csv's found in path {self.dir_path}")
    data = map(models.load_csv, data_file_paths)
    return list(data)

We can now pass an instance of this class into the statistical analysis function. This means that, should we want to re-use the analysis, it wouldn't be fixed to reading from a directory of CSVs. We have "decoupled" the reading of the data from the statistical analysis.

def analyse_data(data_source):
  data = data_source.load_inflammation_data()
  daily_standard_deviation = compute_standard_deviation_by_day(data)
  ...

In the controller, you might have something like:

data_source = CSVDataSource(os.path.dirname(InFiles[0]))
analyse_data(data_source)

While the behaviour is unchanged, how we call analyse_data has changed. We must update our regression test to match this, to ensure we haven’t broken the code:

...
def test_compute_data():
    from inflammation.compute_data import analyse_data
    path = Path.cwd() / "../data"
    data_source = CSVDataSource(path)
    result = analyse_data(data_source)
    expected_output = [0.,0.22510286,0.18157299,0.1264423,0.9495481,0.27118211
    ...

Interfaces

Another important concept in software design is the idea of interfaces between different units in the code. One kind of interface you might have come across is an API (Application Programming Interface). APIs allow separate systems to communicate with each other, such as making an API request to Google Maps to find the latitude and longitude of an address.

However, there are internal interfaces within our software that dictate how different parts of the system interact with each other. Even if these aren’t thought out or documented, they still exist!

For example, our Circle class implicitly has an interface: you can call get_area on it and it will return a number representing its area.
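
The lesson's code does not require this, but if you wanted to write such an implicit interface down explicitly, Python's typing.Protocol (available from Python 3.8) lets you describe it without changing the Circle class; the name AreaShape below is just a hypothetical illustration:

from typing import Protocol

class AreaShape(Protocol):
    """Anything with a get_area method returning a number satisfies this interface."""
    def get_area(self) -> float:
        ...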

Exercise: Identify the interface between CSVDataSource and analyse_data

What is the interface that CSVDataSource presents to analyse_data? Think about what functions analyse_data needs to be able to call, what parameters they need and what they will return.

Solution

The interface is the load_inflammation_data method.

It takes no parameters.

It returns a list where each entry is a 2D array of patient inflammation results by day. Any object we pass into analyse_data must conform to this interface.

Polymorphism

It is possible to design multiple classes that each conform to the same interface.

For example, we could provide a Rectangle class:

class Rectangle:
  def __init__(self, width, height):
    self.width = width
    self.height = height
  def get_area(self):
    return self.width * self.height

Like Circle, this class provides a get_area method. The method takes the same number of parameters (none), and returns a number. However, the implementation is different.

When classes share an interface, then we can use an instance of a class without knowing what specific class is being used. When we do this, it is called polymorphism.

Here is an example where we create a list of shapes (either Circles or Rectangles) and can then find the total area. Note how we call get_area and Python is able to call the appropriate get_area for each of the shapes.

my_circle = Circle(radius=10)
my_rectangle = Rectangle(width=5, height=3)
my_shapes = [my_circle, my_rectangle]
total_area = sum(shape.get_area() for shape in my_shapes)

This is an example of abstraction - when we are calculating the total area, the method for calculating the area of each shape is abstracted away to the relevant class.

How polymorphism is useful

As we saw with the Circle and Rectangle examples, we can use common interfaces and polymorphism to abstract away the details of the implementation from the caller.

For example, we could replace our CSVDataSource with a class that reads a totally different format, or reads from an external service. All of these can be added in without changing the analysis. Further - if we want to write a new analysis, we can support any of these data sources for free with no further work. That is, we have decoupled the job of loading the data from the job of analysing the data.

Exercise: Introduce an alternative implementation of DataSource

Create another class that supports loading JSON instead of CSV. There is a function in models.py that loads from JSON in the following format:

[
  {
    "observations": [0, 1]
  },
  {
    "observations": [0, 2]
  }
]

It should implement the load_inflammation_data method. Finally, at run time construct an appropriate instance based on the file extension.

Solution

You should have created a class that looks something like:

class JSONDataSource:
  """
  Loads all the inflammation JSON's within a specified folder.
  """
  def __init__(self, dir_path):
    self.dir_path = dir_path

  def load_inflammation_data(self):
    data_file_paths = glob.glob(os.path.join(self.dir_path, 'inflammation*.json'))
    if len(data_file_paths) == 0:
      raise ValueError(f"No inflammation JSON's found in path {self.dir_path}")
    data = map(models.load_json, data_file_paths)
    return list(data)

Additionally, in the controller you will need to select the appropriate DataSource to provide to the analysis:

_, extension = os.path.splitext(InFiles[0])
if extension == '.json':
  data_source = JSONDataSource(os.path.dirname(InFiles[0]))
elif extension == '.csv':
  data_source = CSVDataSource(os.path.dirname(InFiles[0]))
else:
  raise ValueError(f'Unsupported file format: {extension}')
analyse_data(data_source)

As you have seen, all these changes were made without modifying the analysis code itself.

Testing using Mock Objects

We can use this abstraction to make testing more straightforward. Instead of having our tests use real data from the file system, we can provide a mock or dummy implementation in place of one of the real classes. Provided whatever we substitute conforms to the same interface, the code we are testing will work just the same. This dummy implementation could just return some fixed example data.

A convenient way to do this in Python is using Python's mock object library. Mocks are a whole topic in themselves, but a basic mock can be constructed with a couple of lines of code:

from unittest.mock import Mock

mock_version = Mock()
mock_version.method_to_mock.return_value = 42

Here we construct a mock in the same way we would construct an instance of a class. Then we specify a method that we want to behave a specific way.

Now whenever you call mock_version.method_to_mock() the return value will be 42.
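
Mocks also record how they were called, which you can assert on in tests. For example, sticking to the standard unittest.mock API:

from unittest.mock import Mock

mock_version = Mock()
mock_version.method_to_mock.return_value = 42

print(mock_version.method_to_mock())  # prints 42

# The mock remembers its calls, so we can check the method was called exactly once.
mock_version.method_to_mock.assert_called_once()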

Exercise: Test using a mock or dummy implementation

Complete this test for analyse_data, using a mock object in place of the data_source:

from unittest.mock import Mock

def test_compute_data_mock_source():
  from inflammation.compute_data import analyse_data
  data_source = Mock()

  # TODO: configure data_source mock

  result = analyse_data(data_source)

  # TODO: add assert on the contents of result

Create a mock to provide as the data_source that returns some fixed data to test the analyse_data method. Use this mock in a test.

Don’t forget you will need to import Mock from the unittest.mock package.

Solution

import math

import numpy.testing as npt
from unittest.mock import Mock

def test_compute_data_mock_source():
  from inflammation.compute_data import analyse_data
  data_source = Mock()
  data_source.load_inflammation_data.return_value = [[[0, 2, 0]],
                                                     [[0, 1, 0]]]

  result = analyse_data(data_source)
  npt.assert_array_almost_equal(result, [0, math.sqrt(0.25), 0])

Object Oriented Programming

Using classes, particularly together with polymorphism, is a technique that comes from object oriented programming (frequently abbreviated to OOP). As with functional programming, different programming languages provide features to enable you to write object oriented code. For example, in Python you can create classes and rely on polymorphism to call the correct method on an instance (e.g. when we called get_area on a shape, the appropriate get_area for that shape was called).

Object oriented programming also includes information hiding. In this, certain fields might be marked private to a class, preventing them from being modified at will.

This can be used to maintain invariants of a class (such as insisting that a circle's radius is always non-negative).
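
Python does not enforce private fields; it relies on naming conventions and properties instead. As a sketch (not part of the lesson's codebase), a Circle could protect its radius invariant like this:

class Circle:
    def __init__(self, radius):
        self.radius = radius  # goes through the property setter below

    @property
    def radius(self):
        return self._radius  # the leading underscore marks the field as "private" by convention

    @radius.setter
    def radius(self, value):
        if value < 0:
            raise ValueError("radius must be non-negative")
        self._radius = value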

There is also inheritance, which allows classes to specialise the behaviour of other classes by inheriting from another class and overriding certain methods.
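
For example (a sketch rather than lesson code), a Square could inherit from the Rectangle class defined above, reusing its get_area method and specialising only the constructor:

class Square(Rectangle):
    def __init__(self, side):
        # A square is a rectangle with equal sides; get_area is inherited unchanged.
        super().__init__(width=side, height=side)

print(Square(4).get_area())  # 16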

As with functional programming, there are times when object oriented programming is well suited, and times where it is not.

Good uses:

One downside of OOP is ending up with very large classes that contain complex methods. Because these are methods on the class, it can be hard to know up front what side effects they have on the class's state. This can make maintenance hard.

Classes and functional programming

Using classes is compatible with functional programming. In fact, grouping data into logical structures (such as three numbers into a vector) is a vital step in writing readable and maintainable code with any approach. However, when writing in a functional style, classes should be immutable: the methods they provide are read-only and should not modify the state of the instance. If you need an object with different values, you create a new instance holding the new values.
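
One way to get immutability in Python (a sketch, not something the lesson requires) is a frozen dataclass, where "changing" a value means constructing a new instance:

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Reading:
    day: int
    value: float

first = Reading(day=1, value=2.0)
second = replace(first, value=3.0)  # a new instance; first is unchanged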

Don’t use features for the sake of using features. Code should be as simple as it can be, but not any simpler. If you know your function only makes sense to operate on circles, then don’t accept shapes just to use polymorphism!

Key Points

  • Classes can help separate code so it is easier to understand.

  • By using interfaces, code can become more decoupled.

  • Decoupled code is easier to test, and easier to maintain.


Architecting code to separate responsibilities

Overview

Teaching: 15 min
Exercises: 50 min
Questions
  • What is the point of the MVC architecture?

  • How do we design larger solutions?

  • How can we tell what is and isn't an appropriate abstraction?

Objectives
  • Understand the use of common design patterns to improve the extensibility, reusability and overall quality of software.

  • Be able to design large changes to the codebase.

  • Understand how to determine correct abstractions.

Introduction

Model-View-Controller (MVC) is a way of separating out the different responsibilities of a typical application. Specifically we have:

  • the model - the data, and the logic that operates on it
  • the view - how that data is presented to the user (for example as text output or a graph)
  • the controller - the code that accepts user input and ties the model and the view together

Separating out these different responsibilities into different parts of the code will make the code much more maintainable. For example, if the view code is kept away from the model code, then testing the model code can be done without having to worry about how it will be presented.

It helps with readability, as it makes it easier to have each function doing just one thing.

It also helps with maintainability - if the UI requirements change, these changes are easily isolated from the more complex logic.

Separating out responsibilities

The key thing to take away from MVC is the distinction between model code and view code.

What about the controller?

The view and the controller tend to be more tightly coupled and it isn’t always sensible to draw a thick line dividing these two. Depending on how the user interacts with the software this distinction may not be possible (the code that specifies there is a button on the screen, might be the same code that specifies what that button does). In fact, the original proposer of MVC groups the views and the controller into a single element, called the tool. Other modern architectures like Model-View-Presenter do away with the controller and instead separate out the layout code from a programmable view of the UI.

The view code might be hard to test, or use libraries to draw the UI, but should not contain any complex logic, and is really just a presentation layer on top of the model.

The model, conversely, should not really care how the data is displayed. For example, perhaps the UI always presents dates as “Monday 24th July 2023”, but the model would still store this using a Date rather than just that string.
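
As a tiny sketch of that separation, the model can hold a real date object while only the view decides how to format it:

from datetime import date

measurement_date = date(2023, 7, 24)  # model code stores a date, not a display string

# View code chooses the presentation (adding the 'th' suffix would be more view logic).
print(measurement_date.strftime('%A %d %B %Y'))  # Monday 24 July 2023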

Exercise: Identify model and view parts of the code

Looking at the code inside compute_data.py,

  • What parts should be considered model code?
  • What parts should be considered view code?
  • What parts should be considered controller code?

Solution

  • The computation of the standard deviation is model code
  • Reading the data from the CSV is also model code.
  • The display of the output as a graph is the view code.
  • The logic that processes the supplied flags is the controller.

Within the model there is further separation that makes sense. For example, as we did in the last episode, separating out the impure code that interacts with the file system from the pure calculations helps with readability and testability. Nevertheless, the MVC approach is a great starting point when thinking about how you should structure your code.

Exercise: Split out the model code from the view code

Refactor analyse_data such that the view code we identified in the last exercise is removed from the function, so the function contains only model code, and the view code is moved elsewhere.

Solution

The idea here is for analyse_data not to have any "view" considerations. That is, it should just compute and return the data.

def analyse_data(data_source):
    """Calculate the standard deviation by day between datasets.
    Works out the mean inflammation value for each day across all datasets,
    then returns the standard deviation of these means."""
    data = data_source.load_inflammation_data()
    daily_standard_deviation = compute_standard_deviation_by_day(data)

    return daily_standard_deviation

There can be a separate bit of code that chooses how that should be presented, e.g. as a graph:

if args.full_data_analysis:
  _, extension = os.path.splitext(InFiles[0])
  if extension == '.json':
    data_source = JSONDataSource(os.path.dirname(InFiles[0]))
  elif extension == '.csv':
    data_source = CSVDataSource(os.path.dirname(InFiles[0]))
  else:
    raise ValueError(f'Unsupported file format: {extension}')
  data_result = analyse_data(data_source)
  graph_data = {
    'standard deviation by day': data_result,
  }
  views.visualize(graph_data)
  return

You might notice this is more-or-less the change we did to write our regression test. This demonstrates that splitting up model code from view code can immediately make your code much more testable. Ensure you re-run our regression test to check this refactoring has not changed the output of analyse_data.

Programming patterns

MVC is a programming pattern. Programming patterns are templates for structuring code. Patterns are a useful starting point for how to design your software. They also work as a common vocabulary for discussing software designs with other developers.

The Refactoring Guru website has a list of programming patterns. They aren’t all good design decisions, and can certainly be over-applied, but learning about them can be helpful for thinking at a big picture level about software design.

For example, the visitor pattern is a good way of separating the problem of how to move through the data from a specific action you want to perform on the data.

Having a terminology for these approaches can facilitate discussions where everyone is familiar with them. However, patterns cannot replace a full design, as most problems will require a bespoke design that maps cleanly on to the specific problem you are trying to solve.

Architecting larger changes

When creating a new application, or creating a substantial change to an existing one, it can be really helpful to sketch out the intended architecture on a whiteboard (pen and paper works too, though of course it might get messy as you iterate on the design!).

The basic idea is you draw boxes that will represent different units of code, as well as other components of the system (such as users, databases etc). Then connect these boxes with lines where information or control will be exchanged. These lines represent the interfaces in your system.

As well as helping to visualise the work, doing this sketch can flush out potential issues, for example a circular dependency between two sections of the design. It can also help with estimating how long the work will take, as it forces you to consider all the components that need to be made.

Diagrams aren't foolproof, and often the things we haven't considered won't make it onto the diagram, but they are a great starting point to break down the different responsibilities and think about the kinds of information different parts of the system will need.

Exercise: Design a high-level architecture

Sketch out a design for a new feature requested by a user

“I want there to be a Google Drive folder that when I upload new inflammation data to the software automatically pulls it down and updates the analysis. The new result should be added to a database with a timestamp. An email should then be sent to a group email notifying them of the change.”

Solution

Diagram showing proposed architecture of the problem

An abstraction too far

So far we have seen how abstractions are good for making code easier to read, maintain and test. However, it is possible to introduce too many abstractions.

All problems in computer science can be solved by another level of indirection except the problem of too many levels of indirection

When you introduce an abstraction, if the reader of the code needs to understand what is happening inside the abstraction, it has actually made the code harder to read. When code is just in the function, it can be clear to see what it is doing. When the code is calling out to an instance of a class that, thanks to polymorphism, could be a range of possible implementations, the only way to find out what is actually being called is to run the code and see. This is much slower to understand, and actually obfuscates meaning.

It is a judgement call as to whether you have made the code too abstract. If you have to jump around a lot when reading the code, that is a clue that it is too abstract. Similarly, if there are two parts of the code that always need updating together, that is again an indication of an incorrect or over-zealous abstraction.

You Ain’t Gonna Need It

There are different approaches to designing software. One principle that is popular is called You Ain’t Gonna Need it - “YAGNI” for short. The idea is that, since it is hard to predict the future needs of a piece of software, it is always best to design the simplest solution that solves the problem at hand. This is opposed to trying to imagine how you might want to adapt the software in future and designing the code with that in mind.

Then, since you know the problem you are trying to solve, you can avoid making your solution unnecessarily complex or abstracted.

In our example, it might be tempting to abstract how the CSVDataSource walks the file tree into a class. However, since we only have one strategy for exploring the file tree, this would just create indirection for the sake of it.

All of this is a judgement call. For example, in this case, perhaps it would make sense to at least pull the file parsing out into a separate class, while not making CSVDataSource configurable, as in the sketch below. That way, it is clear to see how the file tree is being walked (there's no polymorphism going on) without mixing the parsing code in with the file-finding code. There are no right answers, just guidelines.
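
A sketch of what that middle ground might look like (the CSVParser name is hypothetical, and the code reuses the glob, os and models imports from the earlier examples):

import glob
import os

from inflammation import models


class CSVParser:
    """Knows how to turn one CSV file into an array of inflammation data."""
    def parse(self, file_path):
        return models.load_csv(file_path)


class CSVDataSource:
    """Finds the CSV files; parsing is delegated, but the choice of parser is fixed."""
    def __init__(self, dir_path):
        self.dir_path = dir_path
        self._parser = CSVParser()  # no polymorphism here: there is only one way to parse

    def load_inflammation_data(self):
        data_file_paths = glob.glob(os.path.join(self.dir_path, 'inflammation*.csv'))
        if len(data_file_paths) == 0:
            raise ValueError(f"No inflammation CSV files found in path {self.dir_path}")
        return [self._parser.parse(path) for path in data_file_paths]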

Exercise: Applying to real world examples

Think about the examples of good and bad code you identified at the start of the episode. Identify what kinds of principles were and weren't being followed. Identify some refactorings that could be performed to improve the code. Discuss your ideas as a group.

Conclusion

Good architecture is not about applying rules blindly; it comes from practice and from taking care over the things that matter:

Practice makes perfect. One way to practise is to take code that you already have and think about how it might be redesigned. Another is to always try to leave code in a better state than you found it. So when you're working on a less well structured part of the code, start by refactoring it so that your change fits in cleanly. Doing this over time, together with your colleagues, will improve your software architecture skills as well as improving the code.

Key Points

  • By splitting up the “view” code from “model” code, you allow easier re-use of code.

  • YAGNI - you ain’t gonna need it - don’t create abstractions that aren’t useful.

  • Sketching a diagram of the code can clarify how it is supposed to work, and troubleshoot problems early.


Section 4: Collaborative Software Development for Reuse

Overview

Teaching: 5 min
Exercises: 0 min
Questions
  • What practices help us develop software collaboratively that will make it easier for us and others to further develop and reuse it?

Objectives
  • Understand the code review process and employ it to improve the quality of code.

  • Understand the process and best practices for preparing Python code for reuse by others.

When changes - particularly big changes - are made to a codebase, how can we as a team ensure that these changes are well considered and represent good solutions? And how can we increase the overall knowledge of a codebase across a team? Sometimes project goals and time pressures take precedence and producing maintainable, reusable code is not given the time it deserves. So, when a change or a new feature is needed, often the shortest route to making it work is taken rather than a more well thought-out solution. For this reason, it is important not to write code alone and in isolation, but to have other team members check our code and act as a measure of our coding standards. This process of having multiple team members comment on key code changes is called code review - it is one of the most important practices of collaborative software development, helping to ensure that 'good' coding standards are achieved and maintained within a team, as well as increasing knowledge about the codebase across the team. We'll thus look at the benefits of reviewing code, in particular the value of this type of activity within a team, and how this can fit within various ways of team working. We'll see how GitHub can support code review activities via pull requests, and how we can do these ourselves making use of best practices.

After that, we’ll look at some general principles of software maintainability and the benefits that writing maintainable code can give you. There will also be some practice at identifying problems with existing code, and some general, established practices you can apply when writing new code or to the code you’ve already written. We’ll also look at how we can package software for release and distribution, using Poetry to manage our Python dependencies and produce a code package we can use with a Python package indexing service to illustrate these principles.

Software design and architecture

Key Points

  • Agreeing on a set of best practices within a software development team will help to improve your software’s understandability, extensibility, testability, reusability and overall sustainability.


Developing Software In a Team: Code Review

Overview

Teaching: 30 min
Exercises: 40 min
Questions
  • How do we develop software in a team?

  • What is code review and how can it improve the quality of code?

Objectives
  • Describe commonly used code review techniques.

  • Understand how to do a pull request via GitHub to engage in code review with a team and contribute to a shared code repository.

Introduction

So far in this course we've focused on learning software design and (some) technical practices, tools and infrastructure that help the development of software in a team environment, but in an individual setting. Despite developing tests to check our code, no one else from the team has looked at our code before we merged it into the main development stream. Software is often designed and built as part of a team, so in this episode we'll be looking at how to manage the process of team software development and improve our code by engaging in a code review process with other team members.

Collaborative Code Development Models

The way your team provides contributions to the shared codebase depends on the type of development model you use in your project. Two commonly used models are:

  • Fork and pull model
  • Shared repository model

Fork and Pull Model

In this model, anyone can fork an existing repository (to create their copy of the project linked to the source) and push changes to their personal fork. A contributor can work independently on their own fork as they do not need permissions on the source repository to push modifications to a fork they own. The changes from contributors can then be pulled into the source repository by the project maintainer on request and after a code review process. This model is popular with open source projects as it reduces the start up costs for new contributors and allows them to work independently without upfront coordination with source project maintainers. So, for example, you may use this model when you are an external collaborator on a project rather than a core team member.

Shared Repository Model

In this model, collaborators are granted push access to a single shared code repository. By default, collaborators have write access to the main branch. However, it is best practice to create feature branches for new developments and to protect the main branch (see the extra on protecting the main branch for how to do this). While this model requires more upfront coordination, it makes it easier to share each other's work, so it works well for more stable teams. It is more prevalent with teams and organizations collaborating on private projects.

Regardless of the collaborative code development model you and your collaborators use - code reviews are one of the widely accepted best practices for software development in teams and something you should adopt in your development process too.

Code Review

Code review is a software quality assurance practice where one or several people from the team (different from the code’s author) check the software by viewing parts of its source code.

Code review is one of the most useful team code development practices - someone checks your design or code for errors, they get to learn from your solution, having to explain code to someone else clarifies your rationale and design decisions in your mind too, and collaboration helps to improve the overall team software development process. It is universally applicable throughout the software development cycle - from design to development to maintenance. According to Michael Fagan, the author of the code inspection technique, rigorous inspections can remove 60-90% of errors from the code even before the first tests are run (Fagan, 1976). Furthermore, according to Fagan, the cost to remedy a defect in the early (design) stage is 10 to 100 times less compared to fixing the same defect in the development and maintenance stages, respectively. Since the cost of bug fixes grows in orders of magnitude throughout the software lifecycle, it is far more efficient to find and fix defects as close as possible to the point where they were introduced.

There are several code review techniques with various degree of formality and the use of a technical infrastructure, including:

You can read more about these techniques in Five Types of Review.

It is worth trying multiple code review techniques to see what works best for you and your team. We will have a look at the tool-assisted code review process, which is likely to be the most effective and efficient. We will use GitHub’s built-in code review tool - pull requests, or PRs. It is a lightweight tool, included with GitHub’s core service for free and has gained popularity within the software development community in recent years.

Code Reviews via GitHub’s Pull Requests

Pull requests are fundamental to how teams review and improve code on GitHub (and similar code sharing platforms) - they let you tell others about changes you’ve pushed to a branch in a repository on GitHub and that your code is ready for review. Once a pull request is opened, you can discuss and review the potential changes with others on the team and add follow-up commits based on the feedback before your changes are merged from your feature branch into the develop branch. The name ‘pull request’ suggests you are requesting the codebase moderators to pull your changes into the codebase.

Such changes are normally done on a feature branch, to ensure that they are separate and self-contained, that the main branch only contains “production-ready” work, and that the develop branch contains code that has already been extensively tested. You create a branch for your work based on one of the existing branches (typically the develop branch but can be any other branch), do some commits on that branch, and, once you are ready to merge your changes, create a pull request to bring the changes back to the branch that you started from. In this context, the branch from which you branched off to do your work and where the changes should be applied back to is called the base branch, while the feature branch that contains changes you would like to be applied is the compare branch.

How you create your feature branches and open pull requests in GitHub will depend on your collaborative code development model:

In both development models, it is recommended to create a feature branch for your work and the subsequent pull request, even though you can submit pull requests from any branch or commit. This is because, with a feature branch, you can push follow-up commits as a response to feedback and update your proposed changes within a self-contained bundle. The only difference in creating a pull request between the two models is how you create the feature branch. In either model, once you are ready to merge your changes in - you will need to specify the base branch and the compare branch.

Code Review and Pull Requests In Action

Let’s see this in action - you are going to act as a reviewer on a change to the codebase raised by a fictional colleague on one of your course mate’s repository. Once this is done, you will then take on the role of the fictional colleague and respond to the review on your repository. If you are completing this course by yourself, you can raise the review on your own repository, review it there and finally respond to your own review comments. This is actually a very sensible thing to do in general - looking at your own code in a review window will allow you to spot mistakes you didn’t see before!

Here is an outline of the process of a tool based code review that we will be following:

%% Must disable useMaxWidth to have the diagram centre
%%{init: { 'sequence': {'useMaxWidth':false} } }%%
sequenceDiagram
    participant A as Author
    participant R as Reviewer
    A->>A: Write some code
    A->>R: Raise a pull request
    R->>R: Add comments to a review
    R->>A: Submit a review
    loop Until approved
        A->>R: Address or respond to review comments
        R->>A: Clarify or resolve comments
    end
    R->>A: Approve pull request
    A->>A: Merge pull request

Recall solution requirements SR1.1.1 from an earlier episode. Your fictional colleague has implemented it according to the specification and pushed it to a branch feature-std-dev. You will turn this branch into a pull request for your fictional colleague on your repository. You will then engage in code review for the change (acting as a code reviewer) on a course mate’s repository. Once complete, you will respond to the pull request on your repository from another team member.

Raising a pull request for your fictional colleague

  1. Head over to your remote repository in GitHub.
  2. Navigate to the pull requests tab.
  3. Create a new pull request by clicking the green New pull request button.
  4. Select the base and the compare branch, e.g. main and feature-std-dev, respectively. Recall that the base branch is where you want your changes to be merged and the compare branch contains your changes.
  5. Click Create pull request to open the request.
  6. Add a comment describing the nature of the changes, and then submit the pull request by clicking Create pull request.
  7. At this point, the code review process is initiated.

We will now discuss what to look for in a code review, before practising this on this fictional change.

Reviewing a pull request

Once a review has been raised it is over to the reviewer to review the code and submit a review.

Things to look for in a code review

Reviewing code effectively takes practice. However, here is some guidance on the kinds of things you should be looking for when reviewing a piece of code.

Start by understanding what the code should do, by reading the specification/user requirements, the pull request description and talking to the developer. In this case, understand what SR1.1.1 means.

Once you’re happy, start reading the code (skip the test code for now). You’re going to be assessing the code in 4 key areas:

Is the code readable

Is the code a minimal change

Is the structure of the code clear

Is there appropriate and up-to-date documentation

Adding a review comment

To add comments to a review, you need to open up the pull request and go to the Files changed tab. When you find a line that you want to add a comment to, click on the plus (+) icon next to that line. You can also add comments referring to multiple lines by clicking the plus and dragging down over the relevant lines.

If you want to make a concrete suggestion, such as renaming a variable, you can click the Add a suggestion button (which looks like a document with a plus and a minus in it). This will populate the comment with the existing code, which you can edit to be what you think the code should be. GitHub will then provide a button for the author to apply your change directly.

Write your comment in the box, and then click Start review. This will save your comment, but not publish it yet.

You can use Add single comment to immediately post a comment. However, it is best to batch the comments into a single review, so that the author knows when you have finished adding comments (and avoid spamming their email with notifications).

Continue adding comments in this way, using the Add review comment button on subsequent comments.

Effective review comments

  • Make sure your review comments are specific and actionable.
  • Try to be as specific as you can: rather than "this code is unclear", prefer "I don't understand what values this variable can hold".
  • Make it clear in the comment if you want something to change as part of this pull request.
  • Ideally provide a concrete suggestion (e.g. better variable name).

Exercise: review some code

Pair up in your group and go to the pull request your partner created on their repository. If there is an odd number of people in your group, three people can go in a round robin fashion (the first team member reviews the pull request on the second member's repository and receives comments on the pull request on their own repository from the third team member). If you are going through the material on your own and do not have a collaborator, you can be the reviewer on the pull request on your own repository.

Review the code, looking for the kinds of problems that we have just discussed. There are examples of all the 4 main areas in the pull request, so try to make at least one suggestion for each area.

Don’t submit your review just yet.

Solution

Here are some of the things you might have found were wrong with the code:

Is the code readable

  • Unclear function name s_dev - uses an uncommon abbreviation increasing mental load when reading code that calls this function, prefer standard_deviation.
  • Variable number not clear what it contains — prefer business-logic name like mean or mean_of_data

Is the code minimal

  • Could have used np.std to compute standard deviation of data without having to reimplement from scratch.

Does the code have a clean structure

  • Have the function return the data, rather than having the graph name (a view layer consideration) leak into the model code.

Is the documentation up to date and correct

  • The docs say it returns the standard deviation, but it actually returns a dictionary containing the standard deviation.

Making sure code is valid

The other key thing you want to verify in code review is that the code is correct and well tested. One approach to do this is to build up a list of tests you expect to see (and the results you’d expect them to have), and then verify that all these tests are present and correct.

Start by listing out all the tests you’d expect to see based on the specification.

As you are going through the code, add to this list with any more tests you think of, making sure to add tests for:

Once you have built the list, go through the tests in the pull request. Make sure the tests test what you expect (so inspect them closely!).

Submit a review

Once you have a list of tests you want the author to add, it is time to submit your review.

To do this, click the Finish your review button at the top of the Files changed tab.

Using the finishing your review dialog

In the comment box, you can add any comments that aren’t associated with a specific line. For example, you can put the list of tests that you want to see added here.

Next you will select one of Comment, Approve or Request changes.

Finally, you can press Submit review. This will publish all the comments you’ve made on the review and let the author know that the review is complete and it is their turn for action.

Exercise: review the code for suitable tests

Remind yourself of the specification of SR1.1.1 and write a list of tests you’d expect to see for this feature. Review the code again and expand this list to include any other edge cases the code makes you think of. Go through the tests in the pull request and work out which tests are present.

Once you are happy, you can submit your review. Select Request changes to let the author know they need to address your comments.

Solution

Your list might include the following:

  1. Standard deviation for one patient with multiple observations.
  2. Standard deviation for two patients.
  3. Graph includes a standard deviation graph.
  4. Standard deviation function should raise an error if given empty data.
  5. Computing standard deviation where deviation is different from variance.
  6. Standard deviation function should give correct result given negative inputs.
  7. Function should work with numpy arrays.

Looking at the tests in the PR, you might be content that tests for 1, 4 and 7 are present so you would request changes to add tests 2, 3, 5 and 6.

In looking at the test you hopefully noticed that the test for numpy arrays is currently spuriously passing as it does not use the return value from the function in the assert.

You may have spotted that the function actually computes the variance rather than the standard deviation. Perhaps that is even what made you think to add the test for some data where the variance and standard deviation are different. In more complex examples, it is often easier to spot code that looks like it could be wrong and think of a test that will exercise it. This saves embarrassment if the code turns out to be right, means you have the missing test written if it is wrong, and is often quicker than trying to execute the code in your head to find out if it is correct.

What not to look for in a code review

The overriding priority for reviewing code should be making sure progress is being made - don’t let perfect be the enemy of good here. According to “Best Kept Secrets of Peer Code Review” (Cohen, 2006) it has been shown that the first hour of reviewing code is the most effective, with diminishing returns after that.

To that end, here are a few things you shouldn’t be trying to spot when reviewing:

  • Linting issues, or anything else that an automated tool can spot - get CI to do it.
  • Bugs - instead make sure there are tests for all cases.
  • Issues that pre-date the change - raise a PR fixing these issues separately to avoid heading down a rabbit hole.
  • Architecture re-writes - try to have design discussions upfront, or else have a meeting to decide whether the code needs to be rewritten.

Responding to review comments

When you receive comments on your review, there are a few different things that you will want to do.

With some, you will understand and agree with what the reviewer is saying. With these comments, you should make the change to your code on your branch. Once you’ve made the change you can commit it. It might be helpful to add a thumbs up reaction to the comment, so the reviewer knows you have addressed it.

Adding an emoji reaction to a review comment

With some, the comment might not make total sense. You can reply to comments for clarification.

Responding to a review comment

However, if you disagree, or are really lost as to what they are driving at, it is best to talk to them in person. Discussions on code reviews can often feel quite adversarial - discussing what the best solution is in person can often defuse this.

Exercise: responding and addressing comments

Look at the PR that you created on your repository, which should now have someone else's comments on it. For each comment, either reply explaining why you don't think the change is necessary, or make the change and push a commit fixing it. You can reply to the comment indicating you have done it.

At the same time, people will be addressing your comments. If you’re happy that your comment has been suitably addressed, you can mark it as resolved. Once you’re happy they have all been addressed, you can approve the PR. To approve a PR, submit a new review and this time select Approve. This tells the author you are happy for them to merge the pull request.

Approve a pull request

  1. Once the reviewer approves the changes, the person whose repository it is can merge onto the base branch. Typically, it is the responsibility of the code's author to do the merge but this may differ from team to team. In our case, you will merge the changes on the PR on your repository.
  2. Delete the merged branch to reduce the clutter in the repository.

Making code easy to review

There are a few things you can do when raising a pull request to make it as easy as possible for the reviewer to review your code:

The most important thing to keep in mind is how long your pull request is. Smaller changes, that just make one small improvement, will be much quicker and easier to review. There is no golden rule, but studies into code review show that you should not review more than 400 lines of code at a time, so this is a reasonable target to aim for. You can refer to some studies and Google recommendations as to what a “large pull request” is but be aware that it is not an exact science.

Even within a single review, try to keep each commit to one logical change. This can help if your review would otherwise be too large. In particular, if you've reformatted, refactored and changed the behaviour of the code, make sure each of these is in a separate commit (i.e. reformat the code, commit; refactor the code, commit; alter the behaviour of the code, commit).
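
In practice (a sketch of the workflow using standard Git commands; the commit messages are only illustrative) that might look like:

git add -p    # stage just the reformatting changes
git commit -m "Reformat compute_data.py (no behaviour change)"
git add -p    # then stage the refactoring
git commit -m "Extract pure function for daily standard deviation"
git add -p    # and finally the behaviour change
git commit -m "Change analysis to graph standard deviation by day"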

Make sure you write a clear description of the content and purpose of the change. This should be provided as the pull request description. This should provide the context needed to read the code.

It is also a good idea to review your code yourself, before requesting a review. In doing this you will spot the more obvious issues with your code, allowing your reviewer to focus on the things you cannot spot.

Empathy in review comments

Code is written by humans (mostly!), and code review is a form of communication. As such, empathy is important for effective reviewing.

When reviewing code, it can sometimes be frustrating when code is confusing, particularly as it will be implemented differently to how you would have done it. However, it is important as a reviewer to be compassionate towards the person whose code you are reviewing. Specifically:

Designing a review process

To be effective, code review needs to be a process that is followed by everyone developing the code. Everyone should believe that the process provides value.

One way to foster this is to design the process as a team. When you’re doing this you should consider:

You could also consider using pull request states in GitHub:

Once you’ve introduced a review process, you should monitor (either formally or informally) how well it is working.

It is important that reviews are processed quickly, to avoid costly context switching. We recommend aiming for 3 hours to get a first review, with the PR being merged the same day in most cases. If you are regularly missing these targets, then you should review where things are getting stuck and work out what you can do to move things along.

Exercise: Code Review in Your Own Working Environment

In this episode we have looked at why and how to use a tool driven code review process using GitHub pull requests. We’ve also looked at some best practices for doing code reviews in general.

Now think about how you typically develop code. What benefits do you think you would see from introducing a code review process in your work environment? How might you institute code review practices within your environment? Write down a process for a tool-assisted code review, answering these questions.

Once complete, discuss with the rest of the class the advantages of a code review process and the challenges you think you'd face in implementing it in your own working environment.

Solution

The purposes of code review include:

  • improving internal code readability, understandability, quality and maintainability
  • checking for coding standards compliance, code uniformity and consistency
  • checking for test coverage and detecting bugs and code defects early
  • detecting performance problems and identifying code optimisation points
  • finding alternative/better solutions.
  • sharing knowledge of the code, and of coding standards and expectations of quality

Finally, it helps increase the sense of collective code ownership and responsibility, which in turn helps increase the “bus factor” and reduce the risk resulting from information and capabilities being held by a single person “responsible” for a certain part of the codebase and not being shared among team members.

Challenges you might face introducing a code review process:

  • Complaints that it is a waste of time
  • Creating a negative atmosphere where people are overly critical of each others work, or are defensive of their own
  • Perfectionism leading to slower development
  • People not sharing code to avoid the review process

Make sure to monitor whether these are happening, and adjust the process accordingly.

Other reading

There are multiple perspectives to a code review process - from general practices to technical details relating to different roles involved in the process. We have discussed the main points, but do check these useful code review blogs from Swarmia and Smartbear.

The key thing is to try it, and iterate the process until it works well for your team.

Key Points

  • Code review is a team software quality assurance practice where team members look at parts of the codebase in order to improve their code’s readability, understandability, quality and maintainability.

  • It is important to agree on a set of best practices and establish a code review process in a team to help sustain good, stable and maintainable code for many years.


Preparing Software for Reuse and Release

Overview

Teaching: 35 min
Exercises: 20 min
Questions
  • What can we do to make our programs reusable by others?

  • How should we document and license our code?

Objectives
  • Describe the different levels of software reusability

  • Explain why documentation is important

  • Describe the minimum components of software documentation to aid reuse

  • Create a repository README file to guide others to successfully reuse a program

  • Understand other documentation components and where they are useful

  • Describe the basic types of open source software licence

  • Explain the importance of conforming to data policy and regulation

  • Prioritise and work on improvements for release as a team

Introduction

In previous episodes we’ve looked at skills, practices, and tools to help us design and develop software in a collaborative environment. In this lesson we’ll be looking at a critical piece of the development puzzle that builds on what we’ve learnt so far - sharing our software with others.

The Levels of Software Reusability - Good Practice Revisited

Let’s begin by taking a closer look at software reusability and what we want from it.

Firstly, whilst we want to ensure our software is reusable by others, as well as ourselves, we should be clear what we mean by 'reusable'. There are a number of definitions out there, but a helpful one written by Benureau and Rougier in 2017 offers the following levels by which software can be characterised:

  1. Re-runnable: the code is simply executable and can be run again (but there are no guarantees beyond that)
  2. Repeatable: the software will produce the same result more than once
  3. Reproducible: published research results generated from the same version of the software can be generated again from the same input data
  4. Reusable: easy to use, understand, and modify
  5. Replicable: the software can act as an available reference for any ambiguity in the algorithmic descriptions made in the published article. That is, a new implementation can be created from the descriptions in the article that provide the same results as the original implementation, and that the original - or reference - implementation, can be used to clarify any ambiguity in those descriptions for the purposes of reimplementation

Later levels imply the earlier ones. So what should we aim for? As researchers who develop software - or developers who write research software - we should be aiming for at least the fourth one: reusability. Reproducibility is required if we are to successfully claim that what we are doing when we write software fits within acceptable scientific practice, but it is also crucial that we can write software that can be understood and ideally modified by others. If others are unable to verify that a piece of software follows published algorithms, how can they be certain it is producing correct results? Where ‘others’, of course, can include a future version of ourselves.

Documenting Code to Improve Reusability

Reproducibility is a cornerstone of science, and scientists who work in many disciplines are expected to document the processes by which they’ve conducted their research so it can be reproduced by others. In medicinal, pharmacological, and similar research fields for example, researchers use logbooks which are then used to write up protocols and methods for publication.

Many things we’ve covered so far contribute directly to making our software reproducible - and indeed reusable - by others. A key part of this we’ll cover now is software documentation, which is ironically very often given short shrift in academia. This is often the case even in fields where the documentation and publication of research method is otherwise taken very seriously.

A few reasons for this are that writing documentation is often considered:

A very useful form of documentation for understanding our code is code commenting, and is most effective when used to explain complex interfaces or behaviour, or the reasoning behind why something is coded a certain way. But code comments only go so far.
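
For example, a comment is at its most useful when it records the reasoning behind a choice rather than restating what the code does. Here is a hypothetical sketch in the style of our inflammation analysis (the data file path is purely illustrative):

import numpy as np

# Load one trial's data: a CSV with a row per patient and a column per day.
data = np.loadtxt("data/inflammation-01.csv", delimiter=",")

# Average over patients (axis 0), not days (axis 1): downstream plots compare
# day-by-day trends across the whole trial, so we need one value per day.
daily_means = np.mean(data, axis=0)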

Whilst it’s certainly arguable that writing documentation isn’t as exciting as writing code, it doesn’t have to be expensive and brings many benefits. In addition to enabling general reproducibility by others, documentation…

In the next section we’ll see that writing a sensible minimum set of documentation in a single document doesn’t have to be expensive, and can greatly aid reproducibility.

Writing a README

A README file is the first piece of documentation (perhaps other than publications that refer to it) that people should read to acquaint themselves with the software. It concisely explains what the software is about and what it’s for, and covers the steps necessary to obtain and install the software and use it to accomplish basic tasks. Think of it not as a comprehensive reference of all functionality, but more a short tutorial with links to further information - hence it should contain brief explanations and be focused on instructional steps.

Our repository already has a README that describes the purpose of the repository for this workshop, but let’s replace it with a new one that describes the software itself. First let’s delete the old one:

$ rm README.md

In the root of your repository create a replacement README.md file. The .md indicates this is a Markdown file, a lightweight markup language: essentially plain text with some extra syntax for formatting. A big advantage of Markdown files is that they can be read as plain text or rendered with their formatting applied, and they are very quick to write. GitHub provides a very useful guide to writing Markdown for its repositories.

Let’s start writing README.md using a text editor of your choice and add the following line.

# Inflam

So here, we’re giving our software a name. Ideally something unique, short, snappy, and perhaps to some degree an indicator of what it does. We would ideally rename the repository to reflect the new name, but let’s leave that for now. In Markdown, the # designates a heading, two ## are used for a subheading, and so on. The Software Sustainability Institute’s guide on naming projects and products provides some helpful pointers.

We should also add a short description underneath the title.

# Inflam
Inflam is a data management system written in Python that manages trial data used in clinical inflammation studies.

To give readers an idea of the software’s capabilities, let’s add some key features next:

# Inflam
Inflam is a data management system written in Python that manages trial data used in clinical inflammation studies.

## Main features
Here are some key features of Inflam:

- Provide basic statistical analyses over clinical trial data
- Ability to work on trial data in Comma-Separated Value (CSV) format
- Generate plots of trial data
- Analytical functions and views can be easily extended based on its Model-View-Controller architecture

As well as knowing what the software aims to do and its key features, it’s very important to specify what other software is needed to use it (typically called dependencies or prerequisites):

# Inflam
Inflam is a data management system written in Python that manages trial data used in clinical inflammation studies.

## Main features
Here are some key features of Inflam:

- Provide basic statistical analyses over clinical trial data
- Ability to work on trial data in Comma-Separated Value (CSV) format
- Generate plots of trial data
- Analytical functions and views can be easily extended based on its Model-View-Controller architecture

## Prerequisites
Inflam requires the following Python packages:

- [NumPy](https://www.numpy.org/) - makes use of NumPy's statistical functions
- [Matplotlib](https://matplotlib.org/stable/index.html) - uses Matplotlib to generate statistical plots

The following optional packages are required to run Inflam's unit tests:

- [pytest](https://docs.pytest.org/en/stable/) - Inflam's unit tests are written using pytest
- [pytest-cov](https://pypi.org/project/pytest-cov/) - Adds test coverage stats to unit testing

Here we’re making use of Markdown links, with some text describing the link within [] followed by the link itself within ().

One really neat feature - and a common practice - of using many CI infrastructures is that we can include the status of running recent tests within our README file. Just below the # Inflam title in our README.md file, add the following (replacing <your_github_username> with your own):

# Inflam
![Continuous Integration build in GitHub Actions](https://github.com/<your_github_username>/python-intermediate-inflammation/workflows/CI/badge.svg?branch=main)
...

This will embed a badge (icon) at the top of our page that reflects the most recent GitHub Actions build status of your software repository, essentially showing whether the tests that were run when the last change was made to the main branch succeeded or failed.

That’s got us started with documenting our code, but there are other aspects we should also cover:

  • installation instructions
  • how to contribute to the project
  • how to get help
  • credits and acknowledgements
  • how to cite the software
  • the licence under which it is released

For more verbose sections, there are usually just highlights in the README with links to further information, which may be held within other Markdown files within the repository or elsewhere.

We’ll finish these off later. See Matias Singer’s curated list of awesome READMEs for inspiration.

Other Documentation

There are many other types of documentation you should also consider writing and making available, most of which are beyond the scope of this course. The key is to consider which audiences you need to write for, e.g. end users, developers, maintainers, etc., and what they need from the documentation. There’s a Software Sustainability Institute blog post on best practices for research software documentation that helpfully covers the kinds of documentation to consider and other effective ways to convey the same information.

One that you should always consider is technical documentation. This typically aims to help other developers understand your code sufficiently well to make their own changes to it, including external developers, other members in your team and a future version of yourself too. This may include documentation that covers the software’s architecture, including its different components and how they fit together, API (Application Programming Interface) documentation that describes the interface points designed into your software for other developers to use, e.g. for a software library, or technical tutorials/’HOW TOs’ to accomplish developer-oriented tasks.
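
For Python code, docstrings are a common way to provide this kind of API documentation, since tools such as Sphinx can extract them into browsable reference pages. Below is a minimal sketch of what such a docstring might look like for a hypothetical analysis function (the function name and parameters are illustrative):

import numpy as np

def daily_mean(data):
    """Calculate the mean inflammation value for each day across all patients.

    :param data: 2D NumPy array, with a row per patient and a column per day
    :returns: 1D NumPy array of per-day mean inflammation values
    """
    return np.mean(data, axis=0)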

Choosing an Open Source Licence

Software licensing is a whole topic in itself, so we’ll just summarise here. Your institution’s Intellectual Property (IP) team will be able to offer specific guidance that fits the way your institution thinks about software.

In IP law, software is considered a creative work of literature, so any code you write automatically has copyright protection applied. This copyright will usually belong to the institution that employs you, but this may be different for PhD students. If you need to check, this should be stated in your employment or studentship contract, or you can ask your institution’s IP team.

Since software is automatically under copyright, without a licence no one may legally:

  • copy your code
  • distribute copies of your code
  • modify or build upon your code

Fundamentally there are two kinds of licence, Open Source licences and Proprietary licences, which serve slightly different purposes:

  • Open Source licences grant permission in advance for anyone to use, modify and share the software, subject to the licence’s conditions
  • Proprietary licences reserve those rights, typically granting use only to specific users or organisations under negotiated terms

Within the open source licences, there are two categories, copyleft and permissive:

  • Copyleft licences (e.g. the GNU General Public License, GPL) require that any derived works are distributed under the same licence terms, keeping the code and its derivatives open
  • Permissive licences (e.g. MIT, BSD, Apache 2.0) place minimal conditions on reuse, usually just attribution, so derived works may be released under different terms, including proprietary ones

Which of these types of licence you prefer is up to you and those you develop code with. If you want more information, or help choosing a licence, the Choose An Open-Source Licence or tl;dr Legal sites can help.

Exercise: Preparing for Release

In a (hopefully) highly unlikely and thoroughly unrecommended scenario, your project leader has informed you of the need to release your software within the next half hour, so it can be assessed for use by another team. You’ll need to consider finishing the README, choosing a licence, and fixing any remaining problems you are aware of in your codebase. Ensure you prioritise and work on the most pressing issues first!

Merging into main

Once you’ve done these updates, commit your changes, and if you’re doing this work on a feature branch also ensure you merge it into develop, e.g.:

$ git checkout develop
$ git merge my-feature-branch

Finally, once we’ve fully tested our software and are confident it works as expected on develop, we can merge our develop branch into main:

$ git checkout main
$ git merge develop
$ git push

Tagging a Release in GitHub

There are many ways in which Git and GitHub can help us make a software release from our code. One of these is via tagging, where we attach a human-readable label to a specific commit. Let’s see what tags we currently have in our repository:

$ git tag

Since we haven’t tagged any commits yet, there’s unsurprisingly no output. We can create a new tag on the last commit we did by doing:

$ git tag -a v1.0.0 -m "Version 1.0.0"

So we can now do:

$ git tag
v1.0.0

And also, for more information:

$ git show v1.0.0

You should see something like this:

tag v1.0.0
Tagger: <Name> <email>
Date:   Fri Dec 10 10:22:36 2021 +0000

Version 1.0.0

commit 2df4bfcbfc1429c12f92cecba751fb2d7c1a4e28 (HEAD -> main, tag: v1.0.0, origin/main, origin/develop, origin/HEAD, develop)
Author: <Name> <email>
Date:   Fri Dec 10 10:21:24 2021 +0000

	Finalising README.

diff --git a/README.md b/README.md
index 4818abb..5b8e7fd 100644
--- a/README.md
+++ b/README.md
@@ -22,4 +22,33 @@ Inflam requires the following Python packages:
 The following optional packages are required to run Inflam's unit tests:

 - [pytest](https://docs.pytest.org/en/stable/) - Inflam's unit tests are written using pytest
-- [pytest-cov](https://pypi.org/project/pytest-cov/) - Adds test coverage stats to unit testing
\ No newline at end of file
+- [pytest-cov](https://pypi.org/project/pytest-cov/) - Adds test coverage stats to unit testing
+
+## Installation
+- Clone the repo ``git clone repo``
+- Check everything runs by running ``python -m pytest`` in the root directory
+- Hurray 😊
+
+## Contributing
+- Create an issue [here](https://github.com/Onoddil/python-intermediate-inflammation/issues)
+  - What works, what doesn't? You tell me
+- Randomly edit some code and see if it improves things, then submit a [pull request](https://github.com/Onoddil/python-intermediate-inflammation/pulls)
+- Just yell at me while I edit the code, pair programmer style!
+
+## Getting Help
+- Nice try
+
+## Credits
+- Directed by Michael Bay
+
+## Citation
+Please cite [J. F. W. Herschel, 1829, MmRAS, 3, 177](https://ui.adsabs.harvard.edu/abs/1829MmRAS...3..177H/abstract) if you used this work in your day-to-day life.
+Please cite [C. Herschel, 1787, RSPT, 77, 1](https://ui.adsabs.harvard.edu/abs/1787RSPT...77....1H/abstract) if you actually use this for scientific work.
+
+## License
+This source code is protected under international copyright law.  All rights
+reserved and protected by the copyright holders.
+This file is confidential and only available to authorized individuals with the
+permission of the copyright holders.  If you encounter this file and do not have
+permission, please contact the copyright holders and delete this file.
\ No newline at end of file

Now that we’ve added a tag, we need this reflected in our GitHub repository. You can push this tag to your remote by doing:

$ git push origin v1.0.0

What is a Version Number Anyway?

Software version numbers are everywhere, and there are many different schemes for assigning them. A popular one to consider is Semantic Versioning, where a given version number uses the format MAJOR.MINOR.PATCH. You increment the:

  • MAJOR version when you make incompatible API changes
  • MINOR version when you add functionality in a backwards compatible manner
  • PATCH version when you make backwards compatible bug fixes

You can also add a hyphen followed by characters to denote a pre-release version, e.g. 1.0.0-alpha1 (first alpha release) or 1.2.3-beta4 (fourth beta release)
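
Note that version strings do not order correctly as plain text. If you ever need to compare them in code, the third-party packaging library (which pip itself relies on) understands this kind of ordering, including pre-releases. A minimal sketch, assuming packaging is installed:

from packaging.version import Version

print("1.9.0" < "1.10.0")                     # False - plain string comparison goes character by character
print(Version("1.9.0") < Version("1.10.0"))   # True  - MAJOR.MINOR.PATCH compared numerically
print(Version("1.0.0a1") < Version("1.0.0"))  # True  - pre-releases sort before the final release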

We can now use the more memorable tag to refer to this specific commit. Plus, once we’ve pushed this back up to GitHub, it appears as a specific release within our code repository which can be downloaded in compressed .zip or .tar.gz formats. Note that these downloads just contain the state of the repository at that commit, and not its entire history.

Using features like tagging allows us to highlight commits that are particularly important, which is very useful for reproducibility purposes. We can (and should) refer to specific commits for software in academic papers that make use of results from software, but tagging with a specific version number makes that just a little bit easier for humans.

Conforming to Data Policy and Regulation

We may also wish to make data available to either be used with the software or as generated results. This may be via GitHub or some other means. An important aspect to remember with sharing data on such systems is that they may reside in other countries, and we must be careful depending on the nature of the data.

We need to ensure that we are still conforming to the relevant policies and guidelines regarding how we manage research data, which may include funding council, institutional, national, and even international policies and laws. Within Europe, for example, there’s the need to conform to things like GDPR. It’s a very good idea to make yourself aware of these aspects.

Key Points

  • The reuse battle is won before it is fought. Select and use good practices consistently throughout development and not just at the end.


Packaging Code for Release and Distribution

Overview

Teaching: 0 min
Exercises: 20 min
Questions
  • How do we prepare our code for sharing as a Python package?

  • How do we release our project for other people to install and reuse?

Objectives
  • Describe the steps necessary for sharing Python code as installable packages.

  • Use Poetry to prepare an installable package.

  • Explain the differences between runtime and development dependencies.

Why Package our Software?

We’ve now got our software ready to release - the last step is to package it up so that it can be distributed.

For very small pieces of software, for example a single source file, it may be appropriate to distribute to non-technical end-users as source code, but in most cases we want to bundle our application or library into a package. A package is typically a single file which contains within it our software and some metadata which allows it to be installed and used more simply - e.g. a list of dependencies. By distributing our code as a package, we reduce the complexity of fetching, installing and integrating it for the end-users.
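
You can see this kind of metadata for any package already installed in your environment using the standard library’s importlib.metadata module. A small sketch, using Matplotlib as an example on the assumption that it is installed in your virtual environment:

from importlib.metadata import metadata, requires

info = metadata("matplotlib")
print(info["Name"], info["Version"])   # the package's recorded name and version
print(requires("matplotlib"))          # the list of dependencies declared in its metadata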

In this session we’ll introduce one widely used method for building an installable package from our code. There is a range of methods in common use, so it’s likely you’ll also encounter projects which take different approaches.

There’s some confusing terminology in this episode around the use of the term “package”. This term is used to refer to both:

  • distributable packages - the single-file bundles (such as wheels) that we build, share and install with tools like pip
  • module packages - directories of Python source files containing an __init__.py, which we import in our code

Packaging our Software with Poetry

Installing Poetry

Because we’ve recommended Git Bash if you’re using Windows, we’re going to install Poetry using a different method to the officially recommended one. If you’re on macOS or Linux, are comfortable with installing software at the command line and want to use Poetry to manage multiple projects, you may instead prefer to follow the official Poetry installation instructions.

We can install Poetry much like any other Python distributable package, using pip:

$ source venv/bin/activate
$ pip3 install poetry

To test, we can ask where Poetry is installed:

$ which poetry
/home/alex/python-intermediate-inflammation/venv/bin/poetry

If you don’t get similar output, make sure you’ve got the correct virtual environment activated.

Poetry can also handle virtual environments for us, so in order to behave similarly to how we used them previously, let’s change the Poetry config to put them in the same directory as our project:

$ poetry config virtualenvs.in-project true

Setting up our Poetry Config

Poetry uses a pyproject.toml file to describe the build system and requirements of the distributable package. This file format was introduced to solve problems with bootstrapping packages (the preparatory processing needed before a package can even be built) under the older setup.py convention, and to support a wider range of build tools. It is described in PEP 518 (Specifying Minimum Build System Requirements for Python Projects).

Make sure you are in the root directory of your software project and have activated your virtual environment, then we’re ready to begin.

To create a pyproject.toml file for our code, we can use poetry init. This will guide us through the most important settings - for each prompt, we either enter our data or accept the default.

Displayed below are the questions you should see, along with the recommended responses to each, so try to follow these (but use your own contact details!).

NB: When you get to the questions about defining our dependencies, answer no, so we can do this separately later.

$ poetry init
This command will guide you through creating your pyproject.toml config.

Package name [example]:  inflammation
Version [0.1.0]: 1.0.0
Description []:  Analyse patient inflammation data
Author [None, n to skip]: James Graham <J.Graham@software.ac.uk>
License []:  MIT
Compatible Python versions [^3.8]: ^3.8

Would you like to define your main dependencies interactively? (yes/no) [yes] no
Would you like to define your development dependencies interactively? (yes/no) [yes] no
Generated file

[tool.poetry]
name = "inflammation"
version = "1.0.0"
description = "Analyse patient inflammation data"
authors = ["James Graham <J.Graham@software.ac.uk>"]
license = "MIT"

[tool.poetry.dependencies]
python = "^3.8"

[tool.poetry.dev-dependencies]

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"


Do you confirm generation? (yes/no) [yes] yes

We’ve called our package “inflammation” in the setup above, instead of “inflammation-analysis” like we did in our previous setup.py. This is because Poetry will automatically find our code if the name of the distributable package matches the name of our module package. If we wanted our distributable package to have a different name, for example “inflammation-analysis”, we could do this by explicitly listing the module packages to bundle - see the Poetry docs on packages for how to do this.

Project Dependencies

Previously, we looked at using a requirements.txt file to define the dependencies of our software. Here, Poetry takes inspiration from package managers in other languages, particularly NPM (Node Package Manager), often used for JavaScript development.

Tools like Poetry and NPM understand that there are two different types of dependency: runtime dependencies and development dependencies. Runtime dependencies are those dependencies that need to be installed for our code to run, like NumPy. Development dependencies are dependencies which are an essential part of your development process for a project, but are not required to run it. Common examples of development dependencies are linters and test frameworks, like pylint or pytest.

When we add a dependency using Poetry, Poetry will add it to the list of dependencies in the pyproject.toml file, add a reference to it in a new poetry.lock file, and automatically install the package into our virtual environment. If we don’t yet have a virtual environment activated, Poetry will create it for us - using the name .venv, so it appears hidden unless we do ls -a. Because we’ve already activated a virtual environment, Poetry will use ours instead. The pyproject.toml file has two separate lists, allowing us to distinguish between runtime and development dependencies.

$ poetry add matplotlib numpy
$ poetry add --dev pylint
$ poetry install

These two sets of dependencies will be used in different circumstances. When we build our package and upload it to a package repository, Poetry will only include references to our runtime dependencies. This is because someone installing our software through a tool like pip is only using it, but probably doesn’t intend to contribute to the development of our software and does not require development dependencies.

In contrast, if someone downloads our code from GitHub, together with our pyproject.toml, and installs the project that way, they will get both our runtime and development dependencies. If someone is downloading our source code, that suggests that they intend to contribute to the development, so they’ll need all of our development tools.

Have a look at the pyproject.toml file again to see what’s changed.
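
If you want to check the two lists programmatically rather than by eye, here is a minimal sketch using tomllib (bundled with Python 3.11+; older versions can use the third-party tomli package, which has the same API). Note that newer Poetry releases may record development dependencies under [tool.poetry.group.dev.dependencies] instead:

import tomllib

with open("pyproject.toml", "rb") as f:       # tomllib requires the file to be opened in binary mode
    config = tomllib.load(f)

print(config["tool"]["poetry"]["dependencies"])      # runtime dependencies (python, matplotlib, numpy)
print(config["tool"]["poetry"]["dev-dependencies"])  # development dependencies (pylint)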

Packaging Our Code

The final preparation we need to do is to make sure that our code is organised in the recommended structure. This is the Python module structure - a directory containing an __init__.py and our Python source code files. Make sure that the name of this Python package (inflammation - unless you’ve renamed it) matches the name of your distributable package in pyproject.toml unless you’ve chosen to explicitly list the module packages.

By convention distributable package names use hyphens, whereas module package names use underscores. While we could choose to use underscores in a distributable package name, we cannot use hyphens in a module package name, as Python will interpret them as a minus sign in our code when we try to import them.
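
A quick way to convince yourself of this is to check whether a name is a valid Python identifier, which is what an import statement requires:

print("inflammation".isidentifier())           # True  - fine as a module package name
print("inflammation-analysis".isidentifier())  # False - Python would read the hyphen as a minus sign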

Once we’ve got our pyproject.toml configuration done and our project is in the right structure, we can go ahead and build a distributable version of our software:

$ poetry build

This should produce two files for us in the dist directory. The one we care most about is the .whl or wheel file. This is the file that pip uses to distribute and install Python packages, so this is the file we’d need to share with other people who want to install our software.

Now if we gave this wheel file to someone else, they could install it using pip - you don’t need to run this command yourself, you’ve already installed it using poetry install above.

$ pip3 install dist/inflammation*.whl

The star in the line above is a wildcard, which means Bash should use any filenames that match that pattern, with any number of characters in place of the star. We could also rely on Bash’s autocomplete functionality and type dist/inflammation, then hit the Tab key if we’ve only got one version built.

After we’ve been working on our code for a while and want to publish an update, we just need to update the version number in the pyproject.toml file (using SemVer perhaps), then use Poetry to build and publish the new version. If we don’t increment the version number, people might end up using this version, even though they thought they were using the previous one. Any re-publishing of the package, no matter how small the changes, needs to come with a new version number. The advantage of SemVer is that the change in the version number indicates the degree of change in the code and thus the degree of risk of breakage when we update.

$ poetry build

In addition to the commands we’ve already seen, Poetry contains a few more that can be useful for our development process. For the full list see the Poetry CLI documentation.

The final step is to publish our package to a package repository. A package repository could be either public or private - while you may at times be working on public projects, it’s likely the majority of your work will be published internally using a private repository such as JFrog Artifactory. Every repository may be configured slightly differently, so we’ll leave that to you to investigate.

What if We Need More Control?

Sometimes we need more control over the process of building our distributable package than Poetry allows. There are many ways to distribute Python code in packages, with some degree of flux in terms of which methods are most popular. For a more comprehensive overview of Python packaging you can see the Python docs on packaging, which contains a helpful guide to the overall packaging process, or ‘flow’, including using the Twine tool as an alternative way to upload created packages to PyPI for distribution.

Optional Exercise: Enhancing our Package Metadata

The Python Packaging User Guide provides documentation on how to package a project using a manual approach to building a pyproject.toml file, and using Twine to upload the distribution packages to PyPI.

Referring to the section on metadata in the documentation, enhance your pyproject.toml with some additional metadata fields to improve the information describing your package.

Key Points

  • Poetry allows us to produce an installable package and upload it to a package repository.

  • Making our software installable with Pip makes it easier for others to start using it.

  • For complete control over building a package, we can use a setup.py file.


Section 5: Managing and Improving Software Over Its Lifetime

Overview

Teaching: 5 min
Exercises: 0 min
Questions
  • How do we manage the process of developing and improving our software?

  • How do we ensure we reuse other people’s code while maintaining the sustainability of our own software?

Objectives
  • Use established tools to track and manage software problems and enhancements in a team.

  • Understand the importance of critical reflection to improving software quality and reusability.

  • Improve software through feedback, work estimation, prioritisation and agile development.

In this section of the course we look at managing the development and evolution of software - how to keep track of the tasks the team has to do, how to improve the quality and reusability of our software for others as well as ourselves, and how to assess other people’s software for reuse within our project. The focus in this section will move beyond just software development to software management: internal planning and prioritising tasks for future development, management of internal communication as well as how the outside world interacts with and makes use of our software, how others can interact with us to report issues, and the ways we can successfully manage software improvement in response to feedback.

Managing software

In this section we will:

Key Points

  • For software to succeed it needs to be managed as well as developed.

  • Estimating the effort to deliver work items is a foundational tool for prioritising that work.


Managing a Collaborative Software Project

Overview

Teaching: 15 min
Exercises: 30 min
Questions
  • How can we keep track of identified issues and the list of tasks the team has to do?

  • How can we communicate within a team on code-related issues and share responsibilities?

  • How can we plan, prioritise and manage tasks for future development?

Objectives
  • Register and track progress on issues with the code in our project repository

  • Describe some different types of issues we can have with software

  • Manage communications on software development activities within the team using GitHub’s notification system Mentions

  • Use GitHub’s Project Boards and Milestones for software project management, planning sprints and releases

Introduction

Developing software is a project and, like most projects, it consists of multiple tasks. Keeping track of identified issues with the software, the list of tasks the team has to do, progress on each, prioritising tasks for future development, planning sprints and releases, etc., can quickly become a non-trivial task in itself. Without a good team project management process and framework, it can be hard to keep track of what’s done, or what needs doing, and particularly difficult to convey that to others in the team or share the responsibilities.

Using GitHub to Manage Issues With Software

As a piece of software is used, bugs and other issues will inevitably come to light - nothing is perfect! If you work on your code with collaborators, or have non-developer users, it can be helpful to have a single shared record of all the problems people have found with the code, not only to keep track of them for you to work on later, but to avoid people emailing you to report a bug that you already know about!

GitHub provides Issues - a framework for managing bug reports, feature requests, and lists of future work.

Go back to the home page for your python-intermediate-inflammation repository in GitHub, and click on the Issues tab. You should see a page listing the open issues on your repository - currently there should be none.

List of project issues in GitHub

Let’s go through the process of creating a new issue. Start by clicking the New issue button.

Creating a new issue in GitHub

When you create an issue, you can add a range of details to them. They can be assigned to a specific developer for example - this can be a helpful way to know who, if anyone, is currently working to fix the issue, or a way to assign responsibility to someone to deal with it.

They can also be assigned a label. The labels available for issues can be customised, and given a colour, allowing you to see at a glance the state of your code’s issues. The default labels available in GitHub include:

  • bug - an unexpected problem or unintended behaviour
  • documentation - an improvement or addition to the documentation
  • duplicate - the issue or pull request already exists elsewhere
  • enhancement - a request for a new feature
  • good first issue - a task suitable for newcomers to the project
  • help wanted - the maintainers would welcome help from outside the team
  • invalid - the issue doesn’t seem relevant or correct
  • question - further information is requested
  • wontfix - the issue won’t be worked on

You can also create your own custom labels to help with classifying issues. There are no rules really about naming the labels - use whatever makes sense for your project. Some conventional custom labels include: status:in progress (to indicate that someone has started working on the issue) and status:blocked (to indicate that progress on addressing the issue is blocked by another issue or activity).

As well as highlighting problems, the bug label can make code much more usable by allowing users to find out if anyone has had the same problem before, and also how to fix (or work around) it on their end. Enabling users to solve their own problems can save you a lot of time. In general, a good bug report should contain only one bug, specific details of the environment in which the issue appeared (e.g. operating system or browser, version of the software and its dependencies), and sufficiently clear and concise steps that allow a developer to reproduce the bug themselves. They should also be clear on what the bug reporter considers factual (“I did this and this happened”) and speculation (“I think it was caused by this”). If an error report was generated from the software itself, it’s a very good idea to include that in the issue.
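
For a Python project like ours, a short snippet can gather the environment details worth pasting into a bug report. A sketch, assuming NumPy and Matplotlib are the dependencies of interest:

import platform
import sys

import matplotlib
import numpy

print("OS:        ", platform.platform())
print("Python:    ", sys.version.split()[0])
print("NumPy:     ", numpy.__version__)
print("Matplotlib:", matplotlib.__version__)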

The enhancement label is a great way to communicate your future priorities to your collaborators but also to yourself - it’s far too easy to leave a software project for a few months to work on something else, only to come back and forget the improvements you were going to make. If you have other users for your code, they can use the label to request new features, or changes to the way the code operates. It’s generally worth paying attention to these suggestions, especially if you spend more time developing than running the code. It can be very easy to end up with quirky behaviour because of off-the-cuff choices during development. Extra pairs of eyes can point out ways the code can be made more accessible - the easier the code is to use, the more widely it will be adopted and the greater impact it will have.

One interesting label is wontfix, which indicates that an issue simply won’t be worked on for whatever reason. Maybe the bug it reports is outside of the use case of the software, or the feature it requests simply isn’t a priority. This can make it clear you’ve thought about an issue and dismissed it.

Locking and Pinning Issues

The Lock conversation and Pin issue buttons are both available from individual issue pages. Locking conversations allows you to block future comments on the issue, e.g. if the conversation around the issue is not constructive or violates your team’s code of conduct. Pinning issues allows you to pin up to three issues to the top of the issues page, e.g. to emphasise their importance.

Manage Issues With Your Code Openly

Having open, publicly-visible lists of the limitations and problems with your code is incredibly helpful. Even if some issues end up languishing unfixed for years, letting users know about them can save them a huge amount of work attempting to fix what turns out to be an unfixable problem on their end. It can also help you see at a glance what state your code is in, making it easier to prioritise future work!

Exercise: Our First Issue!

Individually, with a critical eye, think of an aspect of the code you have developed so far that needs improvement. It could be a bug, for example, or a documentation issue with your README, a missing LICENSE file, or an enhancement. In GitHub, enter the details of the issue and select Submit new issue. Add a label to your issue, if appropriate.

Time: 5 mins

Solution

For example, “Add a licence file” could be a good first issue, with the documentation label.

Issue (and Pull Request) Templates

GitHub also allows you to set up issue and pull request templates for your software project. Such templates provide a structure for the issue/pull request descriptions, and/or prompt issue reporters and collaborators to fill in answers to pre-set questions. They can help contributors raise issues or submit pull requests in a way that is clear, helpful and provides enough information for maintainers to act upon (without going back and forth to extract it). GitHub provides a range of default templates, but you can also write your own.

Using GitHub’s Notifications & Referencing System to Communicate

GitHub implements a comprehensive notifications system to keep the team up-to-date with activities in your code repository and notify you when something happens or changes in your software project. You can choose whether to watch or unwatch an individual repository, or can choose to only be notified of certain event types such as updates to issues, pull requests, direct mentions, etc. GitHub also provides an additional useful notification feature for collaborative work - Mentions. In addition to referencing team members (which will result in an appropriate notification), GitHub allows us to reference issues, pull requests and comments from one another - providing a useful way of connecting things and conversations in your project.

Referencing Team Members Using Mentions

The mention system notifies team members when somebody else references them in an issue, comment or pull request - you can use this to notify people when you want to check a detail with them, or let them know something has been fixed or changed (much easier than writing out all the same information again in an email).

You can use the mention system to link to/notify an individual GitHub account or a whole team for notifying multiple people. Typing @ in GitHub will bring up a list of all accounts and teams linked to the repository that can be “mentioned”. People will then receive notifications based on their preferred notification methods - e.g. via email or GitHub’s User Interface.

Referencing Issues, Pull Requests and Comments

GitHub also lets you mention/reference one issue or pull request from another (and people “watching” these will be notified of any such updates). Whilst writing the description of an issue, or commenting on one, if you type # you should see a list of the issues and pull requests on the repository. They are coloured green if they’re open, or white if they’re closed. Continue typing the issue number, and the list will narrow down, then you can hit Return to select the entry and link the two. For example, if you realise that several of your bugs have common roots, or that one enhancement can’t be implemented before you’ve finished another, you can use the mention system to indicate the depending issue(s). This is a simple way to add much more information to your issues.

While not strictly notifying anyone, GitHub lets you also reference individual comments and commits. If you click the ... button on a comment, from the drop down list you can select to Copy link (which is a URL that points to that comment that can be pasted elsewhere) or to Reference [a comment] in a new issue (which opens a new issue and references the comment by its URL). Within a text box for comments, issue and pull request descriptions, you can reference a commit by pasting its long, unique identifier (or its first few digits which uniquely identify it) and GitHub will render it nicely using the identifier’s short form and link to the commit in question.

Referencing comments and commits in GitHub

Exercise: Our First Mention/Reference!

Add a mention to one of your team members using the @ notation in a comment within an issue or a pull request in your repository - e.g. to ask them a question or a clarification on something or to do some additional work.

Alternatively, add another issue to your repository and reference the issue you created in the previous exercise using the # notation.

Time: 5 mins

You Are Also a User of Your Code

This section focuses a lot on how issues and mentions can help communicate the current state of the code to others and document what conversations were held around particular issues. As a sole developer, and possibly also the only user of the code, you might be tempted to not bother with recording issues, comments and new features as you don’t need to communicate the information to anyone else.

Unfortunately, human memory isn’t infallible! After spending six months on a different topic, it’s inevitable you’ll forget some of the plans you had and problems you faced. Not documenting these things can lead to you having to re-learn things you already put the effort into discovering before. Also, if others are brought on to the project at a later date, the software’s existing issues and potential new features are already in place to build upon.

Software Project Management in GitHub

Managing issues within your software project is one aspect of project management, but it gives a relatively flat representation of tasks and may not be as suitable for higher-level project management such as prioritising tasks for future development, planning sprints and releases. Luckily, GitHub provides two project management tools for this purpose - Projects and Milestones.

Both Projects and Milestones provide agile development and project management systems and ways of organising issues into smaller “sub-projects” (i.e. smaller than the “project” represented by the whole repository). Projects provide a way of visualising and organising work which is not time-bound and is on a higher level (e.g. more suitable for project management tasks). Milestones are typically used to organise lower-level tasks that have deadlines and whose progress needs to be closely tracked (e.g. release and version management). The main difference is that Milestones are a repository-level feature (i.e. they belong to and are managed from a single repository), whereas Projects are account-level and can manage tasks across many repositories under the same user or organisational account.

How you organise and partition your project work and which tool you want to use to track progress (if at all) is up to you and the size of your project. For example, you could create a project per milestone or have several milestones in a single project, or split milestones into shorter sprints. We will use Milestones soon to organise work on a mini sprint within our team - for now, we will have a brief look at Projects.

Projects

A Project uses a “project board” consisting of columns and cards to keep track of tasks (although GitHub now also provides a table view over a project’s tasks). You break down your project into smaller sub-projects, which in turn are split into tasks which you write on cards, then move the cards between columns that describe the status of each task. Cards are usually small, descriptive and self-contained tasks that build on each other. Breaking a project down into clearly-defined tasks makes it a lot easier to manage. GitHub project boards interact and integrate with the other features of the site such as issues and pull requests - cards can be added to track the progress of such tasks and automatically moved between columns based on their progress or status.

Projects are a Cross-Repository Management Tool

Projects in GitHub are created at the user or organisation level, i.e. they can span all repositories owned by a user or organisation in GitHub and are not a repository-level feature any more. A project can integrate your issues and pull requests on GitHub from multiple repositories to help you plan and track your team’s work effectively.

Let’s create a Project in GitHub to plan the first release of our code.

  1. From your GitHub account’s home page (not your repository’s home page!), select the “Projects” tab, then click the New project button on the right.

    Adding a new project board in GitHub

  2. In the “Select a template” pop-up window, select “Board” - this will give you a classic “cards on a board” view of the project. An alternative is the “Table” view, which presents a spreadsheet-like and slightly more condensed view of a project.

    Selecting a project board template in GitHub

  3. GitHub will create an unnamed project board for you. You should populate the name and the description of the project from the project’s Settings, which can be found by clicking the ... button in the top right corner of the board.

    Project board setting in GitHub

  4. We can, for example, use “Inflammation project - release v0.1” and “Tasks for the v0.1 release of the inflammation project” for the name and description of our project, respectively. Or you can use anything that suits your project.

    Naming a project in GitHub

  5. GitHub’s default card board template contains the following three columns with pretty self-explanatory names:

    • To Do
    • In Progress
    • Done

    Default card board in GitHub

    You can add or remove columns from your project board to suit your use case. One commonly seen extra column is On hold or Waiting - if you have tasks that get held up by waiting on other people (e.g. to respond to your questions) then moving them to a separate column makes their current state clearer.

    To add a new column, press the + button on the right; to remove a column select the ... button in the top right corner of the column itself and then the Delete column option.

  6. You can now add new items (cards) to columns by pressing the + Add item button at the bottom of each column - a text box to add a card will appear. Cards can be simple textual notes which you type into the text box, pressing Enter when finished. Cards can also be (links to) existing issues and pull requests, which can be selected from the text box by pressing # (to activate GitHub’s referencing mechanism) and choosing the repository and the issue or pull request from that repository that you want to add.

    Adding issues and notes to a project board in GitHub

    Notes contain task descriptions and can have detailed content like checklists. In some cases, e.g. if a note becomes too complex, you may want to convert it into an issue so you can add labels, assign it to team members or write more detailed comments (for that, use the Convert to issue option from the ... menu on the card itself).

    Converting a note to an issue

  7. You can now drag a card to the In Progress column to indicate that you are working on it, or to the Done column to indicate that it has been completed. Issues and pull requests on cards will automatically be moved to the Done column for you when you close the issue or merge the pull request - which is very convenient and can save you some project management time.

Exercise: Working With Projects

Spend a few minutes planning what you want to do with your project as a bigger chunk of work (you can continue working on the first release of your software if you like) and play around with your project board to manage tasks around the project:

  • practice adding and removing columns,
  • practice adding different types of cards (notes and from already existing open issues and/or unmerged pull requests),
  • practice turning cards into issues and closing issues, etc.

Make sure you have added enough issues to your repository to be able to use them in your project board.

Time: 10 mins

Prioritisation With Project Boards

Once your project board has a large number of cards on it, you might want to begin prioritising them. Not all tasks are going to be equally important, and some will require others to be completed before they can even be started. Common methods of prioritisation include:

  • Vertical position: the vertical arrangement of cards in a column implicitly represents their importance. High-priority issues go to the top of To Do, whilst tasks that depend on others go beneath them. This is the easiest one to implement, though you have to remember to correctly place cards when you add them.
  • Priority columns: instead of a single To Do column, you can have two or more, for example - To Do: Low Priority and To Do: High Priority. When adding a card, you pick which is the appropriate column for it. You can even add a Triage column for newly-added issues that you’ve not yet had time to classify. This format works well for project boards devoted to bugs.
  • Labels: if you convert each card into an issue, then you can label them with their priority - remember GitHub lets you create custom labels and set their colours. Label colours can provide a very visually clear indication of issue priority but require more administrative work on the project, as each card has to be an issue to be assigned a label. If you choose this route for issue prioritisation - be aware of accessibility issues for colour-blind people when picking colours.

Key Points

  • We should use GitHub’s Issues to keep track of software problems and other requests for change - even if we are the only developer and user.

  • GitHub’s Mentions play an important part in communicating between collaborators and are used as a way of alerting team members of activities and referencing one issue/pull request/comment/commit from another.

  • Without a good project and issue management framework, it can be hard to keep track of what’s done, or what needs doing, and particularly difficult to convey that to others in the team or share the responsibilities.


Assessing Software for Suitability and Improvement

Overview

Teaching: 15 min
Exercises: 30 min
Questions
  • What makes good code actually good?

  • What should we look for when selecting software to reuse?

Objectives
  • Explain why a critical mindset is important when selecting software

  • Conduct an assessment of software against suitability criteria

  • Describe what should be included in software issue reports and register them

Introduction

What we’ve been looking at so far enables us to adopt a more proactive and managed attitude and approach to the software we develop. But we should also adopt this attitude when selecting and making use of third-party software we wish to use. With pressing deadlines it’s very easy to reach for a piece of software that appears to do what you want without considering properly whether it’s a good fit for your project first. A chain is only as strong as its weakest link, and our software may inherit weaknesses in any dependent software or create other problems.

Overall, when adopting software to use it’s important to consider not only whether it has the functionality you want, but a broader range of qualities that are important for your project. Adopting a critical mindset when assessing other software against suitability criteria will help you adopt the same attitude when assessing your own software for future improvements.

Assessing Software for Suitability

Exercise: Decide on Your Group’s Repository!

You all have the code repositories you have been working on throughout the course so far. For the upcoming exercise, groups will exchange repositories, review the code of the repository they inherit, and provide feedback.

Time: 5 mins

  1. Decide as a team on one of your repositories that will represent your group. You can do this any way you wish, but if you are having trouble then a pseudo-random number might help: python -c "import numpy as np; print(np.random.randint(low=1, high=<size_group_plus_1>))"
  2. Add the URL of the repository to the section of the shared notes labelled ‘Decide on your Group’s Repository’, next to your team’s name.

Exercise: Conduct Assessment on Third-Party Software

The scenario: It is envisaged that a piece of software developed by another team will be adopted and used for the long term in a number of future projects. You have been tasked with conducting an assessment of this software to identify any issues that need resolving prior to working with it, and will provide feedback to the developing team to fix these issues.

Time: 20 mins

  1. As a team, briefly decide who will assess which aspect of the repository, e.g. its documentation, tests, codebase, etc.
  2. Obtain the URL for the repository you will assess from the shared notes document, in the section labelled ‘Decide on your Group’s Repository’ - see the last column which indicates which team’s repository you are assessing.
  3. Conduct the assessment and register any issues you find on the other team’s software repository on GitHub.
  4. Be meticulous in your assessment and register as many issues as you can!

Supporting Your Software - How and How Much?

Within your collaborations and projects, what should you do to support other users? Here are some key aspects to consider:

  • Provide contact information: so users know what to do and how to get in contact if they run into problems
  • Manage your support: an issue tracker - like the one in GitHub - is essential to track and manage issues
  • Manage expectations: let users know the level of support you offer, in terms of when they can expect responses to queries, the scope of support (e.g. which platforms, types of releases, etc.), the types of support (e.g. bug resolution, helping develop tailored solutions), and expectations for support in the future (e.g. when project funding runs out)

All of this requires effort, and you can’t do everything. It’s therefore important to agree and be clear on how the software will be supported from the outset, whether it’s within the context of a single laboratory, project, or other collaboration, or across an entire community.

Key Points

  • It’s as important to have a critical attitude to adopting software as to developing it.

  • As a team, agree on who will support the software you make available to others, and to what extent.


Software Improvement Through Feedback

Overview

Teaching: 5 min
Exercises: 45 min
Questions
  • How should we handle feedback on our software?

  • How, and to what extent, should we provide support to our users?

Objectives
  • Prioritise and work on externally registered issues

  • Respond to submitted issue reports and provide feedback

  • Explain the importance of software support and choosing a suitable level of support

Introduction

When a software project has been around for even just a short amount of time, you’ll likely discover many aspects that can be improved. These can come from issues registered by collaborators or users, but also those you’re aware of internally, which should also be registered as issues. When starting a new software project, you’ll also have to determine how you’ll handle all the requirements. But which ones should you work on first, which are the most important and why, and how should you organise all this work?

Software has a fundamental role to play in doing science, but unfortunately software development is often given short shrift in academia when it comes to prioritising effort. There are also many other draws on our time in addition to the research, development, and writing of publications that we do, which makes it all the more important to prioritise our time for development effectively.

In this lesson we’ll be looking at prioritising work we need to do and what we can use from the agile perspective of project management to help us do this in our software projects.

Estimation as a Foundation for Prioritisation

For simplicity, we’ll refer to our issues as requirements, since that’s essentially what they are - new requirements for our software to fulfil.

But before we can prioritise our requirements, there are some things we need to find out.

Firstly, we need to know:

We also need estimates for how long each requirement will take to resolve, since we cannot meaningfully prioritise requirements without knowing what the effort tradeoffs will be. Even if we know how important each requirement is, how would we even know if completing the project is possible? Or if we don’t know how long it will take to deliver those requirements we deem to be critical to the success of a project, how can we know if we can include other less important ones?

It is often not the reality, but estimation should ideally be done by the people likely to do the actual work (i.e. the Research Software Engineers, researchers, or developers). It shouldn’t be done by project managers or PIs simply because they are not best placed to estimate, and those doing the work are the ones who are effectively committing to these figures.

Why is it so Difficult to Estimate?

Estimation is a very valuable skill to learn, and one that is often difficult. Lack of experience in estimation can play a part, but a number of psychological causes can also contribute. One of these is the Dunning-Kruger effect, a type of cognitive bias in which people tend to overestimate their abilities, while in opposition to this is imposter syndrome, where due to a lack of confidence people underestimate their abilities. The key message here is to be honest about what you can do, and to find out as much information as is reasonably appropriate before arriving at an estimate.

More experience in estimation will also help to reduce these effects. So keep estimating!

An effective way of helping to make your estimates more accurate is to do it as a team. Other members can ask prudent questions that may not have been considered, and bring in other sanity checks and their own development experience. Just talking things through can help uncover other complexities and pitfalls, and raise crucial questions to clarify ambiguities.

Where to Record Effort Estimates?

GitHub’s current interface has no dedicated place to record an effort estimate on an issue, so agree on a convention within your team for how to record this information - e.g. adding the effort in person-days to the issue title. Recording estimates within comments on an issue is not ideal, as they can get lost among other comments. Another place to record estimates is on project boards - there is no default field for this, but you can create a custom numeric field and use it to assign effort estimates (note that GitHub’s current interface cannot yet sum them for you).
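
If your team adopts a title convention such as “[2d] Add regression tests” for recording estimates, a short script can total them up for you. The sketch below is a minimal illustration using Python’s requests library and GitHub’s REST API; the repository name and the “[<n>d]” title convention are assumptions made for this example, not anything GitHub prescribes.

    # A minimal sketch: sum effort estimates recorded in issue titles using a
    # hypothetical "[<n>d]" convention, e.g. "[2d] Add regression tests".
    import re
    import requests

    REPO = "your-org/your-repo"  # hypothetical repository
    issues = requests.get(
        f"https://api.github.com/repos/{REPO}/issues", params={"state": "open"}
    ).json()

    total_days = 0.0
    for issue in issues:
        if "pull_request" in issue:  # this endpoint also returns pull requests
            continue
        match = re.match(r"\[(\d+(?:\.\d+)?)d\]", issue["title"])
        if match:
            days = float(match.group(1))
            total_days += days
            print(f"#{issue['number']:<5} {days:>4} days  {issue['title']}")

    print(f"Total estimated effort: {total_days} person-days")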

Exercise: Estimate!

As a team, go through the issues that your partner team has registered on your software repository and quickly estimate, in minutes, how long each issue will take to resolve. Estimate by blind consensus first, each anonymously submitting an estimate, then briefly discuss your rationale and decide on a final figure. Make sure these are honest estimates - and that you are able to complete them in the allotted time!

Time: 15 mins

Using MoSCoW to Prioritise Work

Now that we have our estimates, we can decide how important each requirement is to the success of the project. This should be decided by the project stakeholders - those (or their representatives) who have a stake in the project’s success and either affect or are affected by it, e.g. Principal Investigators, researchers, Research Software Engineers, collaborators, etc.

To prioritise these requirements we can use a method called MoSCoW, a way to reach a common understanding with stakeholders on the importance of successfully delivering each requirement for a timebox. MoSCoW is an acronym that stands for Must have, Should have, Could have, and Won’t have. Each requirement is discussed by the stakeholder group and falls into one of these categories:

  • Must Haves (MH) - requirements critical to the success of the timebox; if any of these are not delivered, the timebox is considered a failure.

  • Should Haves (SH) - important but not vital; leaving them out may be painful, but the solution is still viable without them.

  • Could Haves (CH) - desirable but less important; leaving them out has a smaller impact than leaving out a Should Have.

  • Won’t Haves - requirements agreed as out of scope for this timebox, although they may be revisited in a future one.

In typical use, the ratio of requirements to aim for across the Must Have/Should Have/Could Have categories is 60%/20%/20% for a particular timebox. Importantly, the split is by estimated effort, not by the number of requirements, so 60% means that Must Haves account for 60% of the overall estimated effort.

Why is this important? Because it gives you a degree of control over your project in each time period: 40% of the effort remains flexible, to be allocated depending on what turns out to be critical and how things progress. This forces an explicit tradeoff between the effort available and the critical objectives, maintaining a significant safety margin. The idea is that, even if it becomes clear during a time period that you can only deliver the Must Haves, you have still delivered that timebox successfully.
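
As a concrete example, if the estimates for a timebox add up to 20 person-days, you would aim for roughly 12 days of Must Haves, 4 days of Should Haves and 4 days of Could Haves. The short sketch below, using made-up issues, estimates and MoSCoW labels, checks how a proposed split compares with the 60%/20%/20% guideline.

    # A sketch with made-up issues, estimates (in person-days) and MoSCoW
    # labels, checking the split of estimated effort against 60%/20%/20%.
    issues = [
        ("Fix crash when input file is empty",  "MH", 2.0),
        ("Add regression tests for the models", "MH", 3.0),
        ("Document the installation steps",     "SH", 1.5),
        ("Add CSV export",                      "CH", 1.5),
    ]

    total = sum(days for _, _, days in issues)
    for category, target in [("MH", 60), ("SH", 20), ("CH", 20)]:
        effort = sum(days for _, label, days in issues if label == category)
        print(f"{category}: {effort:>4} days "
              f"({100 * effort / total:.0f}% of effort, target ~{target}%)")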

GitHub’s Milestones

Once we’ve decided which requirements we’ll work on (i.e. everything except the Won’t Haves), we can (optionally) use a GitHub Milestone to organise them for a particular timebox. Remember, a milestone is a collection of issues to be worked on in a given period (or timebox). We can create one by selecting Issues on our repository, then Milestones to display any existing milestones, then clicking the “New milestone” button to the right.

Milestones in GitHub

Create a milestone in GitHub

We add in a title, a completion date (i.e. the end of this timebox), and any description for the milestone.

Create a milestone in GitHub

Once created, we can view our issues and assign them to our milestone from the Issues page or from an individual issue page.

Milestones in GitHub
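
If you prefer to script these steps, they can also be carried out via GitHub’s REST API. The sketch below is a minimal illustration: it assumes a personal access token is available in the GITHUB_TOKEN environment variable, and the repository name, milestone title, due date and issue number are placeholders for this example.

    # A minimal sketch: create a milestone and assign an existing issue to it
    # using GitHub's REST API. Repository, title, due date and issue number
    # are placeholders; a personal access token is read from GITHUB_TOKEN.
    import os
    import requests

    REPO = "your-org/your-repo"  # hypothetical repository
    headers = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

    # Create the milestone (equivalent to the "New milestone" button).
    milestone = requests.post(
        f"https://api.github.com/repos/{REPO}/milestones",
        headers=headers,
        json={"title": "version 0.1",
              "description": "First mini-sprint",
              "due_on": "2024-06-30T23:59:59Z"},
    ).json()

    # Assign issue #1 to the new milestone (equivalent to editing the issue).
    requests.patch(
        f"https://api.github.com/repos/{REPO}/issues/1",
        headers=headers,
        json={"milestone": milestone["number"]},
    )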

Let’s now use Milestones to plan and prioritise our team’s next sprint.

Exercise: Prioritise!

Put your stakeholder hats on, and as a team apply MoSCoW to the repository issues to determine how you will prioritise effort to resolve them in the allotted time. Try to stick to the 60/20/20 rule, and assign all issues you’ll be working on (i.e. not Won't Haves) to a new milestone, e.g. “Tidy up documentation” or “version 0.1”.

Time: 10 mins

Using Sprints to Organise and Work on Issues

A sprint is an activity applied to a timebox, where development is undertaken on the agreed, prioritised work for the period. In a typical sprint there are daily meetings, called scrum meetings, which check how work is progressing and serve to highlight any blockers and challenges to meeting the sprint goal.

Exercise: Conduct a Mini Mini-Sprint

For the remaining time in this course, assign repository issues to team members and work on resolving them as per your MoSCoW breakdown. Once an issue has been resolved, notable progress has been made, or an impasse has been reached, provide concise feedback on the repository issue. Be sure to add the other team members to the chosen repository so they have access to it. You can grant Write access to others via the Settings tab of the repository, then Collaborators, where you can invite other GitHub users with specific permissions (a scripted alternative is sketched after this exercise).

Time: however long is left
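
Inviting collaborators can also be scripted if you prefer. The sketch below uses GitHub’s REST API to invite a user with Write (push) access; the repository name and username are placeholders, and a personal access token is assumed to be available in the GITHUB_TOKEN environment variable.

    # A minimal sketch: invite a collaborator with Write (push) access using
    # GitHub's REST API. Repository and username are placeholders; a personal
    # access token is read from the GITHUB_TOKEN environment variable.
    import os
    import requests

    REPO = "your-org/your-repo"    # hypothetical repository
    USERNAME = "fellow-learner"    # hypothetical GitHub username

    response = requests.put(
        f"https://api.github.com/repos/{REPO}/collaborators/{USERNAME}",
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={"permission": "push"},  # "push" corresponds to Write access
    )
    print(response.status_code)  # 201: invitation created, 204: already a collaborator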

Depending on how many issues were registered on your repository, it’s likely you won’t have resolved them all in this first milestone. Of course, in reality a sprint would run over a much longer period of time. In any event, as development progresses into future sprints, any unresolved issues can be reconsidered, prioritised for another milestone, taken forward, and so on. This process of receiving new requirements, prioritising them, and working on them is naturally continuous. The benefit is that at key stages you repeatedly re-evaluate what is important and needs to be worked on, which helps to ensure real, concrete progress against project goals and requirements that may change over time.

Project Boards For Planning Sprints

Remember, you can use project boards for higher-level project management - e.g. planning several sprints in advance (and use milestones for tracking progress on individual sprints).

Key Points

  • Prioritisation is a key tool in academia where research goals can change and software development is often given short shrift.

  • In order to prioritise things to do we must first estimate the effort required to do them.

  • Effort estimation is most accurate when it is done by the people who will actually do the work.

  • Aim to reduce cognitive biases in effort estimation by being honest about your abilities.

  • Ask other team members - or do estimation as a team - to help make accurate estimates.

  • MoSCoW is a useful technique for prioritising work to help ensure projects deliver successfully.

  • Aim for a 60%/20%/20% ratio of Must Haves/Should Haves/Could Haves for requirements within a timebox.


Wrap-up

Overview

Teaching: 15 min
Exercises: 0 min
Questions
  • Looking back at what was covered and how different pieces fit together

  • Where are some advanced topics and further reading available?

Objectives
  • Put the course in context with future learning.

Summary

As part of this course we have looked at a core set of established, intermediate-level software development tools and best practices for working as part of a team. The course teaches a selected subset of skills that have been tried and tested in collaborative research software development environments - not an all-encompassing set of every skill you might need (see the Further Resources section below). It will provide you with a solid basis for writing industry-grade code, which relies on the same best practices taught in this course.

Reflection Exercise: Putting the Pieces Together

As a group, reflect on the concepts (e.g. tools, techniques and practices) covered throughout the course: how they relate to one another, how they fit together into a bigger picture or into skill-learning pathways, and in which order you need to learn them.

Solution

One way to think about these concepts is to make a list and organise them along two axes - ‘perceived usefulness of a concept’ versus ‘perceived difficulty or time needed to master a concept’ - as shown in the table below (you can make your own copy of the template table for this exercise). You can then think about the order in which you want to learn the skills and how much effort they require - e.g. start with those that are most useful and, for the time being, hold off on those that are not very useful to you and take a long time to master. You will likely want to focus on the concepts in the top right corner of the table first, but investing time to master more difficult concepts may pay off in the long run, saving you time and effort and helping to reduce technical debt.

Usefulness versus time to master grid

Another way to organise the concepts is to use a concept map (a directed graph depicting suggested relationships between concepts) or any other diagram or visual aid of your choice. Below are some example views of the tools and techniques covered in the course using concept maps. Your views may differ, but that is not to say that one view is right and another wrong. This exercise is meant to get you to reflect on what was covered in the course and, hopefully, to reinforce the ideas and concepts you learned.

Overview of tools and techniques covered in the course

A different concept map organises the concepts/skills by level of difficulty (novice, intermediate, advanced - and in between!) and shows which skills are prerequisites for others and in which order you might consider learning them.

Overview of topics covered in the course based on level of difficulty

Further Resources

Below are some additional resources to help you continue learning:

Key Points

  • Collaborative techniques and tools play an important part in research software development in teams.