CS472 - Dynamic Analysis
Labs
This individual assignment is due Sept 17th, 2024
In this Lab you will practice writing unit tests and analysing test coverage using two programming languages: Java and Python. You will also continue working with Git and GitHub facilities. You will make all your contributions for this Lab in the Team’s repository you created and used in the Git and GitHub Lab.
Dynamic Analysis
Dynamic analysis is “the analysis of the properties of a running software system” [Ball1999]. It is complementary to static analysis techniques. Some properties that cannot be studied through static analysis can be examined with dynamic analysis and vice versa. The applications of dynamic analysis techniques are very broad: program comprehension, system verification, resource profiling, test analysis, etc. In this session, we focus on one very important aspect of dynamic analysis: Testing.
“Tests: Your Life Insurance!” Tests are essential for software engineering activities. They can help you:
- to reveal unwanted side effects of changing the code
- to understand the inner workings of a system.
The presence of automated tests, however, does not by itself guarantee quality. Do the tests cover the whole system, or are some parts left untested? Which parts are covered, and to what extent? Hence, measuring test coverage is a useful, even necessary, way to assess the quality and usefulness of a test suite in the context of software engineering.
Materials & Tools Used for this Session
- IntelliJ IDE (you can use Eclipse at your discretion, but it may require some adaptations for the project we are using during the lab sessions)
- JPacman repository.
- JaCoCo is a Java code coverage library. It is available as an Eclipse plugin and as a Maven/Gradle plugin. Newer versions of IntelliJ already ship with JaCoCo support as part of the test coverage plugin.
- pytest – the most popular Python testing framework; it makes it easy to write small, readable tests and can scale to support complex functional testing for applications and libraries.
- flask – a Python web framework that lets you develop web applications easily.
Setup / Preparation
Start by getting acquainted with the JPacman source code. Download/clone the JPacman project from the Prof’s repository, open it in IntelliJ, and build it. JPacman uses Gradle as its build/dependency manager. Make sure you can build and run the project before making any source code modifications. Then look at the source code and try to understand its internal structure. The “docs/uml” folder contains two simplified UML diagrams.
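A note on building from the command line: the repository ships with the Gradle wrapper, so from the IntelliJ terminal you can typically build with ./gradlew build (and, as shown in Task 1, run the tests with ./gradlew test). The exact tasks available depend on the project’s build.gradle, so treat these as the usual Gradle defaults rather than a guarantee.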
Task 1 – JPacman Test Coverage
We will begin by using the IntelliJ IDE test coverage plugin.
The testing and coverage plugins should be enabled by default. If you are not sure, check under IntelliJ IDEA > Preferences > Plugins > Installed that the plugins called Code Coverage for Java, JUnit, and TestNG are enabled.
First, make sure that you can test your JPacman, by using the following command line in the IntelliJ IDE terminal:
./gradlew test
Note: Remember to set the project to point to the JDK version on which it was built. Look at External Libraries under the Project folder in the IntelliJ IDE to see the JDK version.
Now, right-click on the test folder (inside the src folder) and select the option “Run ‘Tests’ in jpacman.test with Coverage”. If that option is not available, select “Build Module jpacman.test” and, after the build, right-click again; the option “Run ‘Tests’ in jpacman.test with Coverage” should now be available.
Alternatively, you can right-click on the Gradle task test, inside Tasks -> verification shown in the Gradle plugin (by default a collapsed tab on the right side of your IntelliJ window), and select Run 'jpacman [test]' with Coverage. This Gradle task should produce the same coverage, so use whichever approach you prefer.
If everything executed without errors, you should see a new window showing the code coverage. Please try to remember this coverage (or take a screenshot to not depend on your memory).
Question:
- Is the coverage good enough?
Task 2 – Increasing Coverage on JPacman
For the second task, we will increase the coverage on JPacman. Doing that is very simple: we just need to write more tests. In this task, we are going to write one new test case. As you saw in Task 1, the coverage for several packages is zero percent.
Let’s create a simple unit test for a method. We will test the isAlive() method in class Player (package level). You should look at the DirectionTest class (folder test, package board) as a template for your test case. The hardest part is instantiating a Player object, as it requires other objects.
The PlayerFactory class is responsible for creating instances of Player, and the PlayerFactory constructor requires a PacManSprites (package sprites) object. Therefore, you need to instantiate a PacManSprites object, pass it to the constructor of PlayerFactory, and only then call the factory method to create a Player.
Create the package level in the test folder. Then, create the class PlayerTest inside this level package. Now you can write the test case for testing the method isAlive() from Player.
Here is an example of such a test class, but I strongly advise you to try it yourself first (it is a simple test, and the hardest part is just instantiating the objects).
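The sketch below is one possible shape for such a test class. It assumes JUnit 5 (mirror the imports you see in DirectionTest) and that PlayerFactory’s factory method is named createPacMan(); double-check those details against the actual source before relying on this sketch.

package nl.tudelft.jpacman.level;

import static org.junit.jupiter.api.Assertions.assertTrue;

import nl.tudelft.jpacman.sprite.PacManSprites;
import org.junit.jupiter.api.Test;

/**
 * Tests for the Player class.
 */
class PlayerTest {

    /**
     * A freshly created player should be alive.
     */
    @Test
    void newPlayerIsAlive() {
        // PlayerFactory needs a PacManSprites object, as explained above.
        PacManSprites sprites = new PacManSprites();
        PlayerFactory factory = new PlayerFactory(sprites);
        // createPacMan() is assumed to be the factory method name; check PlayerFactory for the exact name.
        Player player = factory.createPacMan();
        assertTrue(player.isAlive());
    }
}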
After adding the new test, build jpacman.test again and run it with coverage. If your test does not have any errors, you should see the IntelliJ window showing the code coverage. Leave this window with the coverage information open, as you may need it to answer the questions from the next task (or take a screenshot of it).
Task 2.1 - 15 points (5 points each)
Identify three or more methods in any Java classes and write unit tests for those methods.
Remember to take screenshots of the test coverage before and after creating the unit tests.
Since there are many methods in the project, I should not find most members of a given group attempting the same methods.
Discuss between the group mates what methods you will be writing unit tests for.
A simple Google sheet having two columns would help get the group organised.
| Names | Fully Qualified Method Name |
| --- | --- |
| John Businge | src/main/java/nl/tudelft/jpacman/game/GameFactory.createSinglePlayerGame |
| John Businge | src/main/java/nl/tudelft/jpacman/board/BoardFactory.createBoard |
Task 3 – JaCoCo Report on JPacman (10 points)
The Gradle build file provided in JPacman already has JaCoCo configured. Look at the folder build/reports/jacoco/test/html, right-click on the file index.html, and select “Open in Browser”. This is the coverage report from the JaCoCo tool. As you can see, JaCoCo shows not only line coverage but also branch coverage. Click on the level package, then on the Player class, and after that on any method. You will see the source code with color information indicating which branches are covered (or partially covered).
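If the build/reports/jacoco folder does not exist yet or looks stale, the report normally has to be (re)generated first. With the standard Gradle JaCoCo plugin this is typically done by running ./gradlew test jacocoTestReport from the terminal, but check JPacman’s build.gradle for the exact task name it configures.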
Questions:
- Are the coverage results from JaCoCo similar to the ones you got from IntelliJ in the last task? Why or why not?
- Did you find the JaCoCo source code visualization of uncovered branches helpful?
- Which visualization did you prefer, IntelliJ’s coverage window or JaCoCo’s report, and why?
Write a report for Task 2.1 and Task 3. Name the report <your-names>_unitTesting.pdf.
Remember to include the code snippets of your unit tests for Task 2.1 in your report.
Make sure that your report is descriptive enough for me to follow without looking at your project code.
Task 4 – Working with Python Test Coverage
In this task, you will practice improving your test coverage in Python. You will generate a test coverage report, interpret the report to determine which lines of code do not have test cases, and write test cases to cover those lines.
- Clone the git project Python Testing lab. Open the IDE, navigate to the directory test_coverage, and run the command pip install -r requirements.txt
- You will do all your editing work in the file tests/test_account.py.
- Before writing any code, you should always check that the test cases are passing. Otherwise, when they fail, you won’t know if you broke the code or if the code was broken before you started.
- Run pytest and produce a coverage report to identify the lines that are missing code coverage:
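The report below comes from running pytest with coverage reporting enabled. If the project does not already configure this for you (for example via a setup.cfg or pytest.ini addopts entry), a typical invocation with the pytest-cov plugin is pytest --cov=models --cov-report=term-missing; the term-missing report is what produces the Missing column shown here.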
Name Stmts Miss Cover Missing
--------------------------------------------------
models/__init__.py 7 0 100%
models/account.py 40 13 68% 26, 30, 34-35, 45-48, 52-54, 74-75
--------------------------------------------------
TOTAL 47 13 72%
----------------------------------------------------------------------
Ran 2 tests in 0.349s
- We are starting with 72% test coverage; the goal is to reach 100%! Let us look at the first missed line, line 26 in account.py, to see if we can write a test case for it. To increase the test coverage, we first investigate line 26 in models/account.py. This file is in the models package at the root of the repo. Look at the following code on lines 25 and 26.
def __repr__(self):
    return '<Account %r>' % self.name
Notice that this method is one of the magic methods that is called to represent the class when printing it out. We will add a new test case in test_account.py that calls the __repr__() method on an Account.
def test_repr():
    """Test the representation of an account"""
    account = Account()
    account.name = "Foo"
    assert str(account) == "<Account 'Foo'>"
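Note that the test asserts on str(account) rather than calling __repr__() directly: assuming Account does not define its own __str__, Python falls back to __repr__ when converting the object to a string, so the assertion still exercises the method we want to cover.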
- Run pytest again to ensure line 26 is now covered through testing and to determine the next line of code for which you should write a new test case:
Name Stmts Miss Cover Missing
--------------------------------------------------
models/__init__.py 7 0 100%
models/account.py 40 12 70% 30, 34-35, 45-48, 52-54, 74-75
--------------------------------------------------
TOTAL 47 12 74%
----------------------------------------------------------------------
Ran 3 tests in 0.387s
Note that the overall test coverage has increased from 72% to 74% and the new report does not list line 26 in the Missing column.
- Next, let us look at the next line of code listed as missing test cases, line 30. Examine this line in models/account.py to find out what that code is doing. We will look at the code of the entire function, lines 28 through 30, to see what it does.
def to_dict(self) -> dict:
    """Serializes the class as a dictionary"""
    return {c.name: getattr(self, c.name) for c in self.__table__.columns}
Notice that this code is the to_dict() method. Now, let us add a new test case in test_account.py that executes the to_dict() method on an Account, and thereafter run pytest again.
def test_to_dict():
    """ Test account to dict """
    rand = randrange(0, len(ACCOUNT_DATA))  # Generate a random index
    data = ACCOUNT_DATA[rand]  # get a random account
    account = Account(**data)
    result = account.to_dict()
    assert account.name == result["name"]
    assert account.email == result["email"]
    assert account.phone_number == result["phone_number"]
    assert account.disabled == result["disabled"]
    assert account.date_joined == result["date_joined"]
Name Stmts Miss Cover Missing
--------------------------------------------------
models/__init__.py 7 0 100%
models/account.py 40 11 72% 34-35, 45-48, 52-54, 74-75
--------------------------------------------------
TOTAL 47 11 77%
----------------------------------------------------------------------
Ran 4 tests in 0.368s
Note that the overall test coverage increased from 74% to 77%.
Your task - Getting coverage to 100% (20 points)
In this task, try to get the test coverage as close to 100% as possible. You will examine models/account.py lines 34-35, 45-48, 52-54, and 74-75 to find out what that code is doing, and then write test cases that cover them.
Add to your report of the previous tasks and include the code snippets for your test cases.
Task 5 - TDD
Test driven development (TDD) is an approach to software development in which you first write the test cases for the code you wish you had and then write the code to make the test cases pass. In this Task, you will write test cases based on the requirements given to you, and then you will write the code to make the test cases pass.
- Clone and use the repo (Python Testing lab). Navigate to the tdd folder. If you did not already install the requirements, run the command pip install -r requirements.txt
- Open the IDE and navigate to the directory tdd.
- status.py has some HTTP status codes that we will use when we are checking our error codes.
- pytest.ini is useful in case you have many files in the project and you are only interested in focusing on a specific directory or file you are testing, so that pytest only returns testing results for that file, e.g., --cov=counter.
- You will add test cases in test_counter.py. Currently, the file contains only a doc string listing the requirements and no code.
- You will be working with HTTP methods and REST guidelines; you can take a read here.
Creating a counter
You will start by implementing a test case for creating a counter. Following REST API guidelines, a create uses a POST request and returns code 201 CREATED if successful.
- Write the following code in the file test_counter.py and run pytest. You should see a ModuleNotFoundError.
import pytest
# we need to import the unit under test - counter
from src.counter import app
# we need to import the file that contains the status codes
from src import status
- Create a new file in the src directory called counter.py and run pytest again. You should see an ImportError because it cannot find app: ImportError: cannot import name 'app' from 'src.counter'
- Write the code below and run pytest again. The tests should run with no error.
from flask import Flask
app = Flask(__name__)
The output should be similar to the one below:
Name Stmts Miss Cover Missing
-----------------------------------------------
src/__init__.py 0 0 100%
src/counter.py 2 0 100%
src/status.py 6 0 100%
-----------------------------------------------
TOTAL 8 0 100%
- Let us write our first test case and run pytest again.
def test_create_a_counter():
    """It should create a counter"""
    client = app.test_client()
    result = client.post('/counters/foo')
    assert result.status_code == status.HTTP_201_CREATED
This time we get RED - AssertionError: 404 != 201. The framework is effectively telling us: “I didn’t find an endpoint called /counters, so I can’t possibly post to it.” That’s the next piece of code we need to go write.
- Let’s go to counter.py and create that endpoint. Import the status codes from the status module - from . import status - and add the code below:
COUNTERS = {}

# We will use the app decorator and create a route called slash counters.
# Specify the variable in the route as <name>.
# Let Flask know that the only method allowed to call
# this function is "POST".
@app.route('/counters/<name>', methods=['POST'])
def create_counter(name):
    """Create a counter"""
    app.logger.info(f"Request to create counter: {name}")
    global COUNTERS
    COUNTERS[name] = 0
    return {name: COUNTERS[name]}, status.HTTP_201_CREATED
Now we’ve implemented this first endpoint, which should make the test pass. When we run pytest again, we will have GREEN.
Duplicate names must return a conflict error code.
The second requirement is that if the name being created already exists, we return a 409 CONFLICT.
Since a lot of the code is going to be repeated, we will REFACTOR the repetitive code using the fixture feature of pytest.
- For this example, the line client = app.test_client() inside the test_create_a_counter test case will be needed by more than one test case, so let us REFACTOR it into a new function called client and decorate it with @pytest.fixture().
- Next, we will also create a class called TestCounterEndPoints to group all our counter-related tests, and move the first test inside the class declaration.
- For the last part of our refactoring, we need to make the client fixture automatically available to all the test methods within our class. This can be achieved by using the pytest usefixtures decorator at the class level: @pytest.mark.usefixtures("client"). The final code is shown below:
@pytest.fixture()
def client():
    return app.test_client()

@pytest.mark.usefixtures("client")
class TestCounterEndPoints:
    """Test cases for Counter-related endpoints"""

    def test_create_a_counter(self, client):
        """It should create a counter"""
        result = client.post('/counters/foo')
        assert result.status_code == status.HTTP_201_CREATED
- Now, let us write the test_duplicate_a_counter test as below. We create a counter called bar two times. The second time, we expect to get a HTTP_409_CONFLICT.
def test_duplicate_a_counter(self, client):
    """It should return an error for duplicates"""
    result = client.post('/counters/bar')
    assert result.status_code == status.HTTP_201_CREATED
    result = client.post('/counters/bar')
    assert result.status_code == status.HTTP_409_CONFLICT
When we run our test cases, we obtain the RED phase - AssertionError: 201 != 409. The endpoint happily created that counter a second time, which is very dangerous because it set it back to zero. If we update the counter to 1, 2, 3, 4, 5, and then create the same counter again, it is going to be reset to zero.
- Let us go REFACTOR counter.py and fix the problem. Before we create any counter, we have to check if it already exists. Copy and paste the code snippet below and place it right after the code line global COUNTERS.
if name in COUNTERS:
    return {"Message": f"Counter {name} already exists"}, status.HTTP_409_CONFLICT
When we run pytest again, we should get the GREEN phase.
Your task (15 points)
You will implement updating a counter by name, following the TDD workflow (write the test cases, then write the code to make the test cases pass). The test cases you will add go in test_counter.py, and the code you will add goes in counter.py. These are the only two files you will work with.
Following REST API guidelines, an update uses a PUT request and returns code 200 OK if successful. Create a counter and then update it.
You will implement the following requirements:
In test_counter.py, create a test called test_update_a_counter(self, client). It should implement the following steps:
- Make a call to Create a counter.
- Ensure that it returned a successful return code.
- Check the counter value as a baseline.
- Make a call to Update the counter that you just created.
- Ensure that it returned a successful return code.
- Check that the counter value is one more than the baseline you measured in step 3.
When you run pytest, you should be in the RED phase.
Next, in counter.py, create a function called update_counter(name). It should implement the following steps:
- Create a route for method PUT on endpoint /counters/<name>.
- Create a function to implement that route.
- Increment the counter by 1.
- Return the new counter and a 200 OK return code.
Next, you will write another test case to read a counter. Following REST API guidelines, a read uses a GET request and returns a 200 OK code if successful. Create a counter and then read it. Here, you should figure out the requirements for the test case as well as the code you will put in the unit under test.
Add to your report from the previous tasks and detail the steps (red/green/refactor phases) you followed to implement the requirements. Include in your report the code snippets you wrote at every step, as well as the exceptions you encountered while running pytest.
Make your report self-contained so that it is easy to follow without running your code.
Submitting the Assignment
- Put a link to your fork repository in the report.
- Create a folder on your local fork repository called jpacman.
- Create a branch on your local fork repository called jpacman_tests using the following command: git branch jpacman_tests.
- Run the command git checkout jpacman_tests.
- Copy your report – <your-names>_unitTesting.pdf – and paste it in the folder jpacman.
- Push the changes onto your remote fork repository.
- Open a pull request on the main branch of the Team repository and write an appropriate title and body.
- One of the repository maintainers should integrate your contribution into the main branch.
- For Tasks 4 & 5, only the report is required.
- You should also submit your report on Canvas.
This lab aims to evaluate your proficiency in both GitHub usage and software testing. Tasks 2 and 3 will assess both skills, while Tasks 4 and 5 will focus solely on evaluating your software testing abilities.
Importantly, for Tasks 4 and 5, there’s no requirement to commit your code to the team repository. The evaluation will be based on your software testing proficiency in the report submitted rather than GitHub usage. However, when submitting your report on Canvas, ensure it includes documentation for all tasks.