

Week 4 – When Tests Accumulate

Due This Week

To Watch/Read

Notes from Class

  • 1pm Monday
  • 2pm Monday
  • 3pm Monday
  • 1pm Wed
  • 2pm Wed
  • 3pm Wed

Lab Tasks

As usual, these tasks might change a little bit before the start of lab, but the general instructions and tools are likely to be similar.

Clone (or pull, if it’s your repository) the repository with your group’s code from last lab. You should have the original provided test file test-file.md, and three other test files that you wrote as part of the last lab. If your group doesn’t have this many test files, create one more, then commit and push it.
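If you need the exact commands, a sketch like the following should work (the URL is a placeholder for your group's repository, and the new file name is just an example):

git clone https://github.com/your-group/markdown-parse.git

or, if you already have a copy:

cd markdown-parse
git pull

and, if you need to add one more test file:

git add test-file4.md
git commit -m "Add a fourth test file"
git push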

Meta-comment: One thing that is common in programming but uncommon in many programming courses is revisiting the same program for many weeks in a row. We are going to work with this markdown parser for many weeks. It will help us understand how code evolves over time and how a repository helps us track that evolution.

Your Memory

Run the program on one of the examples you wrote last week. Is the output correct? How do you know?

Write in notes: What process did your team go through to justify that the output was correct? Did you remember what it was supposed to be, or did you have to open the file to verify?

Running via the Command Line

Have someone share their screen with VS Code open on the markdown parser project and a terminal open. They should make a small edit to the program (like adding a print statement in main).

Start a timer, then have them recompile the program and run the program on all of the test files (there should be 4 total), then stop the timer.
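For reference, the sequence will look roughly like this. This is a sketch: it assumes the main class is called MarkdownParse and that your extra test files are named test-file2.md through test-file4.md; substitute your group's actual file names.

javac MarkdownParse.java
java MarkdownParse test-file.md
java MarkdownParse test-file2.md
java MarkdownParse test-file3.md
java MarkdownParse test-file4.md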

Write in notes: How long did this take?

Note that this time doesn't include any time you'd spend reading the output to check that it's correct, since we're fairly sure things are working from our process in lab 3. Even so, that's a lot of time to spend on each change just to check that we haven't broken the behavior for an existing test!

Testing Tools

These two issues—remembering what the correct result ought to be, and the work involved in re-running tests one-by-one—motivate using automated testing tools. There are lots of choices we could make here. We’re going to start by using JUnit not least because it is representative of many similar tools, and sees widespread use in large software projects.

For this part of the lab, you’ll install JUnit and use it to write a test program that solves the problems of having to remember what “correct output” is and taking a lot of manual effort to re-run many tests.

Setting up JUnit

Download these two .jar files (their names match the classpath entries in the commands below; both are published on Maven Central):

  • junit-4.13.2.jar
  • hamcrest-core-1.3.jar

Then, make a directory called lib in your project, and copy both of those files to that directory. Commit and push the files once they are added to lib (this is a useful step because it ensures that you see them in your repository!)
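The git steps for committing the jars are the usual ones; a sketch, run from the root of your repository after copying the files into lib:

git add lib/junit-4.13.2.jar lib/hamcrest-core-1.3.jar
git commit -m "Add JUnit and Hamcrest jars"
git push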

Next, create a file called MarkdownParseTest.java in your repository. Put the following code in the file:

import static org.junit.Assert.*;
import org.junit.*;

public class MarkdownParseTest {
    // A trivial test to confirm that JUnit itself is set up correctly.
    @Test
    public void addition() {
        assertEquals(2, 1 + 1);
    }
}

Then, compile and run this code at the command line using these two commands (on Windows, use ; instead of : to separate the classpath entries):

javac -cp .:lib/junit-4.13.2.jar:lib/hamcrest-core-1.3.jar MarkdownParseTest.java

java -cp .:lib/junit-4.13.2.jar:lib/hamcrest-core-1.3.jar org.junit.runner.JUnitCore MarkdownParseTest
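If everything is set up correctly, the output of the second command should look roughly like this (the reported time will differ on your machine):

JUnit version 4.13.2
.
Time: 0.005

OK (1 test)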

Write in notes: Copy the output of these commands, including any errors if you didn’t get them right the first time.

Write in notes: Copy the test file code into the notes. On each line, describe what you think that line is doing. If you aren’t sure, write it down as a question.

Post on Piazza: Take all the open questions you have that you couldn’t answer with your group and tutor, and post them as public questions on Piazza. Sign the post with your team animal name.

Commit to GitHub: Once you get this command working, add the test file, then commit and push it to GitHub. For your commit message, put the commands you used to run the tests so you’ll have a record of what worked later.

Writing our Tests with JUnit

The example above, taken from a JUnit tutorial, just checks that 1 + 1 equals 2. Next, we should add a test that actually exercises our code.

Your task is to add another test method to MarkdownParseTest that:

  • Calls getLinks on the contents of test-file.md
  • Checks that the result is equal to a list containing https://something.com and some-page.html

Hints: Remember that Files.readString and Path.of are useful for reading the contents of files, and may require adding a throws clause to your test method. You can use List.of("a", "b", "c") to easily create a List of strings. A sketch of one possible test method is shown below.
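Here is a sketch of what such a test could look like. It assumes getLinks is a static method on MarkdownParse that takes the file contents as a String and returns the links as a list of strings in the order they appear; adjust it to match your actual method signature.

import static org.junit.Assert.*;
import org.junit.*;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class MarkdownParseTest {
    // A trivial test to confirm that JUnit itself is set up correctly.
    @Test
    public void addition() {
        assertEquals(2, 1 + 1);
    }

    @Test
    public void testGetLinksOnProvidedFile() throws IOException {
        // Read the provided test file into a String...
        String contents = Files.readString(Path.of("test-file.md"));
        // ...and check that getLinks finds exactly these two links, in this order.
        assertEquals(List.of("https://something.com", "some-page.html"),
                     MarkdownParse.getLinks(contents));
    }
}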

Write in notes: Put at least one error you encountered during this process into the doc as a screenshot.

When you have the test successfully passing, add and commit the changes, then push to GitHub.

Then, add separate test methods that test getLinks on each of the test files you’ve written. Make sure all the tests pass, and commit and push to GitHub.

The Benefits of Automated Tests

Have someone ready at the terminal in VS Code with the project open. Start a timer. Have them run the commands to compile and run the tests. Stop the timer.

Write in notes: How much quicker was it to run the JUnit tests than to run the test for each file individually?

More Tests, and Symptoms vs. Bugs

Consider the test files provided in the markdown-parse repo for this lab. Your task is to answer/do the following:

  1. Which test files are failure-inducing inputs for your version of MarkdownParse?
  2. Do any pairs of test files demonstrate the same symptom for your version?
  3. Of those pairs of test files, do any have the same symptoms due to the same underlying bug, and which have different bugs?
  4. Fix the bugs that make the failure-inducing inputs fail!

Write in notes: the output of running each of the test files, and indicate which outputs are incorrect based on the expected output for this program (hint: you can make them into JUnit tests, or just run them at the command line). Those are the failure-inducing inputs. Start your report with a section containing these outputs and descriptions.

Then write in your notes which pairs of test files produce exactly the same output/behavior. Those have the same symptoms. Organize this section as a bulleted list, with each bullet showing one symptom and listing which test files showed that symptom. Depending on how you’ve done your work, you may have no pairs with the same symptoms; say so if that’s true.

Then use whatever process you like to help identify the bugs that caused these failures in your implementation. We recommend adding a JUnit test for each one so you can easily run them all at once.
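For example, a test for one of the provided files might look like the method below, added inside MarkdownParseTest next to the earlier tests (the file name and expected link here are hypothetical placeholders; use the real provided file names and the links each file should produce):

    @Test
    public void testProvidedSnippet() throws IOException {
        // "snippet-test.md" and the expected list below are placeholders.
        String contents = Files.readString(Path.of("snippet-test.md"));
        assertEquals(List.of("expected-link.html"), MarkdownParse.getLinks(contents));
    }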

Each time you change the code and cause one or more tests to pass that didn’t pass before, make a commit and indicate in the commit message:

  • The failure-inducing input(s) that are fixed
  • The symptom they were showing
  • The bug that caused the symptom (i.e., what you fixed in the code)

Try to make your implementation work for all the provided tests.

Week 4 Lab Report

Create another page in your lab report repository, like you did for lab report 1, and write your report there.

Pick three code changes that your group made in labs 3 and 4 in order to fix a bug; these should be stored as commits in someone’s repository. If you haven’t already, fork that repository so you have your own copy with all the work your group did.

For each of the three code changes:

  • Show a screenshot of the code change diff from GitHub (the diff page for the commit)
  • Link to the test file for a failure-inducing input that prompted you to make that change
  • Show the symptom of that failure-inducing input: the output of running the parser on that file at the command line, using the version of the code where it was failing (this should also be in the commit message history)
  • Write 2-3 sentences describing the relationship between the bug, the symptom, and the failure-inducing input.

You will submit this to the week 4 lab report assignment on Gradescope, which will have a similar process to the first lab report for grading.