Aftermath of Sprint 3

During Sprint 3, our team continued working on automated testing, but this sprint was different from the previous one because we moved beyond setup research into more in-depth writing and improvement of tests. In Sprint 2, much of the work focused on getting Vitest and Playwright working inside Docker and understanding why the testing environment was failing. In Sprint 3, I focused more on rebuilding my local setup, reinstalling necessary software, checking that the development environment was usable, and writing tests that could become part of the project workflow. This included fixing issues with VS Code, Docker, dependencies, and the project files so that the test tools could run more consistently.

https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfofrontend/-/tree/frontend_testing_release2026?ref_type=heads (since I had problems with my computer, I asked teammates to upload the work from their machines)

One thing that worked well during this sprint was that I made more progress with the Docker and software setup compared to the previous sprint. Instead of only researching testing tools, I spent time rebuilding the actual development environment so Vitest and Playwright could run more consistently. This included reinstalling software, checking dependencies, fixing container-related issues, and making sure VS Code, Docker, and the project packages were working together correctly. Even though this was frustrating, it helped me better understand how much the testing workflow depends on the environment around it. By working through these setup problems, I was able to create a cleaner base for future testing work and better understand how to troubleshoot similar problems later.

What did not work well was that environment problems still took too much time. Some tools worked differently depending on the computer or container setup, which made it difficult to know whether a problem came from the test file, the project configuration, Docker, or the local machine. Reinstalling software helped, but not knowing which piece of software was causing the problem made the process time-consuming. Another problem was that conflicts and setup differences made collaboration harder. When one person's setup works but another person's does not, it becomes difficult to create a shared testing workflow.

As a team, we could improve by creating clearer documentation for the testing setup. This documentation should include the exact commands to install dependencies, run Vitest, run Playwright, and troubleshoot common errors. It would also help to record which setup steps were already tried and which errors happened, so teammates do not repeat the same failed attempts. Another team improvement would be to agree on one consistent environment, especially when using Docker, so that everyone is testing under the same conditions.
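One way to get the consistent environment described above is to check a small Compose file into the repository so everyone runs the tests inside the same container. The sketch below is a hypothetical example, not the project's actual configuration; the service name, Node image tag, and commands are all assumptions.

```yaml
# Hypothetical docker-compose.yml so every teammate tests under identical conditions.
# The image tag, service name, and commands are assumptions, not the real project config.
services:
  frontend-tests:
    image: node:20-bookworm        # pin a single Node version for the whole team
    working_dir: /app
    volumes:
      - .:/app                     # mount the project source into the container
    command: sh -c "npm ci && npx vitest run"
```

With something like this in place, a command such as `docker compose run frontend-tests` would give each teammate the same Node version and a clean dependency install, which removes the "works on my machine" variable from debugging.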

Individually, I could improve by debugging in smaller steps instead of trying to fix everything at once. During this sprint, I sometimes had to reinstall or reset multiple things, which made it harder to isolate the exact cause of a problem. In the future, I should test one change at a time, write down the result, and commit smaller working changes more often. I also need to compare my local setup with my teammates’ setups earlier, because small differences in versions or installed dependencies can cause major testing problems.
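The "one change at a time, commit small working steps" habit can be sketched with plain Git. Everything below is illustrative: the scratch repository, file name, and commit messages are made up, but the pattern of committing each small working change separately is the point.

```shell
# Sketch of committing one small working change at a time (hypothetical scratch repo).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# First small change: add one test case, verify it, commit it on its own.
echo "test('adds', ...)" > math.test.ts
git add math.test.ts
git commit -q -m "test: add first Vitest case for math utils"

# Second small change: another case, another standalone commit.
echo "test('subtracts', ...)" >> math.test.ts
git add math.test.ts
git commit -q -m "test: add subtraction case"

git log --oneline   # two small commits, each easy to revert or bisect
```

Small commits like these make it possible to use `git bisect` or a simple revert to isolate exactly which change broke the environment, instead of reinstalling everything at once.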

The Apprenticeship Pattern I selected is "Breakable Toys." This pattern is about creating a safe space to experiment, make mistakes, and learn without damaging the main project. A breakable toy can be a small practice project, local environment, or separate setup where a developer can try tools and ideas freely.

I selected this pattern because it connects directly to my experience during Sprint 3. A lot of my work involved reinstalling software, rebuilding the environment, and testing different ways to make automated tests work. If I had built a separate, disposable setup, I could have experimented with Vitest, Playwright, and Docker changes without worrying as much about breaking the project environment. This pattern is relevant because testing setup work often requires experimentation, and not every attempt works the first time.

Overall, Sprint 3 showed me that writing tests is not only about knowing the syntax of Vitest or Playwright, but also about building a reliable environment where those tests can run consistently.
