Posts

Technical Review in Software Testing

  The resource I selected is TestSigma’s article, “What is Technical Review in Software Testing?” The article explains that a technical review is a structured evaluation of software artifacts such as code, design documents, test plans, requirements, architecture, and test cases. Its main purpose is to find defects early in the software development life cycle, before they become more expensive and harder to fix. The article describes technical review as a static testing technique because it identifies problems without actually running the software. It also covers different types of reviews, including code reviews, design reviews, requirements reviews, test plan reviews, test case reviews, architecture reviews, and document reviews, and outlines a basic review process: planning, preparation, review meeting, approval, and documentation.

  I selected this resource because I have been work...
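To make the “static” part concrete, here is a hypothetical snippet (not from the article) showing the kind of defect a reviewer can catch just by reading the code, without ever executing it:

```typescript
// Hypothetical example of a defect caught by static review.
// Review comment: the loop condition uses `<=`, which reads one element
// past the end of the array; `prices[prices.length]` is undefined, so
// `total` silently becomes NaN at runtime.
function sumPricesBuggy(prices: number[]): number {
  let total = 0;
  for (let i = 0; i <= prices.length; i++) {
    total += prices[i];
  }
  return total;
}

// The corrected version a review would approve: `<` stops at the last index.
function sumPrices(prices: number[]): number {
  let total = 0;
  for (let i = 0; i < prices.length; i++) {
    total += prices[i];
  }
  return total;
}
```

No test run is needed to find this bug; a careful reader spots it during the preparation or review-meeting steps, which is exactly the point the article makes about reviews being cheaper than late-stage testing.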

More of Test Doubles

  The blog post I selected is “Understanding Test Doubles - Fakes, Stubs, Mocks, and Spies” by Sarah Dutkiewicz. I chose this resource because it expands on how we have used test doubles in class, with clearer code examples: it explains several kinds of test doubles and shows how each one supports more focused and maintainable tests. The article explains that “test doubles” is an umbrella term for objects that replace real dependencies during testing, and that not all doubles should be used in the same way. The post explains that fakes are simple working implementations made for testing, stubs return pre-programmed responses, mocks are used to verify that certain interactions happened, and spies record calls so the test can inspect them later. It goes on to use small C# examples with interfaces like IUserRepository and IExternalService to show when each type is appropriate and why choosing the right one matters. It also emphasizes that using the wro...
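The post’s examples are in C#, but the same four roles can be sketched in TypeScript. Everything below is my own illustration, not code from the article; the `UserRepository` interface and all names are assumptions chosen to mirror the post’s `IUserRepository` idea:

```typescript
// A dependency the code under test needs (illustrative, not from the post).
interface UserRepository {
  getUser(id: number): string | undefined;
}

// Fake: a simple but genuinely working implementation (in-memory storage).
class FakeUserRepository implements UserRepository {
  private users = new Map<number, string>([[1, "Alice"]]);
  getUser(id: number): string | undefined {
    return this.users.get(id);
  }
}

// Stub: returns a pre-programmed response regardless of input.
class StubUserRepository implements UserRepository {
  getUser(_id: number): string | undefined {
    return "StubbedUser";
  }
}

// Mock: verifies that an expected interaction actually happened.
class MockUserRepository implements UserRepository {
  private called = false;
  getUser(_id: number): string | undefined {
    this.called = true;
    return "MockedUser";
  }
  verifyGetUserWasCalled(): boolean {
    return this.called;
  }
}

// Spy: records every call so the test can inspect them afterwards.
class SpyUserRepository implements UserRepository {
  calls: number[] = [];
  getUser(id: number): string | undefined {
    this.calls.push(id);
    return "SpiedUser";
  }
}

// The code under test, which depends only on the interface.
function greetUser(repo: UserRepository, id: number): string {
  const name = repo.getUser(id);
  return name ? `Hello, ${name}!` : "User not found";
}
```

A test using the fake checks behavior (`greetUser(new FakeUserRepository(), 1)` yields a real greeting), while a test using the spy or mock checks the interaction (that `getUser` was called, and with what arguments), which matches the article’s point that each double answers a different question.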

After the second sprint

  During this sprint, our team focused on setting up automated testing tools within our development environment, working with Vitest and Playwright. Unlike the previous sprint, where we mainly researched tools, this time we attempted to implement them into our actual project workflow using Docker. A large portion of the work involved configuring the environment, installing dependencies, and troubleshooting issues that prevented the tools from running correctly. Although there are few successful GitLab commits due to setup failures, the effort was focused on building the foundation for testing integration. However, we were able to get a working version that can be used as a base for every test. First half-working Git commit: https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/guestinfosystem/guestinfofrontend/-/commit/9382538b27224d0d6ffc4815798cfe0207040781 One thing that worked well during this sprint was persistence in troubleshooting. Even though the setu...
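Much of the configuration work came down to files like the one sketched below. This is a minimal Vitest configuration to illustrate the shape of the setup, not our actual project file; the environment and the glob pattern are assumptions:

```typescript
// vitest.config.ts — minimal sketch of a Vitest configuration
// (illustrative only; paths and options are assumptions, not our real setup)
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Run tests in a Node environment inside the Docker container
    environment: "node",
    // Only pick up files ending in .test.ts under src/
    include: ["src/**/*.test.ts"],
  },
});
```

Getting a small configuration like this to resolve cleanly inside Docker (correct dependencies installed, correct paths mounted) was most of the battle before any test could run at all.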