For eCommerce operators, functional tests that can simulate how users actually interact with a specific system are critical, ensuring applications are working efficiently and securely. QA is one of the most valuable aspects of the development cycle, continually refining usability and improving functionality for business customers.
The key is to create functional tests that effectively simulate eCommerce user journeys. Currently, our developers manage quality with frameworks such as:
WebdriverIO - a framework for automating modern web and mobile applications that simplifies interaction with apps. This automation framework also offers a set of plugins for building a scalable test and validation suite.
Jenkins + Allure - a plug-in that generates the Allure report and attaches it to the build during the Jenkins job run.
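To illustrate how these two tools connect, here is a minimal sketch of a WebdriverIO configuration that writes Allure results for Jenkins to pick up. The file name, specs path, and capabilities are illustrative assumptions, not taken from the team's actual project:

```typescript
// wdio.conf.ts — a minimal sketch; paths and capabilities are illustrative.
export const config = {
    specs: ['./test/specs/**/*.ts'],
    capabilities: [{ browserName: 'chrome' }],
    framework: 'mocha',
    reporters: [
        ['allure', {
            // The Jenkins Allure plug-in reads this directory after the job run
            outputDir: 'allure-results',
            disableWebdriverStepsReporting: false,
        }],
    ],
};
```

With a configuration along these lines, each Jenkins job run produces Allure result files that the plug-in turns into the report attached to the build.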
Developers test the front-end, making sure clicks, navigation, and inputs work in eCommerce scenarios. With frameworks like WebdriverIO, Cypress, and Selenium, it is possible to implement all necessary tests and run them as needed. Functional test automation tools are very effective for scheduling runs and generating test reports, simulating real users with robot traffic.
Establishing Production Deployment Rules
As developers employ the test automation framework to cover a growing list of features, it becomes impossible to execute every test manually. Because of this, our team has defined rules that help a business decide whether a version is good enough to deploy. Our developers selected the central, critical scenarios that should never fail in production.
Before deployments, all QA processes must be approved. This involves running every automated test. Should any one of them fail, developers are notified and the deploy is aborted until there is a fix.
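The go/no-go rule above can be sketched as a small gate function. The names (`TestResult`, `canDeploy`) and shape are illustrative assumptions, not part of any real pipeline API:

```typescript
// A minimal sketch of the go/no-go deployment gate described above.
// Scenario names and types are hypothetical examples.
interface TestResult {
    scenario: string;   // e.g. "checkout", "search", "add-to-cart"
    passed: boolean;
}

// Deploy only if every critical scenario passed; otherwise report what failed
// so developers know what to fix before retrying the deploy.
function canDeploy(results: TestResult[]): { ok: boolean; failures: string[] } {
    const failures = results.filter(r => !r.passed).map(r => r.scenario);
    return { ok: failures.length === 0, failures };
}
```

In practice such a gate would sit in the CI pipeline, consuming the test framework's results and blocking the deploy step whenever the failure list is non-empty.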
Understanding When and How to Test
All automated test scripts are created and validated first on lower (non-production) environments; each client has a different number of lower environments available for testing. Some customers also ask the team to run in production without placing test orders, in order to evaluate the system overall. In that scenario, the test user is able to complete the checkout process while it remains unaffected for other customers. The team decides which environment to run in depending on the customer and their needs.
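The per-customer decision described above can be sketched as a small selection function. The environment names and the `CustomerConfig` shape are assumptions for illustration only:

```typescript
// Illustrative sketch of choosing a target environment per customer;
// environment names and the CustomerConfig shape are hypothetical.
type Environment = 'dev' | 'qa' | 'staging' | 'production';

interface CustomerConfig {
    lowerEnvironments: Environment[];  // non-production environments this client provides
    allowProductionRuns: boolean;      // client permits robot runs in production (no real orders)
}

function pickEnvironment(cfg: CustomerConfig, goingLive: boolean): Environment {
    // Pre-go-live validation runs in production when the client allows it...
    if (goingLive && cfg.allowProductionRuns) return 'production';
    // ...otherwise fall back to the highest lower environment available.
    return cfg.lowerEnvironments[cfg.lowerEnvironments.length - 1] ?? 'qa';
}
```

The point of the sketch is that the environment choice is data-driven per customer, not hard-coded into the test suite itself.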
Prior to Going Live
Just before going live, the development team runs all automated tests that have been created. The goal is to make sure that all scenarios elected for tests are working, thus allowing the customers to feel comfortable with the decision to go live.
During this phase, developers run automated tests without placing orders. The runs are tagged as robots, to avoid skewing the metrics used to monitor real users; for example, while running in production, test runs do not appear in Dynatrace.
The test traffic appears as robots, making it possible to filter it out when analyzing traffic and performance. Before running in production, the team ensures the tags are triggered so the runs appear as robots and do not affect any real-traffic metrics.
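One common way to implement this kind of tagging is to add a marker to the robot's user-agent string, which monitoring can then filter on. The marker string and function names below are assumptions for illustration, not a Dynatrace API:

```typescript
// A minimal sketch of tagging automated runs as robot traffic;
// the marker string and helper names are hypothetical.
const ROBOT_MARKER = 'qa-automation-robot';

// Headers sent by test runs so monitoring tools can classify them as synthetic.
function robotHeaders(baseUserAgent: string): Record<string, string> {
    return { 'User-Agent': `${baseUserAgent} ${ROBOT_MARKER}` };
}

// Monitoring-side filter: exclude robot requests before computing real-user metrics.
function isRealUser(userAgent: string): boolean {
    return !userAgent.includes(ROBOT_MARKER);
}
```

With both sides agreeing on the marker, production test runs stay visible to the team but invisible to real-user dashboards.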
Automated tests cannot substitute for manual test execution, and it is still important to test manually when features are not yet stable enough for automation, i.e., while development is still in progress. It is critical to note that we do not automate features that show instability or have open bugs.
Automation's purpose is to run regressions, ensuring that the system still works as expected after changes are made to features that are already approved. To implement automated tests, developers must follow code-development patterns. This includes how the code is organized and structured: clean, organized code is key to enabling the creation and maintenance of test automation.
The quality of the system rests on the definition and application of code-development patterns. The development team then documents all test cases to be automated in TestRail and, finally, plans the scenarios, selects the main ones, and avoids creating a test project larger than the original system project.
About the Author
Quality Assurance Global Manager
Patricia is a computer scientist with 18 years of experience in technology, including software development, software quality, and management. Specialized in software quality and focused on automated testing, she combines technical knowledge with the best tools and techniques.