Partners & Technology

The Elevation of Quality Engineering: An Exploration of Visual Validation Tools

Source: Digital 2022: Global Overview Report — DataReportal, Global Digital Insights

Our digital footprint continues to expand rapidly in today's visual world. With a staggering increase of 95 million mobile phone users from January 2021 to January 2022, ensuring the visual integrity and quality of digital products has become paramount. Traditional testing methods often fall short in detecting visual defects, making specialised visual validation tools a must. These tools introduce a new dimension to the testing landscape, offering unique benefits and addressing specific challenges.

In this article, we embark on an exploration of visual validation tools, ranging from popular open-source options like Playwright to premium solutions like Percy from BrowserStack and Applitools. We will walk through a proof-of-concept approach and highlight what engineering professionals should keep in mind when selecting a suitable tool.

While visual validation tools have undoubtedly transformed quality engineering, it is essential to dispel the misconception that they can replace all other forms of testing. Varying browser behaviours, business transactions, and user journeys demand a comprehensive approach to testing. By recognising the value of visual validation as a powerful complement to other testing methods, quality engineering thought leaders can design robust testing strategies that maximise product quality and user satisfaction. Embracing a balanced approach will ultimately lead to superior digital experiences and enhanced customer trust in our ever-evolving digital landscape.

By shedding light on the benefits and potential caveats of these visual validation tools, we aim to equip quality engineering thought leaders with the knowledge they need to make informed decisions. Join us as we navigate the dynamic landscape of visual validation and uncover the power these tools hold in revolutionising quality engineering practices.

The 3 rivals

I have undertaken the task of evaluating three prominent visual validation tools: Playwright, Percy by BrowserStack, and Applitools. This evaluation aligns with the principles of Shift Left and early testing, enabling me to incorporate testing activities at the earliest stages of development. To further optimise my testing efforts, I am leveraging the capabilities of GitHub Copilot to collaboratively build a robust test strategy and architecture concurrent with the app's development.

Playwright, a versatile open-source tool, offers a comprehensive solution for visual validation. We explore how it integrates seamlessly with React Native development, allowing us to write visual validation tests in popular programming languages such as JavaScript and TypeScript. By harnessing Playwright's capabilities, we can validate our React Native app's visual elements across different platforms and browsers.
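To make this concrete, here is a minimal sketch of such a test using Playwright's built-in screenshot assertion. The URL, snapshot name, and threshold are placeholders for illustration, assuming the app is served locally (for example, through Expo's web target).

    // visual.spec.ts — a minimal sketch; the URL and snapshot name are placeholders.
    import { test, expect } from '@playwright/test';

    test('home screen renders consistently', async ({ page }) => {
      // Assumes the app is being served locally, e.g. via Expo's web target.
      await page.goto('http://localhost:19006');

      // The first run stores a baseline screenshot; subsequent runs diff the
      // current page against it and fail on unexpected visual changes.
      await expect(page).toHaveScreenshot('home.png', { maxDiffPixelRatio: 0.01 });
    });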

I conducted a preliminary visual comparison across Playwright-supported browsers and various breakpoints, which yielded notable advantages. Primarily, I observed a significant reduction in test code, as I no longer needed to write extensive assertions to validate individual elements. Moreover, discrepancies were surfaced promptly, further enhancing the efficiency of the process.
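As an illustration of how that comparison can be driven from configuration rather than extra test code, the sketch below defines Playwright projects for different browsers and viewport breakpoints; the project names and sizes are examples, not the exact setup used in the POC.

    // playwright.config.ts — illustrative projects for cross-browser,
    // cross-breakpoint visual runs; names and viewport sizes are examples.
    import { defineConfig, devices } from '@playwright/test';

    export default defineConfig({
      projects: [
        { name: 'chromium-desktop', use: { ...devices['Desktop Chrome'], viewport: { width: 1280, height: 800 } } },
        { name: 'firefox-desktop', use: { ...devices['Desktop Firefox'], viewport: { width: 1280, height: 800 } } },
        { name: 'webkit-mobile', use: { ...devices['iPhone 13'] } }, // emulated mobile breakpoint
      ],
    });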

One of the significant challenges I encountered during the development process was dealing with dynamic API responses that constantly changed due to factors like time and weather conditions. This posed a unique problem for visual validation, as the changing data led to frequent failures in image validation. However, I was able to overcome this challenge by leveraging the assistance of GitHub Copilot.

Using Copilot's AI-powered capabilities, I utilised its expertise to generate mock data that accurately replicated the static visuals I intended to validate. By basing the mock data on the contract defined by the API, I ensured that the functional aspects of the application were not compromised. This approach allowed me to separate the visual validation concerns from the data variability, ensuring that any failures in visual validation were truly indicative of visual defects rather than data-related issues.
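In Playwright terms, this kind of mocking can be done by intercepting the network call and returning a static payload. The sketch below is only indicative: the endpoint pattern and payload shape are invented stand-ins for the real API contract.

    // Intercept the dynamic endpoint and return fixed data so the screenshot
    // stays stable; the URL pattern and payload are hypothetical examples.
    import { test, expect } from '@playwright/test';

    test('forecast card matches baseline with mocked data', async ({ page }) => {
      await page.route('**/api/forecast*', (route) =>
        route.fulfill({
          status: 200,
          contentType: 'application/json',
          body: JSON.stringify({ city: 'Sydney', tempC: 21, condition: 'Sunny' }),
        })
      );

      await page.goto('http://localhost:19006');
      await expect(page).toHaveScreenshot('forecast-mocked.png');
    });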

By employing Copilot to automate the generation of mock data aligned with the API contract, I achieved a more reliable and consistent visual validation process. This enabled me to focus on the true visual aspects of the application while maintaining confidence in the functional accuracy of the app.

The excitement of the development process took a slight turn when I discovered that the visuals of my application did not match when viewed on an iOS simulator. Unfortunately, I encountered difficulties implementing Playwright with iOS simulators, which required me to explore alternative options. It was at this point that I turned my attention to Applitools.

Open your eyes!

With its cross-platform capabilities, Applitools offered a promising solution to my visual validation challenges. By leveraging Applitools, I could effectively test the application's visuals across various platforms, including iOS simulators. This allowed me to gain a comprehensive understanding of how the application appeared and functioned across different devices, ensuring a consistent user experience.

Applitools provided me with a range of features designed specifically for visual validation, such as pixel-level comparisons and advanced image recognition algorithms. These capabilities enabled me to identify even the subtlest visual discrepancies and ensure the visual integrity of my application.

By adopting Applitools as my visual validation tool, I could overcome the limitations I faced with Playwright's native visual validations and iOS simulators. I also no longer had to mock the API calls, since Applitools offers layout-based validation in addition to full pixel-by-pixel comparison.
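As a rough sketch of how this looks with the Applitools Eyes SDK for Playwright, the example below runs a strict (pixel-level) check alongside a layout check that tolerates changing data. The app and check names are placeholders, and the exact API surface may differ between SDK versions.

    // eyes.spec.ts — a sketch using @applitools/eyes-playwright; names are placeholders.
    import { test } from '@playwright/test';
    import { Eyes, Target } from '@applitools/eyes-playwright';

    test('home screen visual check', async ({ page }) => {
      const eyes = new Eyes();
      await eyes.open(page, 'WeatherApp', 'Home screen');

      await page.goto('http://localhost:19006');

      // Strict, pixel-level comparison of the full page.
      await eyes.check('Home - strict', Target.window().fully());

      // Layout match level: flags structural shifts but tolerates changing
      // text and data, which avoids having to mock the API responses.
      await eyes.check('Forecast - layout', Target.window().layout());

      await eyes.close();
    });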

Despite this, I encountered persistent challenges in effectively validating the application and identifying the discrepancies I was observing. Consequently, I began exploring alternative methods to address this issue.

Initially, I attempted to test the web app in iOS Safari, hoping to gain insights into the differences observed after compiling the IPA (iOS application package). Regrettably, this approach did not yield the desired results. I invested time and effort into investigating ways to integrate mobile app testing into my existing toolkit, which consisted primarily of Playwright and Applitools. Unfortunately, my investigations led me to the conclusion that a fundamental change in approach was necessary to achieve the desired outcomes.

The Competition

I continued to investigate options that would let me keep using JavaScript and TypeScript, so that the suite would be easier to hand over to a React Native development team, but I could not find a way to do this with my current setup. Acknowledging these limitations, I realised the importance of reassessing my testing strategy and exploring alternative methodologies. (Later on, however, I did find instructions on how to achieve this with JavaScript.)

To conduct a thorough comparison of the benefits offered by different tools, I initiated a separate project dedicated to evaluating their capabilities. During this stage, I turned to the setup provided by BrowserStack, which encompassed WebdriverIO, App Automate, and Percy App.

One of the notable challenges I encountered involved compiling my mobile application and loading it onto the BrowserStack server so that tests could be run against the compiled build. This was additional work compared to my local setup, where I could conveniently load my React Native app using the Expo library.

During this process, I discovered the value of having a fast, local method for generating and deploying my application. Expo proved invaluable here, providing a user-friendly platform for building the app quickly. Leveraging Expo, I was able to efficiently build my test suite and begin exploring how to incorporate this workflow into a continuous integration and continuous deployment (CI/CD) pipeline. Expo not only streamlined development and deployment but also allowed me to focus on building a comprehensive test suite covering the critical aspects of visual validation and functional testing.

By addressing the challenges associated with compiling and loading the application onto the BrowserStack server, Expo played a pivotal role in enhancing the efficiency and effectiveness of my testing process.

With the issue of the application build resolved, I proceeded to explore the BrowserStack documentation for WebdriverIO, which proved a valuable guide through the next steps. Familiar with the process from previous proof-of-concept (POC) projects involving BrowserStack, I was able to quickly grasp the procedures needed to upload my IPA file to the BrowserStack platform.
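For reference, the WebdriverIO configuration for running against the uploaded build looked roughly like the sketch below; the device, credentials, and the bs:// app id (returned when the IPA is uploaded) are placeholders rather than the actual values used.

    // wdio.conf.ts — a rough sketch of a BrowserStack App Automate setup;
    // credentials, device, and app id are placeholders.
    export const config: WebdriverIO.Config = {
      user: process.env.BROWSERSTACK_USERNAME,
      key: process.env.BROWSERSTACK_ACCESS_KEY,
      services: ['browserstack'],
      specs: ['./test/specs/**/*.ts'],
      capabilities: [{
        platformName: 'iOS',
        'appium:deviceName': 'iPhone 14',
        'appium:automationName': 'XCUITest',
        'appium:app': 'bs://<app-id-returned-by-the-upload>',
      }],
      framework: 'mocha',
    };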

Referring to my prior experience, I followed the established workflow and made the necessary adjustments to adapt my existing test suite developed for Playwright and Applitools. However, I encountered an inconsistency in the object mapping between the two frameworks, requiring additional modifications to both the application and my test suite.

To address this discrepancy, I meticulously updated the necessary components within the application codebase, ensuring compatibility with the WebdriverIO framework. Simultaneously, I adjusted my test suite to align with the new object mapping structure, enabling seamless execution of the test scenarios within the BrowserStack environment.
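To illustrate the kind of object-mapping gap involved, here is a hypothetical WebdriverIO spec against the compiled app; the identifier name is made up. Where Playwright located elements on the web build via test ids (for example, page.getByTestId('forecast-title')), the same React Native testID typically surfaces to Appium as an accessibility id, which is why both the app and the test suite needed adjusting.

    // forecast.e2e.ts — a hypothetical WebdriverIO (Mocha) spec; 'forecast-title'
    // is an invented identifier exposed via the component's testID prop.
    describe('forecast screen', () => {
      it('shows the forecast title', async () => {
        // '~' is the Appium accessibility-id selector strategy.
        const title = await $('~forecast-title');
        await expect(title).toBeDisplayed();
      });
    });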

While these changes added a layer of complexity to the overall process, they were crucial in ensuring the successful integration of WebdriverIO with BrowserStack. By making the necessary adaptations and overcoming the object mapping challenges, I was able to maintain the integrity of my tests and effectively validate the visual aspects and functionality of my application using the BrowserStack platform.

While the initial setup process of BrowserStack was straightforward and easy to follow, I encountered challenges when it came to understanding the reporting functionality. In my assessment, there are certain areas where BrowserStack could benefit from further improvement.

One particular challenge I faced related to the developer flow. Currently, to gain a comprehensive understanding of the executed tests and their outcomes, the test builder needs to navigate through three different locations: the local IDE, BrowserStack, and Percy. This disjointed experience could be streamlined to enhance efficiency and ease of use.

Additionally, I observed that the out-of-the-box configuration of BrowserStack can produce false passes: test results may indicate successful execution even when actual issues are present. Without proper knowledge of the tool or thorough investigation, these misleading results can lead to critical issues being overlooked.

On the positive side, BrowserStack excelled in certain areas. One notable feature was the ability to generate video recordings of test executions. This functionality was readily available without any additional setup, providing users with valuable evidence and aiding in troubleshooting efforts.

Furthermore, I appreciated the trial period offered by BrowserStack, which allowed me to conduct a comprehensive proof-of-concept (POC) incorporating live app testing, app automation, and Percy App. This trial period enabled me to thoroughly evaluate the capabilities of BrowserStack across multiple dimensions without any limitations.

Overall, while there are areas that I believe could benefit from improvement, BrowserStack offers valuable features such as video execution recordings and a generous trial period that contribute to the testing process.

Conclusion

In conclusion, it is important to recognise that the selection of a visual validation tool is just one aspect of a quality engineer's role. Rather than solely focusing on the tool itself, our primary objective should be to identify and address the underlying problem statements in our quality engineering efforts. It is crucial to understand the specific challenges and requirements of our value stream, and to implement the appropriate capabilities that align with those needs.

While each visual validation tool may have its own strengths and weaknesses, the ultimate goal is to ensure the visual integrity and quality of digital products. This involves considering factors such as cross-platform compatibility, ease of use, reporting capabilities, and the ability to detect visual defects accurately.

By understanding the problem statements at hand and evaluating the value stream, we can make informed decisions about the most suitable visual validation tool or combination of tools. This holistic approach empowers quality engineers to proactively address challenges, enhance the efficiency of testing processes, and deliver high-quality digital products.

Ultimately, the success of visual validation efforts lies in our ability to align tools, techniques, and methodologies with the problem statements we aim to solve. By continually striving to understand and address the relevant value stream, we can effectively contribute to the overall quality and success of digital projects.

Alejandro Sanchez-Giraldo
Head of Quality Engineering and Observability

Alejandro is a seasoned professional with over 15 years of experience in the tech industry, specializing in quality and observability within both enterprise settings and start-ups. With a strong focus on quality engineering, he is dedicated to helping companies enhance their overall quality posture while actively engaging with the community.

Alejandro actively collaborates with cross-functional teams to cultivate a culture of continuous improvement, ensuring that organizations develop the necessary capabilities to elevate their quality standards. By fostering collaboration and building strong relationships with internal and external stakeholders, Alejandro effectively aligns teams towards a shared goal of delivering exceptional quality while empowering individuals to expand their skill sets.

With Alejandro's extensive experience and unwavering dedication, he consistently strives to elevate the quality engineering landscape, both within organizations and across the wider community.
