[Music]
Happy birthday, dear podcast, happy birthday to you! Yes, my dears, the podcast turns one year old on April 18th. And we want to celebrate that, which is why there is a new episode every day this week. I call that a shower of gifts. Enjoy the episodes and have a great new podcast year. [Music] Hello and welcome to a new episode of the Software Testing podcast. I am your host, Ritschi, and I have brought along another episode from QS-Tag 2023 in Frankfurt.
This time I talked, in English, to Mesut Durukay about his experience with selecting an end-to-end test framework. He had reached the point where test automation no longer worked as it should, and he set out to evaluate, test, and select end-to-end test automation frameworks. In this episode we discuss the pros and cons he found and what was decisive for his decision.
If you like this podcast, keep learning something new from it, find some inspiration, or simply pick up new ideas, then of course I would be very happy about a small rating on the usual portals and feedback to me at [email protected]. And now have fun with the episode. [Music] Hi, welcome to the show. Thank you for being here. Hello, it's my pleasure. Thanks for inviting me. We are here at QS-Tag in Frankfurt. And you arrived today, you said? Yes, I arrived in the morning, early morning.
But I like Frankfurt. If I'm not wrong, it's my fourth time joining QS-Tag. I like the conference. I know some people who join regularly, like myself. And I like the city itself. So everything is perfect. How long was your flight today? I came from Singapore. I normally live in Japan, but I had a transit there. It took 12 hours in total. Okay, so now you're fresh. Yes, I am.
So, okay. I read your abstract, and you described your switch from one end-to-end test framework to another. I think this is a very interesting topic for our audience here on the podcast. So let's dive in: why did you have problems with your old end-to-end framework? What was the situation? Yeah, starting from a higher perspective, I believe automation is an important part of quality assurance, because we are trying to execute a lot of test cases.
And of course, doing all of them manually is not easy, so obviously we need and use automation. And since automation is an important part of quality assurance, choosing the correct tool set is an important part of building the automation environment. Because if we go with the wrong tools or wrong alternatives, libraries, frameworks, whatever we use in our environment or ecosystem, then we might not be able to cover everything that we are supposed to cover.
For example, as you said, in my proposal I was telling a story. I joined the project and I didn't make the initial decision. They had already come up with some solutions, and I started using them. After some time, I figured out that the tool which had already been chosen and was being used in the project did not support all the browsers that we were supposed to test. For example, in my real-life case, as I said before, I live in Japan.
And in that area, Safari usage in particular is very high, because people love using iOS and macOS systems. So most of the traffic comes from Safari. And then I figured out that the tool we were using did not support executing test cases on Safari. So this was not the correct decision at all. Then I started searching for more alternatives, and I did a kind of benchmarking study: I collected several alternatives and highlighted the pros and cons of each of them.
And I did a kind of prioritization. Obviously, they all have different weaknesses and different strengths. I cannot say one tool is the best one; every one of them has different strengths and different weaknesses. But the important thing is that I figured out and highlighted which features or attributes of each alternative had the highest priority for me. Executing on the Safari browser was obviously important for me, so that attribute had a high priority.
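[Editor's note: a minimal sketch of the weighted prioritization described here, in JavaScript. The criteria, weights, and scores are hypothetical placeholders, not the speaker's actual numbers.]

```javascript
// Weighted prioritization: score each framework per criterion (0-5),
// multiply by the criterion's weight, and sum up.
const criteria = [
  { name: 'Safari/WebKit support', weight: 5 },
  { name: 'Execution speed', weight: 4 },
  { name: 'Debugging and reporting', weight: 3 },
  { name: 'Community and docs', weight: 2 },
];

// Scores per tool, in the same order as the criteria above (all made up).
const scores = {
  Cypress: [0, 3, 5, 5],
  Playwright: [5, 5, 4, 4],
  Nightwatch: [4, 3, 3, 2],
};

for (const [tool, values] of Object.entries(scores)) {
  const total = values.reduce((sum, v, i) => sum + v * criteria[i].weight, 0);
  console.log(`${tool}: ${total}`);
}
```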
So I did this kind of benchmarking study and then tried to choose the best option for me. That gave me more options and alternatives to improve my coverage. So this is how the tool set I chose directly affected my ability to build my test automation environment. As for licensing, the former tool and the other tools were a mix of closed-source, open-source, paid, and free frameworks. Most of the time it was a combination: there is a basic set of features you can use freely, but if you need some extra features, you have to pay. So most of the time, all the tools supported a basic set of features for free, and the extra features were paid. Okay, I see. And what was your strategy for working out what you, or maybe your stakeholders, need from automation? How did you approach that? First of all, I had my personal experience, like what kinds of things we are trying to cover.
For example, I was trying to ensure the quality of a web application. So I automated some human interactions on the web pages, and I figured out what kinds of elements I was interacting with. Did I have any iframes? Was I opening new tabs or new pages while executing my test scenarios? So I listed the features I needed to execute my test cases. That was my personal experience.
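[Editor's note: a sketch of two of the interactions mentioned here, using Playwright syntax for illustration. The URL and selectors are hypothetical.]

```javascript
const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto('https://example.com');

  // Interacting with an element inside an iframe
  await page.frameLocator('#embedded-form').locator('#submit').click();

  // Catching a page that opens in a new tab
  const [newTab] = await Promise.all([
    context.waitForEvent('page'),
    page.click('a[target="_blank"]'),
  ]);
  await newTab.waitForLoadState();

  await browser.close();
})();
```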
And then I discussed it with several parties. I talked to the product owners, in case I was missing something or they wanted me to execute more test cases. I talked to the developers about any pain points or risks they were aware of that I had to cover. So based on my own experience and on talking with all the parties in the project team, I came up with a list of features that I really needed in my automation environment.
So you had a very clear set of expectations for a tool or a framework. We have a lot of ICT tools and vendors here, and they all say you have to think about what you want and then check which tools are able to do that. But a lot of people don't do that; they just buy something and try it. Definitely. There is a clear part to it, but there are some gray areas as well. For example, whenever we are talking about execution speed: what is the level of speed that I need? There's no defined level. It should be as fast as possible, but there is no hard requirement like "all test cases should execute in under 10 seconds." There's no such requirement. But obviously I need a framework that is capable of executing test cases as fast as possible. So I need speed within the range the tool set supports, but sometimes there are no very clear requirements. And how long was your list of features? How wide was the range of features?
Actually, it wasn't too long. The web application I was testing was not too complicated or complex. I just had some simple web pages, so I was not switching between different domains; it was basically one single domain. And I didn't have very complex elements: not too many drop-down menu items, mostly just some text fields and buttons, checking expected labels, or navigating to different pages. So I had some simple scenarios, but there were certain features I needed.
For example, I needed to execute not only against desktop web applications but also against mobile. For that, obviously, we don't always execute on real devices; sometimes we have to do the simulation in the browser itself. So on top of executing against the desktop browser versions, I had to execute them with mobile simulation. The feature I was looking for was support for device emulation. That was one feature I needed.
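[Editor's note: a minimal sketch of mobile simulation in a desktop browser, assuming Playwright's built-in device descriptors. The URL is a placeholder.]

```javascript
const { chromium, devices } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  // Emulates the viewport, user agent, and touch support of a real phone
  const context = await browser.newContext({ ...devices['iPhone 13'] });
  const page = await context.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());
  await browser.close();
})();
```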
Other than that, I'm trying to recall... sometimes I had to customize the requests. From the user's perspective, I'm doing some interactions with the system, so basically I'm sending some requests. Sometimes, for testing purposes, I had to customize these requests: capture them and make some modifications to simulate or trigger different use cases. So these kinds of customizations were something I wanted the tool I chose to cover.
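[Editor's note: a sketch of capturing and modifying requests to trigger different use cases, using Playwright's route interception for illustration. The URL pattern and header are hypothetical.]

```javascript
const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Intercept matching requests and continue them with an extra header
  await page.route('**/api/**', (route) => {
    route.continue({
      headers: {
        ...route.request().headers(),
        'x-test-scenario': 'expired-session',
      },
    });
  });

  await page.goto('https://example.com');
  await browser.close();
})();
```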
Okay. And how many tools did you end up checking against your expectations and examining for pros and cons? There are certain well-known tools everyone knows, so I came up with four or five of them. Maybe I can list some names: the most commonly used ones are Cypress and Playwright; Selenium is one of the oldest ones; then TestCafe and Nightwatch. These are also among the frameworks most downloaded from the npm registry.
So I chose the most widely and commonly used frameworks in the community, and from these options I tried to choose the one that best served my goals. Yeah. And you examined the pros and cons of the tools. Can you tell us, for some of them, which pros and cons stood out to you? Yeah, I remember some of them. For example, in Cypress I liked the reporting feature very much. It has a dashboard; you don't need to do anything extra.
Just execute your test cases, and all the results are automatically reported. Whenever your execution is done, you can go to the dashboard. I guess nowadays it's called Cypress Cloud; they renamed the platform. You can just go and check all the details, including execution durations, like which test case took how much time, and of course all the failures, all the flakiness, everything. All the execution details are easy to see.
For the others, you have to add some custom solutions. So in Cypress, the feature I liked most was obviously that one. And the Cypress Runner as well. Cypress has its own runner: it opens a new window and you can easily debug your test cases. With some of the others, you have to do the debugging in your IDE, your development environment, but in Cypress the tool itself provides that. In Playwright, on the other hand, the speed is of course one of its great strengths.
It was like three or four times faster than the other options. And it was lightweight. Whenever you execute your test cases on the pipelines, you first have to download the Docker images with the relevant versions and then start your executions. The Playwright image itself was very lightweight and did not take as much time as the other options. So Playwright was very fast, I can say that. And what else? Nightwatch was also very simple and easy to understand.
One of the things I liked most about Nightwatch was the usability and readability of the code you implement. So each of them has different strengths, and different weaknesses as well. Can you tell us something about the weaknesses, too? We are an open community here, so we can take it. The tool which did not support executing on Safari was Cypress. By the way, these tools are improving their versions all the time.
They are developing and delivering new versions every day. At the time I tried it, Cypress did not have any solution for Safari execution. But then, I guess, they started beta versions for executing test cases on WebKit, which is basically the engine behind the Safari browser. So at that time, the weakness of Cypress for me was the support for Safari or the relevant drivers. For Playwright, I didn't find any particular weakness.
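[Editor's note: for reference, a minimal sketch of running against WebKit, the engine behind Safari, with Playwright, which is the capability discussed here. The URL is a placeholder.]

```javascript
const { webkit } = require('playwright');

(async () => {
  const browser = await webkit.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log(await page.title());
  await browser.close();
})();
```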
In general, it met all the expectations I had. With Nightwatch, I remember it didn't have... By the way, some of these tools are fully open source, and some are backed by big communities or companies. Nightwatch is open source, and it didn't have as much support as the others: it had more open issues, open tickets. So that was the thing about Nightwatch. And the documentation was a bit thin. For example, when I have an issue and I search for it in Cypress... by the way, Cypress is older than Playwright, so it has more community support, and you can easily find all the relevant documentation you need. But for Nightwatch, for some issues I had, I couldn't find a clear solution. So that was the thing about that option. Okay, I see. So you did the evaluation, and now the question is obvious: which one did you choose? I chose Playwright. But it depends.
For my product, for my project, that was the best option. But the message of my talk and my study is that it depends on your requirements, on your expectations. Which feature is the most important for you? If it's execution speed, then you might choose Playwright. If it's ease of debugging or doing root cause analysis and troubleshooting, then maybe you should choose Cypress. Or maybe you need more customization, and then perhaps just Selenium, and on top of the Selenium framework you can develop and build your own solution. So it totally depends. My message would be: first of all, define your expectations and the requirements that you have, and then list the strengths and weaknesses of these tools against those requirements. I think that's a very important point: to think about what my requirements for a tool are.
As I said before, often the tools are simply already there and in use, and nobody thinks about what they can do or whether they fit our requirements. So I think this is a very important message you have here: to think about the requirements for such a tool. But another question I have is about the transformation. You had the old end-to-end tests. Did you convert all these tests, did you rewrite them, or did you delete them and write new ones?
No. Obviously, if I could simply delete some of them, it would mean those test cases were never needed at all. But I do need them; those are the test cases I have to execute. So I had to convert them. And how did I do that? By the way, from the start I tried to avoid duplication, so the common operations, the functions and methods, were already collected in helper classes, not in the spec files. That made the conversion easier for me. Those operations are implemented in the programming language I use, so whatever framework I'm using, they are just functions in that language. They can be executed, they're reusable. This is the importance of how you build your automation environment, regardless of the tool or framework you are using: how well your architecture is built. Is it built in a reusable way, or do you have a lot of duplication in the spec files?
For example, I have a login operation, right? And login is used in most test cases: first I log in, and then I do some operations. If I put the login steps inside each spec file, then I have to change them in every spec file. But if I have the login implementation in a plain JavaScript file, then even if I convert from Cypress to Playwright, the JavaScript file is still there; I don't have to change it. So this is the importance of the architecture and the patterns you apply.
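[Editor's note: a minimal sketch of the pattern described here. The login steps live in one plain JavaScript module instead of being repeated in every spec file; Playwright-style page calls and the selectors are used for illustration and are hypothetical.]

```javascript
// helpers/login.js -- shared helper, kept out of the spec files
async function login(page, username, password) {
  await page.goto('https://example.com/login');
  await page.fill('#username', username);
  await page.fill('#password', password);
  await page.click('button[type="submit"]');
}

module.exports = { login };
```

A spec file then only calls `login(page, user, pass)` before its own steps, so switching frameworks mostly means changing how `page` is created, not the helper itself.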
Yeah, that's a very big part of getting the transformation onto a good path. And I recall one more interesting point. When we talk about this kind of self-healing, modification, refactoring, or even conversion work, we can now talk about machine learning algorithms as well, because there is a way to use them: you basically provide your spec files and ask a machine learning platform to convert them. It's maybe not 100% accurate, but at a certain level of accuracy they provide some functions. Yeah, then you have a good foundation to do the rest manually. Yeah, exactly. You can reduce the manual effort. Okay. I think the talk you're giving is very important for making people aware that they should think about their requirements: how does the framework fit my expectations? Yes. And how can I make it work for me? And also to think about it retrospectively, to review the decisions made in the past.
Is my end-to-end framework still suitable for the future? I think those are two very important messages you have. Yeah. Yes. So, thank you very much for the insights you gave us here. I think we can all keep that in mind for our next projects. I agree. It should be a continuous way of thinking. Yeah. At any time, we can find a better approach; we can always find some room for improvement. So you highlighted a very important point. Thanks for raising this. I think whenever we come up with a solution, it doesn't mean that we can never change it again. There can always be different options and room for improvement, so we should continuously think about improving. Yes, that's true. Thank you very much for the talk here. Thank you. I wish you much pleasure today at QS-Tag, and good luck with your tutorial and your speech. Thanks. And yes, have a good time here.
Thank you. Thanks for the opportunity to discuss all these improvement topics. You're welcome. Thank you. [Music]