Hi everyone, co-founder of Mobile.dev and co-author of Maestro here. Thanks jztan for sharing Maestro and for the kind words—really glad to hear it's working well for you!
We built Maestro because E2E testing felt unnecessarily complicated, and we wanted something simple and powerful that anyone could use — whether you’re a seasoned developer or just getting started with automation. It’s been amazing to see it adopted at companies like Meta, Block, DoorDash, Stripe, and Disney, but honestly, what excites us most is seeing teams who’ve never done test automation before finally get a solid strategy in place because Maestro is so easy to use and get started with.
Oh, and if you’re wondering — yes, it works for web testing too!
We’re constantly iterating and adding features, so if you’ve got ideas, run into issues, or just want to chat, let me know. Always happy to hear how we can make it better.
Thanks again for checking it out, and happy testing!
We need to trigger external effects (e.g. on a USB-attached embedded device) and sometimes do stuff like forget a BLE device in system settings. We've been looking into making a fake mouse that can achieve the latter. If you can support both use cases, you've won.
That's what I'm doing with my new project, Valet. It's a Raspberry Pi configured to act as a fake mouse, keyboard, and touch stylus (for Android). Works well on iOS and Android.
We loved Maestro but we didn’t like the pricing and the team wanted something more predictable. We are using Moropo and it’s been great. Very affordable, good DX, and it’s all basically just Maestro with extras.
The advantages of using open source! It would be great if it became an industry standard and more companies offered it as a service.
Yep, free and open-source to use. Plenty of folks run Maestro directly on GitHub Actions, Bitrise, etc. Teams often run on our hosted cloud infra for parallelism and reliability when scaling up their testing, but that's totally up to you!
Hey tibbe - co-founder of Mobile.dev here. First off, totally get where you're coming from. We do offer a startup discount, but would love to dig in more to see if there's something we can work out. My co-founder and I would love to chat if you're open to it! Just shoot me a note if interested! leland@mobile.dev
Installed and tried it for a sample Flutter app. So far looks too good to be true :) Super easy to start and tinker with. And surprisingly fast. Learning how to write real-world tests with Flutter apps will probably have some learning curve, but that's expected.
Would be amazing to use it with Flutter desktop (macOS at least) to avoid running iOS simulators.
I've been using Maestro for two very large Flutter apps and it's so far ahead of every other option it's not even funny.
- No long compilation times
- No half-baked testing dev experience
- Supports both iOS and Android
- No pumpAndSettle BS, no Flutter hacks
- Multiple cloud providers (cloud.mobile.dev, Moropo)
- You can interact with native elements, so you can work with push notifications, system dialogs, system settings, email clients, web views, and browsers
- Very simple test definition files that every capable QA engineer can maintain with very little supervision from developers (no Dart expertise needed to write tests)
haha happy to hear that :) (I created Patrol) ((and then worked on Maestro at mobile.dev briefly))
I too think that for the vast majority of use cases, Maestro is the best solution. Fast and easy to write and run.
Also it's cool that it's open-source and has a very strong community. I'd be skeptical about having all my tests stored in some SaaS that I can't even run locally (as some other solutions do).
Maestro being open source, being able to run it locally AND having two providers where we can just sign up and start running our tests was an important factor in going with them.
We ended up going with Moropo as their pricing matched our needs better. The fact that, even if we had issues with them in the future, we could just move to a different provider is a big plus.
I found out about Maestro after coming across Flashlight. I was looking for something that could effectively give me a performance score for my apps, like Google Lighthouse but for mobile. I found that in Flashlight. I found Maestro was relatively easy to pick up, like others have said before.
I'm a performance engineering consultant, but my apps are side projects, so I needed something that helps me do some quick performance testing. Maestro, and Flashlight, help me do that. It's early days, but I'm actually working on a separate product to use both Flashlight and Maestro to test on any number of real devices so I can get performance score trends across devices. Contact me if you're interested.
Looking forward to testing some of the updates with Maestro, especially web and iOS device support.
We extensively use Maestro in our testing setup. We test our Android (+ Android TV) and iOS apps on a couple of different emulators, and use it to take a bunch of screenshots to generate diff reports.
The only thing we don't like so far is that it is not extensible at all. And the AI direction is one that we absolutely don't care for either.
We built a huge wrapper script in Python that allows us to spin up and control a WireMock server, as well as implement features that are not supported by Maestro directly. We make calls to a local webserver (spawned by our wrapper) from the Maestro test to do this, which works surprisingly well, but it feels like we could really leverage custom YAML commands or something like that.
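To give an idea of the shape of this pattern, a control server like that boils down to something like the sketch below (the endpoint paths and helper functions are simplified placeholders, not our actual code); the Maestro flow then just makes HTTP calls against localhost:

    # Minimal sketch of a local "control server" a wrapper script can spawn.
    # Endpoint paths and helper functions are hypothetical placeholders.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def reset_wiremock_stubs() -> None:
        """Hypothetical: call the WireMock admin API to reset stubs."""

    def trigger_external_effect() -> None:
        """Hypothetical: do something Maestro can't do directly."""

    class ControlHandler(BaseHTTPRequestHandler):
        def do_POST(self) -> None:
            if self.path == "/wiremock/reset":
                reset_wiremock_stubs()
            elif self.path == "/external/trigger":
                trigger_external_effect()
            else:
                self.send_response(404)
                self.end_headers()
                return
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        # The Maestro test hits these endpoints over plain HTTP.
        HTTPServer(("127.0.0.1", 9999), ControlHandler).serve_forever()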
Trying this out at work and so far it has been leagues better than other mobile automation tools. I have just gotten started but it has been encouraging.
I used Maestro for one year after we switched from Detox. It's awesome to start end-to-end tests with and definitely the most accessible. However, in the end, we had to switch to Appium. While it's great to get started quickly, I definitely wouldn't recommend it for a serious production system pipeline. We encountered several issues:
- When attempting to write logic using JavaScript, there were issues such as error stacks providing no useful information and a complete lack of console logging. The injection of variables and the custom fetch also made linting ineffective. At least Maestro now supports ES6 via GraalJS.
- Coordination of test flows is lacking. I wish I could retry each flow individually. Ultimately, I had to create a wrapper (director) around Maestro to provide things like recordings, retries on failing tests, and (relevant) JSON output for our CI; a minimal sketch of that retry loop is below. I also needed to write custom reporters for Slack and other integrations. While these are not a core need for a testing tool, when I switched to Appium + WebDriver, most of these tools were available out of the box (though they come with their own issues as well).
- When we updated Xcode or iOS to newer versions, things tended to break. We often had to freeze the pipeline versions and wait until fixes were released, regularly checking GitHub issues to see if updates became available.
- We also experienced strange, random test timeouts, which started happening frequently enough to break runs even with retries. These happened even though we had frozen versions of Maestro, Xcode, and iOS, so it wasn't Maestro's fault, but it was problematic enough that we decided to move away from it, because we couldn't isolate the root cause.
I definitely miss the simplicity of Maestro. Appium takes more time to set up and comes with its own set of issues.
I still follow the project (and also the Flashlight.dev project, which uses Maestro for performance measurement), and I'm looking forward to updates, which the team ships constantly.
If you're a startup with no QAs, I'd definitely go the Maestro route, but I'd avoid it for a complex app pipeline use case, at least for now.
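For reference, the per-flow retry part of such a director is conceptually just a loop around the CLI. A rough sketch, not our actual implementation (flow names and the retry count are made up, and it assumes the maestro CLI is on PATH):

    # Sketch of a per-flow retry "director" around the maestro CLI.
    import json
    import subprocess

    def run_flow(flow: str, retries: int = 2) -> bool:
        """Run one flow, retrying it individually on failure."""
        for _ in range(retries + 1):
            if subprocess.run(["maestro", "test", flow]).returncode == 0:
                return True
        return False

    if __name__ == "__main__":
        flows = ["login.yaml", "checkout.yaml"]  # illustrative flow files
        results = {flow: run_flow(flow) for flow in flows}
        print(json.dumps(results, indent=2))  # machine-readable output for CI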
> "Appium takes more time to set up and comes with its own set of issues.
Hi, Appium project creator here. I'm working on something new (complementary to Appium) to address Appium set-up and other issues. If you ever want to chat, I'd love to hear more.
First of all, I didn't make this, let me be clear, and I don't work for the company Mobile.dev.
I've been looking for a replacement for Appium because that project's documentation is absolute garbage. Maestro boils everything down to YAML and runs its own test server so that you don't have to worry about connecting to the device drivers. It's missing an API, but who needs an API when the CLI is so beautiful.
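To give a flavor: a flow is just a small YAML file that you hand to the CLI. A minimal sketch (the appId, selectors, and file name are placeholders):

    # Sketch of the model: write a tiny YAML flow, run it with the CLI.
    import pathlib
    import subprocess

    FLOW = """\
    appId: com.example.app
    ---
    - launchApp
    - tapOn: "Login"
    - inputText: "user@example.com"
    - assertVisible: "Welcome"
    """

    pathlib.Path("login.yaml").write_text(FLOW)
    subprocess.run(["maestro", "test", "login.yaml"], check=True)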
Does anyone know of anything on par with this that I should try? So far this has knocked my socks off.
Co-author of Maestro here - really appreciate the support, jztan! If you get a chance you should also try out web support, which we recently released! And always open to feedback, so please let me know if there's anything you think can be improved!
Hi Jztan, glad you're exploring this space! I'm the co-founder of MobileBoost, and I'd love to introduce our product, GPT Driver (https://www.mobileboost.io/).
We started two years ago with an AI-native approach, which is particularly useful for handling dynamic flows, hard-to-locate UI elements, and testing across multiple platforms and languages. Our main objective is to reduce test maintenance effort.
We offer:
- A Web Studio – a no-setup-required platform with all tooling preconfigured.
- SDKs – direct integration with existing test suites (Appium, XCUI, Espresso).
Yes, you can use our SDKs to run it locally on Simulators, Emulators, and real devices. We also support popular third-party device farms via the WebDriver protocol.
By default, the SDKs use our API endpoints, where we run a combination of models to maximize accuracy and reliability. This also enables us to provide logging with screenshots and reasoning to help with debugging.
That said, we're currently experimenting with a few customers who run our tooling against their own hosted models. While it's not publicly available yet, we might introduce that option going forward.
Would love to hear more about your use case: is a self-hosted setup relevant, or just the use of your own LLM tokens?
Really like the fact that it's easy to start doing something useful. I may end up using it for some screen scraping too. Puppeteer is powerful, but the scripts tend to be brittle.
the fact that it's open-source and nicely structured internally lets you "peel off" the topmost YAML layer and just use the underlying components to interact with the mobile device, using your JVM-compatible language of choice.
Awesome to hear! There's still tons we want to do on the Web side, so please let me know if there's anything you think should be added or improved there! Feel free to tag/DM me (@Leland) in our Slack community or email me leland@mobile.dev with questions/suggestions!
Maestro is great. However, it lacks many important features you might need.
For example, Maestro does not let you coordinate multiple test flows together. One test case I had was one phone initiating a call and another answering it. Instead, Maestro prefers that every flow be self-contained and will not run both in parallel reliably.
I found many such limitations in its design only after writing a whole lot of its custom flow syntax.
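If you need this today, about the best you can do is drive two self-contained flows from an outer script. A rough sketch (device IDs and flow files are made up, and true synchronization between the two phones still needs some out-of-band signal):

    # Sketch: run two self-contained flows on two devices in parallel,
    # since flows can't coordinate with each other from within Maestro.
    import subprocess
    import threading

    def run_on(device: str, flow: str) -> None:
        # Sketch ignores the return code; a real script would collect it.
        subprocess.run(["maestro", "--device", device, "test", flow])

    caller = threading.Thread(target=run_on, args=("emulator-5554", "start_call.yaml"))
    callee = threading.Thread(target=run_on, args=("emulator-5556", "answer_call.yaml"))
    caller.start()
    callee.start()
    caller.join()
    callee.join()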
I've used Maestro in the past and liked it, very easy to get started and add decent coverage quickly.
Just wanted to share a project that I'm keeping a close eye on. I haven't actually used it yet but hoping to do so soon:
https://github.com/takahirom/arbigent
The newer UI testing tools like mobileboost and QA buddy all support using vision language models and natural language to make testing easier. Do you plan to add support for that?
Main difference is that Maestro takes a reliable-by-default approach. We hear plenty of stories of folks exploring tools like the ones you mentioned, then ultimately coming back to Maestro due to reliability/reproducibility issues, which are non-negotiable when it comes to end-to-end testing.
In my limited experience, many big companies are heavily invested in Appium, which was the only viable solution x years ago, and keep clinging to it.
Also Maestro may not be flexible/hackable enough for some of the things they do with Appium. But in the long term I think everyone would benefit if Maestro became the go-to UI testing tool, the way Docker became the go-to tool for containerization.
> The quality of most mobile apps sucks so not sure why mobile testing is not mainstream?
I think it's actually: mobile testing is not mainstream, so most mobile apps suck, haha.
btw, I think that most mobile apps actually make no sense at all (many are just stupid CRUDs), and should be web apps. But that'd be a whole different rant :)
When testing mobile apps, how do you manage the data at the backend? I.e., how do you ensure that the data you see in the app is the same every time and that actions during one test do not affect the data for the next test?
When testing the backend in frameworks such as Rails, this is taken care of by seed data and DB transactions.
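The closest analogue I've seen suggested is a test-only reset/seed endpoint that the test runner hits before each flow, mimicking what seed data + transactions give you in Rails. A hedged sketch (the endpoint, port, and flow files are all hypothetical):

    # Sketch: reset backend state via a test-only endpoint before each flow.
    import subprocess
    import urllib.request

    def reset_backend() -> None:
        req = urllib.request.Request("http://localhost:3000/test/reset", method="POST")
        urllib.request.urlopen(req)

    for flow in ["signup.yaml", "checkout.yaml"]:  # hypothetical flows
        reset_backend()
        subprocess.run(["maestro", "test", flow], check=True)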
I've worked in this niche for a very long time, and I seriously need to be able to use a normal programming language. A lot of test tools need to be part of a larger workflow; if this were good ole NodeJS I could use some other tricks, for example intercepting network requests, custom logic in JavaScript, etc.
Aw gawd no. Why do test framework authors repeatedly think this is a good idea? Everywhere I've worked that has embraced such a framework (often Robot), the suites always eventually outgrow the capabilities of whatever DSL the framework provides.
Even with escape hatches into an established language, you turn the majority of test editing into a second-class activity, because there is no chance that your "test-IDE" will beat IntelliJ or VSCode. Developers don't want to touch such test code, and testers do the lazy thing and copy-paste instead of building appropriate fixtures. Do you really want to relearn how to define a constant in the flavor-of-the-month test DSL, vs just doing it the way you always do in TS?
When you see a YAML file that resembles a list of "steps", it's not declarative anymore; it's a crippled imperative language in disguise.