This is how we do beta testing at Mammoth-AI
We opted to start beta testing early (from the very first beta) so that we could use the feedback to set priorities. For our own beta testing, we enlisted roughly 20 people who agreed to try out the new grid and comment on their experience.
I wasn’t convinced this would be useful at first. What could be tested at this stage? What would the beta testers make of this relatively slim version? Would they dismiss it as ineffective and lose interest? Was it realistic to expect any kind of response?
Despite my initial reservations, the developers insisted on moving forward, and I’m glad I agreed. The feedback was excellent material!
Here’s how we go about it:
1. Before you start – Three decisions to guide your planning
- a) Decide what you want to accomplish. Make sure you know what information matters and how you will use it. If you don’t want it, don’t ask for it!
- b) Choose a method for gathering feedback. Be prepared to manage whatever comes your way and to respond to everyone who asks a question or sends a request or suggestion.
Use a tool that lets you collect input sent to a dedicated e-mail address and immediately turn it into requirements or defects. I find this very useful because you can compare the new input against your existing requirements.
- c) Decide how you will handle the input. Beta testers expect you to keep your promise and incorporate their suggestions into your work; otherwise they will feel cheated out of their time and effort. Stick to your plan: take action that serves your objectives, and resist the temptation to shift the goalposts simply because you’d like to head in a different direction.
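The feedback-routing idea from step 1b can be sketched as a small script. Everything here is hypothetical (the `FeedbackItem` type, the keyword rule, and the message format are invented for illustration), not Mammoth-AI’s actual tooling:

```python
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    sender: str
    subject: str
    body: str
    kind: str  # "requirement" or "defect"

# Hypothetical rule: messages mentioning failure words become defects,
# everything else becomes a requirement (feature request).
DEFECT_WORDS = ("bug", "crash", "error", "broken")

def classify(sender: str, subject: str, body: str) -> FeedbackItem:
    text = f"{subject} {body}".lower()
    kind = "defect" if any(w in text for w in DEFECT_WORDS) else "requirement"
    return FeedbackItem(sender, subject, body, kind)

# Stand-in for messages pulled from the dedicated feedback address.
inbox = [
    ("ana@example.com", "Grid idea", "Could the grid remember column widths?"),
    ("ben@example.com", "Crash on save", "The grid crashes when I save a filter."),
]

items = [classify(*msg) for msg in inbox]
for item in items:
    print(item.kind, "-", item.subject)
```

A real setup would read the mailbox via IMAP or a helpdesk API and file each item into the requirements tool, but the shape of the flow — collect, classify, compare against existing requirements — is the same.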
2. Selection of participants
To serve a variety of business demands, we wanted a mix of beta testers in various job categories (testers, test leads, product owners, and other roles).
We chose roughly 20 “new” and “old” clients: the old ones may have grown accustomed to the Mammoth-AI way of doing things, and we also wanted fresh user impressions. We looked for users across industries (government, industry, and services) and in both large and small businesses.
3. Communication with customers
We decided to e-mail people who fit our criteria, asking whether they were interested in working with us on this. They certainly were! The opportunity to contribute and have an influence drew them in.
In addition to sending invites to a restricted group of users, we announced on our website that we were looking for beta testers and that anyone interested could send a note of interest by e-mail.
All participants received a confirmation e-mail after signing up, outlining what to expect during the beta testing. We included a screenshot in the e-mail showing how to access the beta and how the grid would appear on screen.
A word of caution regarding proposing new features: don’t overpromise. It’s critical that beta testers have realistic expectations; otherwise, you’ll have dissatisfied people waiting for features that never made it into the release, and they’ll lose faith in you.
There were a few questions/issues in the e-mail that we wanted them to pay special attention to. We supplied instructions on how to send us feedback as well as contact information in case they had any additional questions for the team.
4. Collecting feedback
We chose to treat the input as a set of requirements. We store them in Mammoth-AI so that we can compare them with the requirements we already have. The feedback can then be used both to write new requirements and to adjust the backlog’s priority order.
Sooner or later you will have to decide what to build. When all of your feedback is in one place, you can handle that decision-making throughout and after beta testing in a more efficient, effective, and comprehensive manner.
5. How to prioritize feedback
Maybe you don’t think a few of the features are that vital, but everyone wants to use and test them. To help you compare features and decide how to prioritize them, Mammoth-AI lets you rapidly classify and filter features by “Business value” and “Cost.”
It’s critical to keep the developers informed about any modifications once you’ve created a list of prioritized requirements and begun implementing them. If the order changes too frequently, it can mean extra effort as well as higher time and financial costs. At this point, always ask yourself whether changing the plan is worthwhile.
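As a rough illustration of the “Business value” versus “Cost” comparison, here is a minimal sketch that ranks features by their value-to-cost ratio. The feature names and scores are invented, and Mammoth-AI’s actual classification and filtering may work quite differently:

```python
# Each feature carries a hypothetical "business value" and "cost" score (1-5).
features = [
    {"name": "Column freeze",  "value": 5, "cost": 2},
    {"name": "Inline editing", "value": 4, "cost": 4},
    {"name": "Dark theme",     "value": 2, "cost": 1},
]

# Filter out anything below a minimum value, then rank by value-to-cost
# ratio, highest first: cheap high-value items float to the top.
ranked = sorted(
    (f for f in features if f["value"] >= 2),
    key=lambda f: f["value"] / f["cost"],
    reverse=True,
)

for f in ranked:
    print(f["name"], round(f["value"] / f["cost"], 2))
```

With these made-up scores, “Column freeze” (ratio 2.5) outranks “Dark theme” (2.0) and “Inline editing” (1.0); a feature everyone asks to test would earn a higher value score and climb the list accordingly.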
6. Release-handling and communication
Return to the testers before the release to remind them of how valuable their feedback was and how it helped you in specific areas.
Before the next release is out, let them know what they will be able to accomplish with it. It’s only common courtesy, and it’ll add to the pleasure of being a part of a beta testing group. If you treat your recruits well, they will be more willing to assist you in the future.
After the new release ships, remember to thank everyone again and encourage people to keep sending in their feedback.
Conclusion – Back to the grid
When it comes to loops, we’re not done yet! Our team is currently collecting data from the ongoing beta test, and once that is over, the cycle will begin all over again for the next release, and the release after that.
The goal, as usual, is to provide a solution that truly serves the users’ needs in every manner imaginable.
Keep an eye out for further news about our new grid. We’re close to an official announcement and release date!
For more info: https://mammoth-ai.com/testing-services/