Collecting feedback from mobile game beta testers


Ok, so you've almost finished your game and want to take it for a spin by distributing a beta version. Let's assume that you're quite satisfied with the design and functionality, and that you have also managed to get a solid number of testers interested in your game.

That's awesome, but a few big questions still remain unanswered:
  • When are you planning to collect feedback from your testers?
  • How are you going to collect it?
  • What kind of feedback and statistics do you want to collect?
These are not easy questions to answer, but if you want to get the most out of your beta testers you need to think about them. Hard.

I don't have the knowledge or experience to say what works best, what you should/must do, etcetera. But I did think about this a lot while planning the beta testing for my iOS game Ice Trap recently, and I'll try to explain what I did and why. Let's get started.


When to collect feedback

When I discuss the issue of when to collect feedback, I assume that you have neither the time to interview each tester in person nor the resources to invite them to some kind of focus group to observe their behavior. Collecting feedback must therefore be done asynchronously with minimal time spent. Basically, it all boils down to the following two alternatives:
  • Ask testers to provide feedback after they're done testing, for example using email or a web survey.
  • Allow testers to provide feedback while actually playing your game. A lot of this type of feedback can be completely invisible to the testers, but you may also want to collect some feedback that requires additional interaction.
I guess you already realize that I strongly favor the second approach, although a combination of both alternatives is probably the best.

What's so good about collecting in-game feedback? Well, how about:
  • Testers are already engaged in your game. They don't have to switch context to provide feedback, remember details, take notes etcetera. Feedback is collected when things actually happen!
  • Testers' opinions might change during the test sessions. What if a tester really loves your game 30 minutes in, but then plays it another 3 hours and grows tired of it? What feedback will you get if you wait until afterwards to ask?
  • Collecting good feedback after the testing period is difficult. Testers might forget what they did, details get lost, they don't have time to give extensive answers, or they might not even answer your email/survey at all even though they spent a good amount of time testing your game! Ouch, what a waste!
So my recommendation for when to collect feedback is: Focus on in-game feedback, but you can and should also follow up with an email/survey after the testing period is over.

How to collect feedback

There's a million options for how to collect your feedback, and I won't go into detail about any of them. This is very much a matter of personal opinion, your choice of development platform/language, and which tools are most accessible to you. As long as you can gather and analyze the feedback you need, you should be good to go.

In my case, I chose to collect all in-game feedback using both GameAnalytics and Flurry Analytics. The reason for this is that Ice Trap is built in Corona, and there are free plugins to use for GameAnalytics and Flurry. The two tools shine in different areas of statistics, which is why I opted to use both in parallel.
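
To give a rough idea of what that looks like in practice, here is a minimal sketch of a wrapper that fans a single tracking call out to both services. The plugin module names and function signatures below are written from memory and should be treated as assumptions, so double-check them against the GameAnalytics and Flurry plugin documentation (plugin initialization with your API keys is also omitted):

    -- analytics.lua: thin wrapper so the rest of the game only ever calls one function.
    -- NOTE: the plugin module names and call signatures below are assumptions;
    -- verify them against the docs for your version of the plugins.
    local gameanalytics = require( "plugin.gameanalytics" )    -- assumed module name
    local flurry = require( "plugin.flurry.analytics" )        -- assumed module name

    local M = {}

    -- Track one named event with an optional table of string parameters.
    function M.track( eventName, params )
        -- GameAnalytics design events use a colon-separated event id (assumed call)
        gameanalytics.addDesignEvent( { eventId = eventName } )
        -- Flurry takes an event name plus a flat table of parameters (assumed call)
        flurry.logEvent( eventName, params )
    end

    return M

With a wrapper like this, the rest of the game never talks to the analytics plugins directly, which makes it easy to add, swap or drop a provider later on.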

What feedback to collect

This is maybe the most important part, and definitely the one that's hardest to get right. Getting your testers to answer some questions is of course always a good thing, but how valuable the feedback actually is depends hugely on how well you have designed your questions.

To start with, I'd like to divide the feedback you can collect into two categories:
  • Shallow feedback: Tells you a tester's opinions on something, but you can't really do anything with it.
  • Deep feedback: Tells you not only a tester's opinion, but also gives you a clear indication of any changes you may need to make to improve your game.

Shallow and deep are just terms that I came up with myself while writing this blog post. There are probably more scientific terms for this, but I don't know about them so let's just stick to shallow and deep for now. :-)

Example of shallow feedback: You ask your testers some kind of general question, such as rating your game 1-5 stars. Whatever they answer, you can only use the answers for statistics. For example, you will get to know that 60% rated the game 4-5 stars. But you'll have no clue why those 60% liked your game, or, more importantly, why the other 40% don't seem to like it that much.

Example of deep feedback: You ask your testers a much more specific question about what they like or don't like about your game. For example, you can ask them what types of improvements they would most like to see and let them select from a number of options such as graphics, sound, mechanics, tempo, etcetera. Great, now you know a whole lot better how your game is perceived and what to do about it.

As you can tell, it's crucial that you collect as much deep feedback as possible. But to gather deep feedback, you probably need to ask some shallow questions first. Given the example above, you can start by asking testers to rate your game 1-5 stars to get some shallow feedback. Once you know whether a tester likes your game or not, you can ask a relevant follow-up question to find out more specifically what is good about it or what needs improvement.
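
To make that flow concrete, here is a tiny sketch of how a shallow star rating can decide which deeper follow-up question to show. All names and question texts are made up for illustration, not Ice Trap's actual implementation:

    -- Deep follow-up questions, keyed by how the shallow 1-5 star rating turned out.
    local followUps = {
        low = {
            question = "What would improve the game the most?",
            answers  = { "Graphics", "Sound", "Mechanics", "Tempo" },
        },
        high = {
            question = "What do you like the most about the game?",
            answers  = { "Graphics", "Sound", "Mechanics", "Tempo" },
        },
    }

    -- Pick the deep follow-up based on the shallow rating.
    local function followUpFor( stars )
        if stars <= 3 then
            return followUps.low    -- lukewarm testers: ask what to improve
        else
            return followUps.high   -- happy testers: ask what they like
        end
    end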


Collecting in-game feedback

When it comes to in-game feedback, another important thing to consider is how much feedback to collect. Too few questions won't give you enough information to work with, so let's try to avoid that. But too many questions, on the other hand, might annoy testers and hurt their user experience, so we don't want that either. If testers start answering "whatever" to your questions because they get tired of the endless popup prompts, then you're in big trouble: they will give you feedback that can't be trusted, and you probably won't even realize it! I guess it's all about finding that sweet spot, but as a general rule of thumb I would say that having too few questions is better than having too many. Prefer quality over quantity.

Prompting testers with questions where they can choose from a number of predefined answers will get you a long way, but in my opinion you should also always provide a way to enter free-text feedback. This is to make sure that testers always have a chance to speak their mind and give you detailed information that you wouldn't be able to collect otherwise.
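
As a rough sketch of what this can look like in a Corona project: a native alert handles the quick multiple-choice part, and a plain text box gives testers the option to type a comment if they feel like it. The question text and the sendFeedback helper are placeholders:

    -- Multiple-choice prompt: one tap to answer.
    local function askImprovementQuestion()
        native.showAlert(
            "Quick question",
            "What would you most like to see improved?",
            { "Graphics", "Sound", "Mechanics", "Nothing really" },
            function( event )
                if event.action == "clicked" then
                    sendFeedback( "improvement_choice", event.index )   -- placeholder helper
                end
            end
        )
    end

    -- Optional free-text comment box, shown only if the tester asks for it.
    local function showCommentBox()
        local box = native.newTextBox( display.contentCenterX, display.contentCenterY,
                                       display.contentWidth - 40, 120 )
        box.isEditable = true
        -- Read box.text and pass it to sendFeedback() when the tester taps a "Done" button.
    end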

A few other things to keep in mind when designing questions for your feedback prompts are:
  • It should be quick and easy to answer the questions. Keep texts short, and limit the number of answers to choose from to a minimum.
  • Predefined answers to multi-choice questions should not overlap each other. If you find yourself having problems with this, then maybe you should split your question into several questions instead.
  • Don't force testers to enter free-text answers. Instead, encourage them to.
  • Don't prompt testers too often with multi-question prompts. Prompts that only require a single tap to answer a question can be shown more often.
  • Make sure to gather a lot of hidden feedback/statistics to track the testers' progress in the game (see the sketch after this list).
  • Testing a game should be mostly about playing it, and then answering some questions every now and then. Not the other way around...
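
The hidden feedback mentioned above requires no UI at all; it is just analytics events fired at key moments. The event names and fields here are illustrative:

    -- Invisible progress tracking: no prompts, just events.
    local analytics = require( "analytics" )    -- the wrapper sketched earlier

    local function onLevelStart( levelId )
        analytics.track( "level:start:" .. levelId )
    end

    local function onLevelFail( levelId, attempt )
        analytics.track( "level:fail:" .. levelId, { attempt = tostring( attempt ) } )
    end

    local function onLevelComplete( levelId, attempt, seconds )
        analytics.track( "level:complete:" .. levelId, {
            attempt = tostring( attempt ),
            seconds = tostring( seconds ),
        } )
    end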

An example: Ice Trap in-game feedback

For the beta testing of Ice Trap I decided to only show my feedback prompts after a level has been completed. This way I don't have to interrupt the game flow during the normal gameplay, and I can catch the testers while they're having a natural break from the game.

Waiting for players to accomplish something before prompting for feedback is a well-known technique for getting a better response rate and more valuable feedback. Think about it. If you're playing a game, you probably don't want to be bothered with "stupid questions" when you've just failed the same level for the tenth time in a row...
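
In code this simply means that prompts are only ever considered in the level-complete handler, never on a failure. The shouldShowPrompt and showNextPrompt names are placeholders for whatever scheduling logic you use (mine is sketched a bit further down):

    -- Called when the player completes a level; never called on failure,
    -- so feedback prompts can only appear at this natural break.
    local function onLevelCompleted( levelNumber )
        -- ...award stars, save progress, etcetera...

        if shouldShowPrompt( levelNumber ) then        -- placeholder: is a prompt due at all?
            -- Small delay so the "level complete" screen is visible first.
            timer.performWithDelay( 800, function()
                showNextPrompt( levelNumber )          -- placeholder: which prompt to show
            end )
        end
    end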

My setup of in-game prompts for Ice Trap was this (a rough sketch of the scheduling logic follows the list):
  • Rate level - Ask testers to rate the level that was just completed. A single tap to rate it is all that is required. No follow-up questions.
  • First impression - Ask testers how the start of the game was perceived, how easy it was to understand the mechanics etcetera. Shown only once, after completing just a few basic levels.
  • Rate game - Ask testers to rate the game 1-5 stars. Depending on the answer there will be different follow-up questions to collect deeper feedback. Shown only once, rather early in the game, in the hope that as many testers as possible will still be playing at that point.
  • Opinion check - For the testers who keep playing for a while, I want to know if they change their minds about the game at some point. Therefore I show a yes/no question on a regular basis, and if a tester answers that their opinion has changed, they get the chance to rate the game once again.
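
Here is the rough scheduling sketch promised above. The level thresholds and intervals are illustrative rather than Ice Trap's exact values, and showPrompt() stands in for whatever actually displays a prompt:

    -- Decide which prompt to show after a completed level.
    -- Thresholds are illustrative; the real values may differ.
    local state = { firstImpressionShown = false, gameRated = false, lastOpinionCheck = 0 }

    local function showNextPrompt( levelNumber )
        if levelNumber >= 3 and not state.firstImpressionShown then
            state.firstImpressionShown = true
            showPrompt( "firstImpression" )     -- shown only once, early in the game
        elseif levelNumber >= 8 and not state.gameRated then
            state.gameRated = true
            showPrompt( "rateGame" )            -- 1-5 stars, with rating-dependent follow-ups
        elseif state.gameRated and levelNumber - state.lastOpinionCheck >= 10 then
            state.lastOpinionCheck = levelNumber
            showPrompt( "opinionCheck" )        -- yes/no: has your opinion changed?
        else
            showPrompt( "rateLevel" )           -- single-tap rating of the completed level
        end
    end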

Players are asked to quick-rate each completed level. This is an example of what I call shallow feedback.


In a flow chart, the overview of prompts looks like this:


And here's a more detailed look at the flowchart for the "Rate Game" prompt:




Below are two screenshots from the Rate Game prompt:

Players are asked to rate the game 1-5 stars, just like in the App Store. This is once again shallow feedback.

After players have given the game a basic rating, we'll ask some follow-up questions. In this example the player rated the game 3 stars and is therefore asked how the game might be improved. This gives us a little deeper feedback, and a chance to actually take appropriate actions. 

Follow-up survey

Like I said at the beginning of this text, I strongly believe that it's crucial to collect as much in-game feedback as possible. That said, I also recommend that you always send out a follow-up survey to your testers once you feel that you've collected a decent amount of in-game feedback.

If you have designed your follow-up survey in a good and thoughtful way, the responses you receive can be invaluable to you since you will get a much better understanding of how each individual player perceives your game.

I won't go into much detail about how to design the questions for your follow-up survey, but I'll try to give you a few pointers that I followed myself when designing and sending out the follow-up survey for Ice Trap:
  • Ask a few things about the player, not just about the game. Get to know the player!
  • Ask relevant questions about the game that will provide you with as much deep feedback as possible. Sounds obvious, but isn't at all easy to get right. Think long and hard about what is most important for you to find out and design your questions accordingly.
  • Keep the number of questions down to a minimum to increase the chance of getting high-quality answers. Avoid overlapping questions. When you think you're done with your questions, review them once again. You'll most likely find questions that more or less overlap each other, or questions that aren't really necessary to ask at all.
  • Stick to mostly multiple-choice questions since they are a lot quicker and easier to answer. A free-text question or two at the end of the survey is enough.
  • Present all questions to the user at once, or at least show the current progress. Otherwise there's a big chance that users will drop out in the middle of the survey if they feel it's taking too long.
  • The order of the questions matters a lot. There should be a natural flow where the questions feel somewhat connected. The first questions must be very easy to answer to get people started and not scare anybody away. A good way to accomplish this is to place the questions about the players themselves first.
  • Try to find the perfect time to send out the survey. If you send it too early, people won't have played your game enough to give you the feedback you're hoping for. Wait too long and you run the risk of people forgetting how they experienced your game, or they will have lost interest and won't complete the survey at all.

Outcome of the Ice Trap iOS beta test

As I'm writing this the beta testing period for Ice Trap has been going on for about four weeks, and these are my results and observations.

In-game feedback

Wow! The in-game feedback prompts worked out extraordinarily well, and I'm very happy that I decided to implement them even though it took quite a lot of time. It was definitely time well spent! A few things that I learnt by collecting in-game feedback:

  • The game was generally speaking very well received by most players. That's just completely awesome to hear, and something that I wouldn't have known without the beta test feedback. Now I can be fairly sure that the game is actually pretty good, even if I know that I still need to improve and polish certain parts before release.
  • Almost nobody had anything negative to say about the graphics and the music. Great to know that I probably don't need to spend more time polishing these parts.
  • The game is considered a little too difficult by some players. I'll take that into consideration and adjust the difficulty curve to make it slightly easier, at least in the beginning.
  • Some players still find the first level(s) a little too complicated. I thought I had made them really simple already, but apparently that wasn't the case, so I guess it's best to redesign them once again.
  • I got a very good indication of which levels are well designed and fun to play, which need to be redesigned, and which to cut completely from the final game.
This type of feedback is extremely valuable, and it makes many of my future decisions about the game so much easier.

Follow-up survey

I'm quite happy with the results of the follow-up survey as well, even though there wasn't the same wow factor as for the in-game feedback.

The most positive thing about the survey was that almost all of the responses were of high quality. Some responses were very detailed and pointed out weaknesses of the game that I would probably never have identified myself. I also feel more confident about which parts of the game are already good enough, and which parts need more work to be perfected before release.

The biggest problem was, just as I expected, getting people to complete the survey at all. I sent it out about a week ago, and despite sending a reminder a few days later, only about 25% of the people who played the game have completed the survey so far. I have to admit that 25% is even lower than what I was hoping for, but I can't see what I should have done differently to increase the response rate.

Conclusion

If I ever had any doubts about whether it's worth running a beta test period, they're all gone now. I have received tons of valuable feedback, mainly from the in-game prompts but also from the follow-up survey. I have learnt about different strengths and weaknesses of the game, and maybe most importantly received confirmation that Ice Trap is actually a good puzzle game that's fun to play! That has given me a massive energy boost to try and take it to the finish line as soon as possible.

So if you're in the same position I was in a couple of months ago, contemplating whether to perform a beta test or not, I have only one thing to say to you: Do it!


