Thursday, 18 December 2014

Final recap - The good, the bad, and the ugly: another big wall of text

It's been a while since our previous recap, and with the final presentation tomorrow, it's time to review our work: what went well, what we could and maybe should have done differently, and what we most definitely shouldn't have done.

In this blogpost I will mainly focus on the reports we wrote, and won't go into the details of the implementation and art; those recaps will be covered in separate blogposts.

Let's start off by reviewing the plans we had at the end of our previous recap, then move on to how our latest reports fit into those plans. Finally, we will go over what we have learned, what we would have done differently, and which things might be touched upon in the future.

The rough roadmap of our previous recap

For reference, our previous recap can be found here.

After going over our first two reports, we broke down casual games into three subdomains: core game mechanics, interaction mechanics, and aesthetics. These would act as our framework for positioning our evaluations.

We left you, the reader, last time just before we actually released the game into the wild. At that point we wanted to implement analytics, and we also wanted to improve the learnability and the gameplay mechanics. We had established that the learnability at the time could be improved, which became our next focus point.

We also mentioned the following longer-term areas that could be improved.

  • Evaluation and tweaking of the core game mechanics. 
  • Experimentation with non-tap game controls on touch-screen devices. These could be for example swiping and instant lane switching. 
  • Several proper standardized questionnaires related to the current state of our game.

So let's take a look at what we have improved and evaluated since that last wall of text:

Evaluations

Evaluation 3 


Our first goal was to improve the learnability.
The main problems were that players did not understand how to react to the big and small dinosaurs, and that the old tutorial screen did not explain the multiplier.

We theorized that this could be accomplished by improving the game's simple tutorial screen, which would explain the different mechanics to the new player. We also improved the visual feedback players get when eating a small dinosaur, by adding a small sprite above the head of our top-hat dinosaur and featuring the same sprite next to the score.

The tutorial was changed from:

to:



We had 18 people evaluate these changes to determine whether the controls and the gameplay were clearer now. The new tutorial seemed to properly explain the entities (the small and big dinosaurs, etc.), but the multiplier still wasn't clear to most participants.

Thus we decided that the multiplier would be something to improve in a next evaluation.

Furthermore, several people commented on the control layout, suggesting that other controls that did not involve tapping, such as swiping, might be more intuitive and perform better. Since we had not yet tested these layouts and we wanted to be able to properly determine the best control scheme, we decided that a next evaluation should be spent on this as well.

Evaluation 4


To end the discussion about the best control scheme on touch devices once and for all, one final evaluation covering a total of six possible layouts was held.


The control schemes:
  • left - up; right - down
  • top - up; bottom - down
  • left - down; right - up
  • dragging
  • swiping
  • tapping under the playable character - down; tapping above the playable character - up
We had eight people evaluate the different control schemes. For each control scheme, they:
  • got an explanation of the control scheme
  • played the game several times with the control scheme
  • answered a couple of questions about the control scheme.
We found that dragging and swiping performed poorly, and that the originally proposed control scheme performed best according to our players. A close second was the top/bottom tapping scheme. We thus decided to keep these two schemes in the game.
Unfortunately we only tested on a smartphone, so we were not able to determine whether screen size has any effect on the preferred control scheme.
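
To illustrate how the two schemes we kept differ in code, here is a minimal LibGDX sketch of tap-zone input handling; the Player interface and its moveUp/moveDown methods are hypothetical stand-ins for our actual player object, so treat this as a sketch rather than our implementation:

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.InputAdapter;

public class TapControls extends InputAdapter {
    public enum Scheme { LEFT_RIGHT, TOP_BOTTOM }

    // Hypothetical stand-in for our player object.
    public interface Player {
        void moveUp();
        void moveDown();
    }

    private final Scheme scheme;
    private final Player player;

    public TapControls(Scheme scheme, Player player) {
        this.scheme = scheme;
        this.player = player;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        boolean up;
        if (scheme == Scheme.LEFT_RIGHT) {
            up = screenX < Gdx.graphics.getWidth() / 2;  // left half = lane up
        } else {
            // LibGDX reports touch y from the top of the screen
            up = screenY < Gdx.graphics.getHeight() / 2; // top half = lane up
        }
        if (up) player.moveUp(); else player.moveDown();
        return true;
    }
}
```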

Evaluation 5


I mentioned earlier that at the end of the last recap we wanted to release our game into the wild. We added analytics to both the Android version and the web version, and then released the game officially into the wild on the 27th of November, with a release on the Play Store and an announcement on Facebook.
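For those curious how such tracking looks in code, here is roughly how gameplay events can be sent with the Google Analytics Android SDK we used at the time; the tracking ID and the category/action names below are placeholders, not our actual configuration:

```java
import android.content.Context;
import com.google.android.gms.analytics.GoogleAnalytics;
import com.google.android.gms.analytics.HitBuilders;
import com.google.android.gms.analytics.Tracker;

public class Analytics {
    private final Tracker tracker;

    public Analytics(Context context) {
        // "UA-XXXXXXXX-1" is a placeholder tracking ID
        tracker = GoogleAnalytics.getInstance(context).newTracker("UA-XXXXXXXX-1");
    }

    // Count a retry as a simple event
    public void logRetry() {
        tracker.send(new HitBuilders.EventBuilder()
                .setCategory("Gameplay")
                .setAction("Retry")
                .build());
    }

    // Record how long a game session lasted as a timing hit
    public void logSessionLength(long millis) {
        tracker.send(new HitBuilders.TimingBuilder()
                .setCategory("Gameplay")
                .setVariable("SessionLength")
                .setValue(millis)
                .build());
    }
}
```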

The results of this 'release in the wild' were analysed on the 7th of December in report 5. In hindsight, we probably should have had a clearer intention of what exactly we wanted to find out in this evaluation; because we did not define our objective that precisely, it was somewhat difficult to properly analyse the data and form a conclusion based on the analysis.

We tried to derive the satisfaction with our game from the length of the game sessions and the number of retries in each session. Few to no retries would indicate a lack of interest in the provided gameplay, suggesting that our game might be boring or too frustrating. Extremely short game sessions would indicate that our game might be too difficult; on the other hand, if we only saw really long sessions, the game might be too easy.

One problem we had was the amount of data. Unfortunately, creating virality is hard (more on that later), so we did not have as many results as we would have liked. Furthermore, the results we did have might have been skewed by ourselves and close relatives, who played the game significantly more than random players with whom we had no personal connection.

We did conclude, however, that the game is either too difficult or not entertaining enough to keep playing. This was further underpinned by comments during previous evaluations that the game's difficulty increased too quickly.

Thus we decided to make the game somewhat easier by limiting the spawn rates and slightly reducing the maximum speed.
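
To give an idea of what limiting the spawn rate and capping the speed looks like in practice, here is a small sketch; the numbers are made up for illustration and differ from our actual tuning:

```java
// Illustrative difficulty curve with explicit caps; all constants are placeholders.
public class Difficulty {
    private static final float MAX_SPEED = 520f;           // world units per second
    private static final float MIN_SPAWN_INTERVAL = 0.6f;  // seconds between spawns

    // Game speed grows with elapsed time but is capped
    public static float speedAt(float elapsedSeconds) {
        return Math.min(200f + 8f * elapsedSeconds, MAX_SPEED);
    }

    // Spawn interval shrinks with elapsed time but never drops below the floor
    public static float spawnIntervalAt(float elapsedSeconds) {
        return Math.max(1.5f - 0.01f * elapsedSeconds, MIN_SPAWN_INTERVAL);
    }
}
```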

Evaluation 6


Our final evaluation focussed on evaluating the current state of our gameplay with the standardised Game Review Questionnaire, which can be found in the appendix of this report.

We wanted to know the current state of our game and, more specifically, whether our game was actually fun, especially since we had tweaked the difficulty somewhat. The Game Review Questionnaire has several broad, multiple-choice questions in which participants can state whether they think aspects like aesthetics, gameplay, etc. are okay, and what should be improved. Overall we got decent scores, though we also found out that our game could be improved in both length/content and fun.

In a way this was somewhat expected: the areas we focussed on previously scored noticeably higher than the aspects we haven't focussed on yet. This in itself is a good sign, meaning that the time spent working on those areas was not spent in vain.

We also somewhat expected the lower score in content/length. Originally we thought of different modes that could be included in DinoTopHat, for example a story mode with levels and such. At this moment the game only has one game mode, the endless mode, and while it is fun, it is not too addictive, and I wouldn't be surprised if people lost interest after playing it a couple of times. The achievements and highscores might add a completionist and competitive element that people enjoy, but to say this would be enough to create good replayability would be incorrect, in my opinion.

Judging from this evaluation, future work on the game should thus focus on adding more content and possibly balancing the game further, such that players enjoy a good mix of satisfaction and difficulty.

Summary of the reports

As with any artistic endeavour, one could say that such a project is never finished, and clearly we still have many possibilities to explore if we wanted to improve this game further. Whether that would be worth the time is a different discussion, though.

So far we have built upon each evaluation to give us direction on which aspect of the game to improve next. We started out by determining a good control scheme for our touch version. Interaction with a casual game is the very first thing that leaves an impression on a gamer who has just downloaded it, so we deemed it important to get this right from the beginning.

In the second evaluation we continued gathering insight into users' opinions of our control scheme choice, and started exploring the learnability of our game.

The third evaluation focussed on evaluating our changes to the learnability, putting our conclusions from the second evaluation to the test.

The fourth evaluation was used to make a final decision about the control scheme for touch devices; having tested several different layouts, we are now confident we picked the two layouts that most players can appreciate.

The fifth evaluation focussed on the data gathered from our in-the-wild release, which we used to analyse the difficulty and fun of our game.

And finally, the sixth evaluation focussed on the current state of the gameplay mechanics of the game.

Compare the first minimum viable product with the current game and you'll see a definite improvement.

Which leads us to:

Current state - the good, the bad, and the ugly.

One course, six evaluations, and a lot of development time and caffeinated drinks later, where are we now?
Since evaluation 6 we have added a day and night cycle, which we hypothesised would help give players visual feedback on the multiplier; unfortunately we were not able to properly evaluate this. The final version is playable here.
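As a sketch of the idea, in LibGDX a multiplier-driven day/night tint can be as simple as lerping the sprite batch colour; the colours and names here are illustrative assumptions, not our exact implementation:

```java
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class DayNightTint {
    private static final Color DAY = new Color(1f, 1f, 1f, 1f);
    private static final Color NIGHT = new Color(0.35f, 0.35f, 0.6f, 1f);
    private final Color current = new Color();

    // Tints everything drawn after this call; higher multipliers shift toward night
    public void apply(SpriteBatch batch, int multiplier, int maxMultiplier) {
        float t = Math.min(1f, multiplier / (float) maxMultiplier);
        current.set(DAY).lerp(NIGHT, t);
        batch.setColor(current);
    }
}
```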

So it's time for some conclusions; let's start with the positive:

The good


We have a game, it looks nice, and it is playable. It has Google Analytics. It has a top hat. It is published on the Google Play Store. I think we definitely made something we can be proud of.

Besides the final product, we have learned a lot about evaluations, questionnaires, and the difficulties of designing user experience, which, apart from being the goal of the course, is definitely worth something in future endeavours.

The bad and the ugly

Of course, we could have done a lot better, so let's review the major points for improvement.

Our participants. 

Almost all of our participants were recruited either from the people who spend their hours at the A-building or from people we personally know. The goal of testing with participants is to evaluate your product such that you can reason about how your whole target audience thinks about it. If your participants don't properly represent your target audience, it becomes harder, if not impossible, to properly reason about them.
Our participants were probably not an accurate representation of our target audience, and our conclusions might thus have been somewhat skewed.
Which leads to the next point.

Virality

Virality is hard. Making people play and talk about a casual game is definitely not one of the easiest tasks. We originally hoped that just putting it on the Google Play Store, talking a bit about it, and asking friends to play it would generate enough publicity and interest for it to do well on its own. It didn't. We did try to promote our game a bit: we made a Facebook page and an IMDb page, and created an IndieDB account. We promoted it on Reddit for a bit and talked about it on Facebook. Still it did not pick up. Virality is hard.

If we were to develop a game again, we would need to learn a bit more about social networks and how to properly market a game, such that it goes viral, or at least more viral than it did this time.
I think that if we had thought about a media campaign, identified the major social networks we should target, and prepared enough material to actually post there and keep it fresh, we might have done a lot better. We should have started promoting our game earlier as well; now we only started near the end, wasting precious time in getting the game picked up.

Of course this would have meant a lot of work, time that we instead spent on the reports, questionnaires, and development. Everything has its good and bad points, I guess.

If the game had gone viral, we would probably also have had a better test group, one that properly represents our complete target audience.

Reports

Another thing that hindsight made very clear: it is good to ask the same questions each time you do an evaluation on the same subject, whether through a standardised questionnaire or not. This makes it a lot easier to compare results and actually determine whether you improved the game. I would even say it would be nice to use two sets of questions per evaluation (or possibly two evaluations per cycle): one in which you explore and determine the current state of the feature you want to improve next, and one in which you evaluate whether the proposed changes actually improved your game compared to the previous explore cycle.

If we were to do this again with the information we have now, we would probably also include a standardized questionnaire in more evaluations. This would allow us to track our improvements better. During development we figured we would improve the game first, before doing a more general, standardized questionnaire; we thought our game wasn't ready for such an evaluation. If we had tested more often with such a questionnaire, we could have had some really interesting reports from which to conclude which changes affected the game positively, and which negatively.

Minor other things

If we had spent more time at the beginning properly optimising the way we collected and processed the data from our questionnaires, it would have saved us some time later on.
We probably should have read up on game mechanics theory and casual games as well; this would have allowed us to create a better framework at the beginning and could have guided us when determining which aspect to improve next.

What is left to do?

Now that we have looked at the things we learned during this course and the development of DinoTopHat, let's take one final look at the things we might improve if we were to continue development.

In order to increase both satisfaction and learnability, we hypothesised that giving the player even more visual feedback, in the form of juiciness (in reference to this video), could achieve this. The day and night cycle was one aspect of that juiciness.
Other improvements could be more graphical effects when a player eats a dino or dodges an obstacle or enemy. Improving the sounds and adding visual effects to the multiplier might also help.

To improve the length and content of the game, more modes could be added; for example, a story mode featuring levels and a story arc (possibly involving the top hat) might be a nice addition. Another big feature we wanted to add during the brainstorm phase was evolution: eating a significant number of smaller dinos would make your dinosaur grow and possibly evolve, allowing it to eat even more (smaller) dinosaurs.

Finally, we could also add more social media interaction to the game. This could improve the virality of the game, as well as allow players to interact with friends and strangers, creating a community around DinoTopHat.

Closing thoughts

Overall I think we created a decent casual game, especially considering none of us had any experience developing casual games. Could we have done better? Hell yes, but most projects could be done better given hindsight.

Given the review of our evaluations and the bad-and-ugly points above, I think we have learned a great deal along the way. I personally think that the fact that we would probably change significant parts of our development process, if we were to do it again, is an indicator that we have learned. That is, I think, just as important as our final product, if not more so.

So, finally, a thank you is in order for our professors and the guest speakers of the course Fundamentals of HCI at the KUL, and of course for everybody who has read this blog.

Tomorrow will be our final presentation (which will most likely be uploaded to this blog as well), but content-wise I think this will be the last major update. Thank you for following us in this development process, and we hope to see you again, be it in the flesh or in pixels.

Update:
The link to the presentation can be found here.




Sunday, 14 December 2014

Session blog: NASCOM

In this week's session an employee of NASCOM came to talk about his job and taught us a few important lessons. In this blog we'll recap his talk and tell you what we've learned and remembered.

The first part covered model thinking. This means that humans, when thinking about something, have a certain image or model in mind. For example, when we think about the world map, Europeans picture Europe and Africa in the center, while Americans picture the Pacific Ocean in the center.

This is an important fact when designing: people expect a design to match their expectations. The most interesting part is thus how far we can stretch different designs and explore the boundaries of each design.

The second part described his job at NASCOM: they help companies design and build projects. The biggest problem appears to be that views on certain things (goals, users, ...) are not always shared by all employees. One of the first things he does when assigned to a project is get all the people involved in one room to make sure that all visions are aligned. Communication is a very important factor.

The main things we remember from his talk are: people think in models, the view of a product must be shared among all employees, and communication is very important.

It was very interesting to have someone from NASCOM come and talk about his job. Our education is somewhat related to this topic, and it might become one of our jobs in the future.

Thursday, 27 November 2014

Leaderboards and Achievements

In the past two weeks we have experimented with online leaderboards for the Android version of our casual game. Giving players the chance to see the scores of other people (and their friends) instantly makes a casual game a lot more competitive and fun. Online leaderboards were also requested by multiple test users during our evaluations. In this blog post we will go over the challenges we encountered and the results we now have in our casual game.

Research

It was already decided during the first brainstorm session that leaderboards and achievements were something we really wanted in our game. That's why we started researching our options early on too. The most popular library we encountered was the Google Play Game Services library. However, this required a Google Developer account, and although it was possible to use the account of the KU Leuven HCI department, we decided to look for a free alternative first.

Swarm

The free alternative we found was Swarm. This library was very easy to integrate into our game; after 20 minutes it was already up and running. At first everything seemed to work just fine. However, it soon became clear that there were some problems with this library that started to bother us (and some of our testers). We searched for a way to fix them, but ultimately decided to move on to another leaderboard library.
The main reasons we chose to drop Swarm were:
  • Players didn't stay logged in consistently. Every time the game was booted up, there was a small chance that players had to manually log in again.
  • Sometimes the leaderboard just refused to go online, even when players were connected to the internet.
  • The leaderboard screen could only be shown in portrait mode (vertical), while the rest of the game is in landscape mode (horizontal).
  • Players that beat their highscore while offline, had it overwritten with their previous highscore when they tried to go online.
  • The popup notification shown whenever a player submits a score to Swarm glitched on the phone of one of our testers. It remained visible even after our game had been shut down; the tester had to reboot his phone to make it go away.
  • The look of the leaderboard didn't match the look of our game at all.


Swarm login screen
Swarm leaderboards
Swarm notification glitch

Google Play Game Services

While experimenting with Swarm, we released DinoTopHat on the Google Play Store through the Google Developer account of the KU Leuven HCI department. Since our game was now already published on the Google Play Store, getting started with Google Play Game Services (GPGS) required much less extra effort than before, which is why we decided to have a go at implementing our leaderboards with this library. We also discovered that to use the GPGS library, it is necessary to register at least five in-game achievements, so we decided to make use of this opportunity and implement achievements along with the leaderboard. Just like the Swarm leaderboards, the GPGS leaderboard was pretty straightforward to implement. We added 9 achievements and are very pleased with the result. The leaderboard looks better, and all the other issues we had with Swarm also seem to be fixed.
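
For reference, the core of the integration boils down to two calls. This sketch assumes a GoogleApiClient that has already been set up and connected with the Games API; the leaderboard and achievement IDs are placeholders:

```java
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.games.Games;

public class GpgsServices {
    private final GoogleApiClient apiClient; // assumed built with Games.API and connected

    public GpgsServices(GoogleApiClient apiClient) {
        this.apiClient = apiClient;
    }

    // Push a new score to the online leaderboard
    public void submitScore(long score) {
        if (apiClient.isConnected()) {
            Games.Leaderboards.submitScore(apiClient, "LEADERBOARD_ID", score);
        }
    }

    // Unlock one of the registered achievements
    public void unlockAchievement(String achievementId) {
        if (apiClient.isConnected()) {
            Games.Achievements.unlock(apiClient, achievementId);
        }
    }
}
```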

GPGS login screen

GPGS leaderboards

GPGS achievements

Saturday, 22 November 2014

Planning

With only 4 weeks left on the clock, it's important that we manage our time well. To that end we made a planning, as a lot of work still has to be done. Below you can find our schedule for how we plan to improve and evaluate our game.

This weekend (22/11 - 23/11):
  • Implement extra control schemes (swiping, dragging, tap under/above dino)
  • Implement global high scores + release in the wild (Google Play + Facebook)
  • Improve feedback from Google Analytics
  • Continue drawing parallax effect graphics
  • Finish up the 3rd report on game mechanics

Week (24/11 - 30/11)
  • New evaluation of the controls
  • Make a report of the evaluation (should be finished Wednesday)
  • Continue drawing parallax effect graphics + start drawing of the day/night cycle graphics
Week (1/12 - 7/12)
  • Improve multiplier explanation + rework the tutorial
  • New evaluation of the tutorial and multiplier
  • Make a report of the evaluation (should be finished Wednesday)

Week (8/12 - 14/12)
  • Implement parallax animation and tweak the game speed and spawn algorithms
  • New evaluation of the game tweaks
  • Make a report of the evaluation (should be finished Wednesday)
  • Include a SUS questionnaire in this evaluation
  • Finalize the game

Week (15/12 - 21/12)
  • Prepare presentation of our game

We'll try to get everything done on this list and give a good presentation about our game!

Thursday, 13 November 2014

Report evaluation: Blokoj

This week DinoTopHat joins forces with a fellow student group called "Blokoj". Just like us, they had to create a casual game and keep improving it systematically. Now it's time to collaborate and critically analyze each other's evaluation reports in order to obtain a better end product. Blokoj (and we) should see this as a means of improvement and not as a means to thwart each other's work. Below you'll find a list of good and bad things (according to us) and the things we'll take into consideration for our next evaluation.

Good things:

  • The goal of the evaluation in report 1 is clear.
  • The sections in report 1 stay on the subject of the report and do not wander off.
  • Phenomena are clearly explained, as are the causes of actions.
  • Report 2: the added figures are good; this should be done more often.
  • Report 2: the result section is thoroughly written and the recap of the results is good (way better than in report 1).
  • It was good that you clearly separated the expert from the other users. Even contacting him a second time to gather more background knowledge was good.


Bad things: 

  • A better layout would improve the readability and the separation of sections (e.g. colours, like in the appendix).
  • The amount of "we" is very high in both reports; this should be avoided, as it almost immediately bothered us while reading.
  • The link to the questionnaire in report 1 and the link to the results are broken.
  • The way the results are presented in report 1 (with the counts between brackets) is very unclear. In report 2 this is handled a bit better. Graphs and box plots should be included for more readability.
  • In report 2 it was unclear why you won't test on desktop anymore (this is just a minor comment).
  • In both reports it seems like you want to test a lot of things at the same time; some might even influence one another. A big part of report 2 is the same as report 1.
  • Report 2 didn't seem to add many results on top of report 1 (report 2 was worked out much better, but the end conclusions of both reports seemed almost the same). Wouldn't it have been better to immediately add a tutorial (or multiple tutorials) and test in evaluation 2 which one would be better?
  • Small typo in report 2, page 5, in "praktisch verloop": "Hierna moesten ze enkele vragen te beantwoorden".

Small recap and what to remember:

The first thing we noticed while reading the reports was the amount of "we"; try writing some of these sentences differently. The all-black layout should be made a little more colourful to increase readability (in our opinion). Adding figures and graphs/box plots to the result section would make the conclusion more visible. Adding these features would increase the quality of a report drastically.

Testing a lot of things at the same time can sometimes lead to unclear parts of an evaluation; smaller reports testing only one or two features are key to a clear and well-written report.

The reports of DinoTopHat do not take all of the above into account either. The students of Blokoj might find other things important that we did not notice. Their critical evaluation will be key to improving our reports even further, and we hope that our evaluation can aid their cause.

Wednesday, 12 November 2014

Big wall of text known as the recap of our evaluations

Introduction

Taking a systematic approach to improving our casual game is important, so as not to lose sight of the bigger picture and get caught up polishing some minor feature that users consider unimportant.
Let's start off by recapping our current work and summarizing our findings so far. We will then look at the important aspects of our game and determine its current state. Finally, we look at the roadmap we have planned for our game. This should create a solid foundation on which to polish our game even further!

So what have we done so far? We developed a minimum viable product and did two evaluations of it. In between these evaluations we added some additional content, mainly improving the core game mechanics.

First evaluation

In the first evaluation we explored the possibilities of the touch-screen controls, focusing on determining a "perfect" control scheme. This evaluation was done with a small set of fellow students.
At this point we decided to test two tapping control schemes: one where a left tap moved the dinosaur a lane up and a right tap moved it a lane down, and its inverse.
Our conclusion was that left tap - up, right tap - down felt most natural. Furthermore, people suggested swiping and clicking on lanes as alternative methods for controlling your top-hat dinosaur.
At this point we were somewhat convinced that we were on a good path with the control schemes; however, due to the small number of participants, we deemed it necessary to substantiate these conclusions with a bigger test involving more participants.

Second evaluation

In the second evaluation we got a total of 22 people to answer a questionnaire about our control schemes. We also added a third tapping control scheme, with a tap on the top of the screen moving the dinosaur up and a tap on the bottom of the screen moving it down.
We expected the results to corroborate our first conclusions; however, this was not necessarily the case. While people did seem to have a small preference for the control scheme put forward in the first evaluation, they did not fully dismiss the other control schemes.
Furthermore, a lot of people again suggested swiping and instant lane switching as possible alternative control schemes.
At this point we decided to keep all three control schemes in the game. We will keep the other possibilities in mind while we first focus on other aspects of our game. If time permits, we will do another evaluation that includes the two suggested control schemes.

Furthermore, during the first evaluation we noticed that people found several game mechanics somewhat confusing. At the beginning we assumed that, with the principle of learning by dying, people would understand quickly enough what to do. This was not necessarily the case; whether that was because the game did not give enough visual cues about what was happening, or simply a bad assumption, has not been explored yet. We decided to add a small tutorial screen to the game to improve the learnability, which was evaluated during the second evaluation. We concluded, however, that this did not solve the learnability problems completely, and room for improvement still exists. At this moment players found both the scoring and the dodging of obstacles confusing. Further improvements and evaluations should help reduce this problem.

Simple breakdown of casual games.

Before we move on to a proper analysis of both the current and the future situation, let's first break our game down into several bite-sized chunks that give us handles for evaluating the different parts.

We'll divide the game into three important parts.

Core game mechanics

Core game mechanics break down into rules, schedules, and tokens.


Rules describe what is and isn't allowed in the game. In our case these would be:
  • If you eat a small dino you increase your score.
  • If you run into a big dino you die.
  • If you run into an obstacle you die.
  • You can only move to adjacent lanes.
Schedules describe when events happen:
  • After x time the game speeds up.
  • After x time the multiplier increases.
Tokens describe points, in-game currency, etc.:
  • The score.
  • The amount of eaten small dinosaurs.
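
As a small illustration, the rules and tokens above could translate into code along these lines; the structure and names are hypothetical, not our actual implementation:

```java
public class Rules {
    private int score = 0;        // token: the score
    private int dinosEaten = 0;   // token: the amount of eaten small dinosaurs
    private boolean alive = true;

    // Rule: small dinos score (times the multiplier), everything else kills you
    public void onCollision(boolean isSmallDino, int multiplier) {
        if (isSmallDino) {
            score += multiplier;
            dinosEaten++;
        } else {
            alive = false;        // big dino or obstacle
        }
    }
}
```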

Interaction Mechanics

Interaction mechanics break down into control schemes, interaction, and menu navigation.

Control schemes describe the static mapping from input to in-game actions.

On touch-screen devices:
  • left - up, right - down
  • left - down, right - up
  • top - up, bottom - down
On keyboard devices:
  • 'w' - up, 'a' - down
  • 'up' - up, 'down' - down
Interaction describes how users actually use these control schemes.

Aesthetics

Aesthetics break down into visual aesthetics (menus, game mechanics) and sound (music, interaction).

Aesthetics describe the look and feel of a game.
The visual side includes the menus and the representation of the current game mechanics; the sound side covers the background music and the sounds associated with interaction.

We can evaluate the different aspects of our game according to this three-part breakdown.

Current State

Given this breakdown into three areas, what is the current state of the project?
So far we have evaluated the interaction mechanics of our game on touch-screen devices.

We have also evaluated the learnability of our game-mechanics, and found several problem areas that could benefit from improvement.

Now you could ask: "Why would you evaluate the learnability before evaluating whether people actually like your game and its game mechanics?"
The answer has an element of the chicken and the egg in it. On the one hand, the game mechanics need to be fun before you spend time improving the learnability; on the other hand, can game mechanics truly be fun if they are extremely difficult to learn?

We chose to optimize the learnability first for several reasons.

  • We noticed during the first evaluation that people seemed to enjoy playing the game, and were neither bored nor extremely annoyed. Playing the game ourselves, we came to the same conclusion: the game mechanics are enjoyable.
  • Casual games have a small retention rate: if gamers do not understand the goals of a game, they will most likely just install a different game instead of trying to understand the goals designed by the developers. So it is important to make the game quick to learn.
  • Having people understand the mechanics allows them to give better feedback on the actual (intended) game-mechanics.
Of course, opting for this strategy has drawbacks. Most importantly, you could theoretically waste resources by creating, optimizing, and evaluating the learnability of a game mechanic that later on in the development process gets canned, when the actual game mechanics are evaluated.

However, when improving a game mechanic's learnability takes too long, or users keep finding it hard to understand, it needs to be evaluated whether the game mechanic itself is working. Opting for this strategy thus also acts as a preliminary evaluation of which game mechanics work and which do not.

So far we have found that the learnability of the game could be improved. The areas mentioned most often by users were the multiplier and the dodging of obstacles.

Rough roadmap - Future evaluations and plans.

Our first aim is to release our casual game 'in the wild', which means releasing a web-browser and an Android version of our game and spreading the word about this release on social media. We will not release an iOS version, due to the notoriously slow process of actually getting an app released on the App Store.

Before we can do this release, we want to incorporate some form of analytics. At this moment we are looking at Google Play Game Services, which should allow us to track different kinds of statistics when people play our game.

Furthermore, we wish to work on several of the problems that surfaced in our first learnability evaluation. We identified the main problems to be the scoring and multiplier, and the dodging of obstacles and big dinosaurs. We plan to solve these by improving and possibly redesigning the tutorial. We will also add several visual indicators for the scoring and multiplier. Once implemented, these improvements will need to be evaluated through user testing.

Possible longer-term areas that could be improved and/or evaluated are (in no particular order):
  • Evaluation and tweaking of the core game mechanics.
  • Experimentation with non-tap game controls on touch-screen devices. These could be for example swiping and instant lane switching.
  • Several proper standardized questionnaires related to the current state of our game.
For the longer term, we would like to add evolution/growing as a game mechanic. We also hope to improve the aesthetics of the game; this would include animations for the different dinosaurs and a redesign of the looks of the lanes and the background of the game.

Of course these plans are only a rough roadmap; if we encounter any pressing problems, those will get priority.

TL;DR:

So far we have done two evaluations in which we explored the controls of touch-screen devices and the learnability of our current game. 

At this point we have decided to incorporate three different control scheme layouts, and we will experiment with other schemes as well.

We learned that the learnability of our game could still be improved. We plan on doing this by adding visual elements to the scoring and improving the tutorial.

In the future we would like to evaluate the core game-mechanics, do more testing related to the controls and do several standardized tests to see how our game performs. 

Tuesday, 11 November 2014

Implementation: The road so far

In this blogpost we will look back at the implementation of our digital prototype and how it evolved into what it is today. Now that DinoTopHat has taken its first steps into the world with our release through social media, it seemed like a good idea to recap where we came from, where we are now, and where we want to go with DinoTopHat. That's why we will go over the implementation in this blogpost.

In the first session we decided to look at the LibGDX development framework for the implementation of our game. Since none of us had any experience developing games with LibGDX, we started out by learning how to work with it. One of the first things we did after installing it was implementing the little game described in one of the LibGDX tutorials, a game about catching falling raindrops with a bucket. A screenshot of this game is posted below.
Tutorial Game
This gave us a good view of how to develop games with LibGDX. This game was furthermore easily adjusted into a very early version of a digital prototype for our own game: we changed some sprites, introduced the lanes, altered the controls, and the first prototype was finished. That way we quickly had a game that visually resembled the first concept art we posted on this blog. Our only goal with this version was simply to get the hang of LibGDX and to see if the controls we intended to use were feasible. We can't post an actual screenshot of that version, because we no longer have the code.
The first concept art
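To give an idea of what that adaptation involved, a minimal LibGDX sketch of a lane-based screen in the style of the raindrop tutorial could look like this; the asset name and constants are placeholders, not our actual code:

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class LanePrototype extends ApplicationAdapter {
    private static final int LANES = 3;
    private SpriteBatch batch;
    private Texture dinoTexture;
    private int lane = 1; // current lane, 0 = bottom

    @Override
    public void create() {
        batch = new SpriteBatch();
        dinoTexture = new Texture("dino.png"); // placeholder asset
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        float laneHeight = Gdx.graphics.getHeight() / (float) LANES;
        batch.begin();
        batch.draw(dinoTexture, 50, lane * laneHeight); // draw the dino in its lane
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
        dinoTexture.dispose();
    }
}
```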
In one of the next sessions we started building and evaluating the paper prototype for our casual game. This way we decided which other screens we wanted to put in the game and which features we would add to our first official digital prototype. So we set out to implement all the features we had in mind, and a week later our first official digital prototype was finished. The most important features we added were (with screenshots below):

  • A Home Menu with buttons to play, view the highscores, and mute the game.
  • A Highscores screen displaying the five best scores on your device.
  • Jungle-themed music by SilverPoyozo.
  • Big dinos and palm trees.
  • Increasing speed and spawn rate.
  • Custom sprites, backgrounds, and button textures.
  • A simple tutorial intended to explain the controls.
  • A Game Over screen displaying the current score, the highest score, and a retry button.
  • The player died if he let a small dino escape or if he got hit by a big dino or a tree.
Home Menu
Highscores Screen
Tutorial Screen
Play Screen
Game Over Screen
Since then we have further improved our digital prototype in several ways, into the casual game we have today. Not only the gameplay itself changed, but also the code behind it. We refactored our code to be much more object-oriented, extendable, and readable; this was necessary because we were still building on the code we got from the raindrop LibGDX tutorial, and we wanted to fix that as soon as possible, before it became a bigger problem later in the development process. Features we added since our first official digital prototype are:
  • Updated sprites.
  • Eat and Death sound effects.
  • Improved IO for storing highscores.
  • Better gameplay balance and hitboxes.
  • Score multiplier system (a sketch of this logic follows after the screenshots below):
    • Eating many small dinos in a row increases the multiplier.
    • Missing a small dino resets the multiplier to the base multiplier.
    • Surviving long increases the base multiplier.
    • e.g. eating a small dino while having a multiplier of 3 gives you +3 score instead of +1.
  • Slightly improved tutorial.
  • Multiple control schemes.
Control Selection
Improved Tutorial Screen
New Play Screen
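The multiplier rules listed above translate roughly into the following sketch; the exact values and method names in our game differ, so treat this as illustrative:

```java
public class Multiplier {
    private int baseMultiplier = 1;
    private int multiplier = 1;

    // Returns the score gained: a multiplier of 3 gives +3 instead of +1
    public int onSmallDinoEaten() {
        int gained = multiplier;
        multiplier++;                    // eating small dinos in a row raises it
        return gained;
    }

    public void onSmallDinoMissed() {
        multiplier = baseMultiplier;     // missing resets to the base multiplier
    }

    public void onSurvivalMilestone() {
        baseMultiplier++;                // surviving long raises the base multiplier
        multiplier = Math.max(multiplier, baseMultiplier);
    }
}
```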
This is the version of the game we used to gather results from our first test subjects, and also the version we released through social media. DinoTopHat has already come far, but it still has a long road ahead of it. That, however, is a story for a different blogpost.

Stay tuned for more!