What Happened to Dead Five?
Dead Five was therefore intended to be a 12-month "get rich quick" scheme: we'd build a bunch of hypercasual games, give them slightly more sophisticated treatments and meta-design than was normal for the genre, get a hit and make millions!
But things never quite work out like you expect, huh?
Initially the studio was quite successful. We got a ridiculous number of prototypes out of the door in the first couple of months (so many, in fact, that I struggled to service them all properly), several felt like they had potential, our initial tests were really promising and we started to build up great relationships with the hypercasual publishers. I felt like we were on to a winner.
But then came the challenges.
It would be easy to blame the pandemic, and it's true that shifting a team used to working intensely together, with a lot of collaboration and oversight, to remote-working practices isn't easy. But we did it, handled it better than most, got some cool games built and one even won an award :).
So what were the real problems? I think we made a couple of key mistakes when we were planning Dead Five:
- We totally underestimated how chaotic, fragile and trend-driven the hypercasual scene is. These games make a relatively small amount of money per player on average, and getting CPIs (marketing costs) down to a level where you can turn a game profitable depends on a huge amount of luck. People *think* that they can tell you what works but they're wrong. It's simply a case of building and testing enough stuff until you get that all-important hit.
On top of that, market tests can be unreliable, for example:
When we initially tested "Light the Fires" we did a CTR test which suggested that our CPIs would be around $0.60, and we expected the LTV (average total revenue per player) to be around $0.75-1.00. The project looked like a winner, so we decided to build it out to beta without any further testing, for fear of the game being "emulated".
Beta complete, we tested again, this time with a CPI test. With the same creative the CPI came out at over $2.50! We ran multiple tests through multiple accounts and the results were all over the place. The best we achieved was $1.25, but other tests came back at nearly $4, and there was no clear correlation between creative and results.
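To make the economics concrete, here's a minimal sketch of why that CPI shift killed the project. The figures are the ones quoted above (taking the conservative end of the LTV estimate); the function name is my own:

```python
def profit_per_install(ltv: float, cpi: float) -> float:
    """Expected margin on each paid install: lifetime value minus acquisition cost."""
    return ltv - cpi

# Figures from the "Light the Fires" tests above.
ltv = 0.75  # conservative end of the $0.75-1.00 LTV estimate

for cpi in (0.60, 1.25, 2.50, 4.00):
    margin = profit_per_install(ltv, cpi)
    print(f"CPI ${cpi:.2f}: margin per install ${margin:+.3f}")
```

Only the CTR-test figure of $0.60 leaves any margin at all; every CPI we actually measured at beta puts the game underwater on each install, before any development costs.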
I *think* the key learnings here are that:
- CTR tests are an unreliable predictor of CPIs. IMO they're only really useful for comparing creative treatments.
- CPI tests are also unreliable unless you have something that really works and gets you close to the magic target of sub-$0.50. Typical testing budgets and processes don't generate enough impressions to get a reliable read on anything materially above that.
- When you're working on a game (or any product for that matter) you basically have two key variables to work with: how much money you make from the customer and how much it costs you to reach them. In the current market it's much easier to influence the former than the latter, at least if you have the technical and design skills to do it.
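The statistical intuition behind the "sub-$0.50" point above can be sketched with a toy model (my own simplification; real ad-auction variance is considerably worse than this suggests). A fixed test budget buys fewer installs as the true CPI rises, and with a small install count the measured CPI gets noisy. Treating the install count as Poisson-distributed:

```python
import math

def cpi_estimate_interval(budget: float, true_cpi: float, z: float = 1.96):
    """Approximate 95% interval for a measured CPI, treating the install
    count bought by a fixed test budget as Poisson-distributed."""
    expected_installs = budget / true_cpi
    sd = math.sqrt(expected_installs)  # Poisson: variance equals the mean
    lo_installs = expected_installs - z * sd
    hi_installs = expected_installs + z * sd
    # Fewer installs means a higher measured CPI, so the bounds swap.
    return budget / hi_installs, budget / lo_installs

budget = 250.0  # an assumed, typically small, hypercasual test spend
for cpi in (0.50, 2.50):
    lo, hi = cpi_estimate_interval(budget, cpi)
    print(f"true CPI ${cpi:.2f}: measured roughly ${lo:.2f}-${hi:.2f}")
```

Even under this optimistic model, a true CPI of $2.50 on a small budget can read back anywhere across a dollar-wide range, while a sub-$0.50 CPI buys enough installs to pin the estimate down. Layer real-world auction and audience variance on top and the $1.25-$4 spread we saw stops looking surprising.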
- I underestimated the amount of contingency we needed to build into the process. Our original plan allowed for around 20% wastage of production time, a figure that I've used effectively for years in previous studios and on over a hundred different games.
But, in hindsight, most of the previous games I'd worked on involved significantly fewer unknowns. The Flash Games scene of the early 2000s made it easy to reach an audience, having the Red Bull brand on a game helped with featuring and overall credibility, and creating sequels to existing game IPs meant that we could focus on optimising the monetisation to make the game profitable.
I think that when you're trying to break new ground you need to allow for a lot more wastage (upwards of 50%?), and if I'd anticipated this we might have made different production decisions, managed expectations better and, perhaps, gone out to raise some more money after our initial SEIS raise.