A nearly complete and utter history and description of the McKinsey Solve test. Updated for 2025, with all three current mini-games (Ecosystem, Redrock, Seawolf).
Table of Contents
What is McKinsey Solve?
A pre-screening test used by McKinsey
The McKinsey Solve test is a pre-screening test (one given to job applicants before interviews) used by the consulting firm McKinsey & Company. The test is composed of 2-3 “mini-games” which are, on the surface, about ecosystem research and preservation, but deep down are all designed to simulate consulting work.
It was initially developed by a company called Imbellus, which has since been acquired by Roblox, although the test contents are likely highly influenced by McKinsey recruiting staff. The test is not used anywhere outside McKinsey.
The test has gone by many names since its inception in 2020, when it was called the “Digital Assessment”. In 2021 it was renamed the “Problem Solving Game / PSG”, to signify its status as successor to the previous paper-based “Problem Solving Test / PST”. Then finally, in 2022, it was renamed to “Solve”. Colloquially, it’s also called “the Imbellus game” after its first developer.
Solve is taken before the interviews, at home
The test invitations are sent out after resume application, but before the interviews.
Because McKinsey Solve is fully digital, it can be taken at home (as long as you have a working internet connection). Use this to your advantage and set yourself up with the best possible equipment (and snacks, because the test is almost hilariously long).
But here’s the catch - do not assume that you have passed resume screening once you receive the invitation. McKinsey explicitly says on their FAQ sheet that the resume is reviewed alongside Solve results. I know many cases where candidates had near-perfect Solve scores and still got rejected for resume-related reasons.
Nearly everyone gets the Solve test
McKinsey sends out its “invitations to Solve” emails to about 90% of consulting applicants. In some programs and/or events, the invitation rate seems to go up all the way to 100%. Compared to the 30% during the time of the paper-based PST, this is three times higher.
The difference can be explained by simple economics. With the paper-based PST, you had to go to a McKinsey office to take the test, so the cost was much higher (when I took the test in 2011 they had to spare an entire office room just for me). With automated, computerized testing, you can send out tests to everyone at virtually no extra cost.
This gives the recruiter more information on each candidate, and allows each candidate more chances to get an interview (say, to make up for a sub-optimal resume).
Before the test
Invitation comes after 5-20 days, with 3-to-7-day window
McKinsey sends out their Solve invitations in large batches, usually 5-20 days apart. So if you don’t see your invitation within a week or so after applying, don’t worry. It’s usually just McKinsey staff taking their time. On a side note, you can always contact your local recruiter for information.
Once the email arrives in your inbox, though, you typically have a 3-to-7-day window to complete the test. The deadline almost always falls on a Sunday (i.e., the midnight between Sunday and Monday).
Trust and follow the mandatory system check
If you click on the test link in the email, you will be prompted to complete a system check, to make sure that your computer is sufficient to run the test.
I think this should be obvious, but because I’ve seen cases where candidates fumbled their tests due to errors or lags: please trust the check and make sure you get green ticks on everything.
The thing is, the McKinsey Solve test uses a lot of 3D animation (Roblox-style, by the way), so if your internet connection is slow or the computer you use is, for lack of a better word, weak, you might have a lot of trouble running it.
The time limit is now 100 minutes, with 3 mini-games
Once you’ve passed the check, you will see a list of the so-called mini-games.
Pre-2023, the number of mini-games was nearly always two, namely “Ecosystem Placement” and “Invasive Species” (or, as people often call them, Ecosystem Building and Plant Defense). The former was very math-heavy, while the latter felt like Plants vs. Zombies. With those two mini-games, the total time limit was 71 minutes.
In 2023, the Invasive Species / Plant Defense game was replaced by the Redrock Study Task, which is definitively not a game because it’s chock full of math-and-chart exercises. The time limit remains virtually unchanged, at 70 minutes total.
Since 2025, however, the number of mini-games is now three, Ecosystem, Redrock, and a newcomer called “Seawolf” (which is often referred to as “the Ocean game”). The Seawolf game is also math-heavy, and in many ways, similar to Ecosystem. With this addition, the total time limit is 100 minutes.
Note that although the pre-2023 time limit allowed for flexible allocation, that flexibility was removed post-2023. Nowadays, the time limit for each game is separate, so even if you complete one game faster than required, the remaining time will not be carried over.
Also, if you search on Google, there are mentions of mini-games such as “Disaster”, “Disease”, “Migration” as well, but you don’t need to worry about them because the last time they were used was 2021, 2021 and 2022 respectively.
Mini-game overview: Ecosystem (2025)
Read main article: McKinsey Solve - Ecosystem deep-dive
In the 2025 version of Solve, the Ecosystem game is the first one you take. I rate the level of “gamification” for this mini-game to be 2 out of 5. It feels like a puzzle, but not excessively so.
You should read the deep-dive article for more details on every part of this game (description, challenges, how to solve, tips, etc.).
Overall description
In the Ecosystem mini-game, there are two main tasks. First, you must build an ecosystem of 8 species, such that every species survives. Then, you must place that ecosystem in a location that is suitable for every species in your ecosystem.
For the first objective, you are given a database of 39 species (plants, animals, fungi, etc.), each of which comes with its own calorie intake and output figures, and a set of “Eating Rules”. The species must be chosen so that when the Eating Rules are applied to them, their “Calories Needed” are satisfied, and their “Calories Provided” are not exhausted.
For the second objective, the species-information cards also come with requirements on things like temperature, elevation, wind speed, pH. You must then compare these requirements to the numbers on specific locations, using the so-called “monitors” (which display the conditions of a location).
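The comparison logic in this second task is simple range-checking. Here is a minimal sketch in Python; all species names, traits, and numbers below are invented for illustration, not taken from the real test:

```python
# Hypothetical sketch of the location check: a location is suitable only
# if its monitor readings fall inside every species' requirement range.
# All names and numbers are made up.

species_reqs = {
    "Fox":   {"temperature": (10, 30), "elevation": (0, 2000)},
    "Grass": {"temperature": (5, 25),  "elevation": (0, 1500)},
}
locations = {
    "A": {"temperature": 20, "elevation": 1000},
    "B": {"temperature": 28, "elevation": 500},   # too warm for Grass
}

def fits(reqs, readings):
    # Every requirement range must contain the location's reading.
    return all(lo <= readings[key] <= hi for key, (lo, hi) in reqs.items())

suitable = [loc for loc, readings in locations.items()
            if all(fits(reqs, readings) for reqs in species_reqs.values())]
print(suitable)  # ['A']
```

In the real game you do this comparison by eye against the monitors, but thinking of it as “every range must contain the reading, for every species” keeps you from missing a single out-of-range requirement.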
The time limit for this game is 35 minutes, although it can be solved in about half that time if you come prepared and know “the tricks”.
The game has 3 variants: Forest, Mountain, and Coral Reef, which are only different in visuals and naming conventions (for example, land-based species replaced by sea-borne animals, Elevation changed into Depth). The game logic stays the same - in fact, it’s practically unchanged ever since 2021.
First challenge: the Eating Rules
The first main challenge in the Ecosystem game is the Eating Rules. They are confusing and unintuitive, there is no visual aid to understand them, and it’s very easy to misunderstand them under the stress of the test. These are the rules, quoted verbatim:
“(1) The species with the highest 'Calories Provided' eats first. It eats its 'Food Source' with the highest 'Calories Provided'. In case of a tie, it eats equally from both species. (2) When a 'Food Source' is eaten, its 'Calories Provided' decrease permanently by an amount equal to the eating species' 'Calories Needed'. If the eating species needs more calories, it eats another 'Food Source' based on current 'Calories Provided'. (3) Then the species with the next highest current 'Calories Provided' eats. Species who end with their 'Calories Needed' fully met and more than zero 'Calories Provided' survive.”
Do you see what I mean?
That said, with a few practice runs, preferably with mock tests and visualized result explanations, these rules can be learned easily.
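If you learn better from pseudo-code than from quoted legalese, here is a minimal Python sketch of one simplified reading of those rules. The species data is invented, and the exact tie-handling and “permanent decrease” mechanics of the real test are approximated:

```python
# Simplified simulation of the Eating Rules with invented species data.
# Real-test edge cases (exact tie handling, the precise "decrease by
# Calories Needed" wording) are approximated here.

species = {
    # name: (Calories Provided, Calories Needed, Food Sources)
    "Grass":  (5000, 0,    []),
    "Fox":    (4000, 1000, ["Rabbit", "Mouse"]),
    "Rabbit": (3000, 1500, ["Grass"]),
    "Mouse":  (2000, 1000, ["Grass"]),
}

def survivors(species):
    provided = {n: p for n, (p, _, _) in species.items()}
    fed = {n: 0.0 for n in species}
    remaining = set(species)
    while remaining:
        # Rules 1/3: the species with the highest current Calories Provided eats next.
        name = max(sorted(remaining), key=lambda n: provided[n])
        remaining.remove(name)
        _, needed, sources = species[name]
        while fed[name] < needed:
            live = [s for s in sources if provided[s] > 0]
            if not live:
                break  # food exhausted; needs stay unmet
            # Eat from the source with the highest current Calories Provided;
            # on a tie, eat equally from the tied sources.
            best = max(provided[s] for s in live)
            tied = [s for s in live if provided[s] == best]
            bite = (needed - fed[name]) / len(tied)
            for s in tied:
                taken = min(bite, provided[s])
                provided[s] -= taken  # Rule 2: decrease permanently
                fed[name] += taken
    # Survive = needs fully met AND calories still left to provide.
    return {n: fed[n] >= species[n][1] and provided[n] > 0 for n in species}

print(survivors(species))  # in this toy food chain, all four species survive
```

Tracing a toy food chain like this by hand (Fox eats Rabbit, Rabbit and Mouse eat Grass) is exactly the kind of practice run that makes the verbatim rules click.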
Second challenge: the 39-species list
The second main challenge in the Ecosystem game is the list of 39 species. The interface is designed so that you can only view 2-3 species at a time, making it hard to grasp the “big picture”, and making it easy for you to forget species you have viewed.
The trick here is to not view each species individually, but to group them into 3 groups of 13 species each. Species of the same group will have the same location requirements, and they can only eat other species of the same group, so all your 8 species will come from one group only.
This way, you can test one species group at a time. Some sources advocate testing with Excel sheets, or even “solver sheets”. That approach works, but it’s suboptimal, simply because of the amount of time it takes to input all the data. It’s better, and faster, to use pen and paper.
Most valuable tip: use pen-and-paper
Excel sheets look fancy, but in the context of the Ecosystem game, they are clunky and of very limited use. I actually made myself a full-on solver sheet once, I offer a web-based solver from our partner on this website, and I have tried using their solvers too.
Yes, you can generate correct answers that way, but it would take you some 15 minutes just to input the data. Not to mention some so-called “solvers” don’t actually solve the exercise, but only tell you which group contains the answers - they stop short of the most important task, which is to actually select 8 species from a group of 13. That means you have to add some 5-7 minutes on top of the 15 minutes spent inputting the data.
On the other hand, if you use just pen-and-paper, it only takes you somewhere between 10-15 minutes. Our customers have been doing it this way ever since 2021.
There’s another advantage to pen-and-paper: it’s legit. Solvers, on the other hand, can be construed as cheating. McKinsey reserves the right to ask candidates to explain their logic, so if all you do is put numbers into boxes and let the computer spew out an answer, it will be hard to fool them.
Mini-game overview: Redrock (2025)
Read main article: McKinsey Solve - Redrock deep-dive
In the 2025 version of Solve, the Redrock “game” is the second mini-game that you take. I have to put air quotes here because… it is really not a game at all. It’s just a normal, math-and-charts test with some fancy interactive features. 1 out of 5 on my gamification scale.
You should read the deep-dive article for more details on every part of this game (description, challenges, how to solve, tips, etc.).
Overall description
In the Redrock… “game” (okay, I’ll stop with the air quotes) you are supposed to work through a research study on the ecosystems of the titular Redrock Island, completing objectives and solving math-and-charts exercises along the way.
Redrock is divided into two main parts: Study and Cases. The former is further sub-divided into three phases: Investigation, Analysis, and Report.
(1) During Investigation, they give you a 400-or-so-word brief (with charts and tables) on some problem on Redrock Island. There will be an objective, and you will follow that objective to collect relevant information into a Research Journal. (2) Then comes the Analysis phase. You will use the collected data to answer two to four math questions. (3) After Analysis comes the Report. Here you must complete a few paragraphs of text using the answers from Analysis, choose a chart type to visualize them, and finally, build the chart.
All the tasks and questions in Study are interrelated, so if you get the early ones wrong, it’s likely to cause other mistakes later.
In Cases, you will solve 6 separate math-and-chart questions. They tend to be easier than the ones in Study, and they are not related to each other, so if you can get the Study part right you don’t need to worry much about Cases.
The time limit is 35 minutes, in which you will have to do about 13 tasks (1 from Investigation, 3 from Analysis, 3 from Report, and 6 from Cases), which makes for about 2.7 minutes per task. This limit is just about enough. Most trained candidates will complete the test in 60-80% of the given time, while untrained ones tend to use up 80-100% of the limit.
First challenge: the math
Understandably, the first challenge in Redrock is math. Luckily, the test gives you an on-screen calculator (a quite convenient one at that because it logs all your answers, instead of just the most recent ANS like most common handheld calculators).
To do well in Redrock, math-wise you will need to be comfortable with dealing with multiple consecutive percentage / ratio / change calculations, and be able to tell apart similar-sounding but actually-different math terms (such as “percent” and “percentage point”). You don’t need accounting and/or business knowledge for this test, because the calculations are all made in non-business contexts.
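To make the “percent vs. percentage point” trap concrete, here is the distinction with made-up numbers:

```python
# "Percent" vs. "percentage point", a classic Redrock trap.
# The shares below are invented for illustration.
old_share = 20.0  # a market share of 20%
new_share = 25.0  # a market share of 25%

pp_change = new_share - old_share                       # change in percentage points
pct_change = (new_share - old_share) / old_share * 100  # change in percent

print(pp_change)   # 5.0  -> "up 5 percentage points"
print(pct_change)  # 25.0 -> "up 25 percent"
```

Same underlying movement, two very different-sounding answers - Redrock questions are written so that picking the wrong one of these two is the single most tempting mistake.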
The only way to prepare for Redrock is to practice. On average, about 20-25 practice sessions will be enough to familiarize you with most possible math types on Redrock.
Second challenge: the charts
The next challenge in Redrock, as you can already guess by now, is the charts. Nearly every question in Redrock involves reading a chart and/or table, and some 20% of the questions will ask you to choose a suitable chart.
Luckily, most of the charts in Redrock belong to simple types (line, bar, pie, stacked, scatter plot, etc.). On the other hand, you do need to grasp these simple types well. You need to know what they are used for, what they emphasize, what kind of data they need.
As with math, there is no replacement for practice, and the number of mock tests recommended is the same (20-25 mock tests).
Most valuable tip: get a numpad
Excel sheets are useless in Redrock because the formula in each question is different. Pen and paper works BUT you can easily get the calculation wrong. Handheld calculators don’t store many answers, so they are much less convenient than the on-screen one the test gives you.
As such, you should always, and I repeat, always use the on-screen calculator.
There is one tip to enhance your performance using that calculator, though: use a keyboard with a numpad, instead of clicking on the virtual keyboard as the tutorial would tell you to. That calculator receives input from the keyboard, and using your physical keyboard would shave off about 20% time consumed per Redrock session, as I found out with our Simulation.
That said, do keep your pen and paper ready. You might want to use it to help figure out the formulas, and it’s definitely helpful in other games.
Mini-game overview: Seawolf (2025)
Read main article: McKinsey Solve - Seawolf deep-dive
In the 2025 version of Solve, you’ll find the Seawolf game in the third and last position. On the surface, it is quite similar to Ecosystem (so much so that we expected it to replace Ecosystem), but fundamentally the games are different. Like Ecosystem, I’d rate it 2 out of 5 on my gamification scale.
You should read the deep-dive article for more details on every part of this game (description, challenges, how to solve, tips, etc.).
Overall description
In the Seawolf game, your job is to clean up three ocean sites, each polluted by a different kind of waste. You do so by building “treatments”, each of which consists of three microbes. The microbes are chosen from an imaginary database of potential candidates.
The game will give you information on the characteristics of each microbe, and the corresponding requirements of the sites. There are quantitative characteristics (such as Energy, Adhesion, etc.) as well as qualitative ones (such as Heat-resistant, Aerobic, etc.).
The choosing process consists of four steps:
Step 1 is to set a target microbe profile. You do so by choosing two (no more, no less) characteristics, as if setting filters on a database. This step does not affect Step 2.
Step 2 is to categorize microbes between two possible sites (the one you’re cleaning, and the next one), or reject microbes entirely. This step does not affect Step 3 of the same site, but it does affect Step 2 of the next site, and it can spawn a “Step 0” in the next site where you review microbes from Step 2 of the previous site.
Step 3 is to complete a “prospect pool” of ten microbes, in which six are already given. You choose the remaining four by picking the best-fit microbe from four groups of three microbes each. The prospect pool built here will be used for Step 4.
Step 4 is to select three microbes from the prospect pool carried over from Step 3 to form a “treatment” for the current ocean site, with maximum efficiency in mind (which is decided by a set of rules given to you at the beginning of the game, mostly concerning “average characteristics”).
Three sites with four steps each make 12 steps. If you include the two “Step 0”s at Sites 2 and 3, that makes for a total of 14 steps. The time limit for this game is 30 minutes, which works out to 2.1-2.5 minutes per step on average. This poses a real danger: even trained candidates often use about 80-90% of that limit, and it’s very easy for untrained candidates to run out of time around mid-Site-3.
First challenge: the lack of clear rules (Step 1-2-3)
The first main challenge in Seawolf is the relative vagueness of scoring criteria in Step 1, 2 and 3. The “efficiency rules” given at the start are only applicable to Step 4. There are no rules dictating what is the “best profile”, “best category”, or “best prospect” - you have to create the rules yourself, using the efficiency rules as guidelines.
The trick here is to stay consistent. Real consultants are consistent and methodical in how they approach problems; failing to do so in the Seawolf test will likely result in low scores.
For example, in our guidebook, we use a simple underlying principle for all steps: rule out only microbes that are absolutely unusable; if there is even a small chance the microbe can be used, keep it as a potential.
Second challenge: the efficiency rules (Step 3-4)
The second challenge that emerges in Seawolf is the very tight “efficiency rules” in Step 4. The prospects in Step 3 and the rules in Step 4, when combined, give very little room for mistakes.
To put things in perspective, out of the 81 possible prospect pools that you can create in each Step 3, often only 2-4 would yield 100%-efficiency treatments. And of the 120 possible treatments that each such working prospect pool can yield, again, only 2-4 can achieve 100% efficiency.
Put together, that means there are only 4-16 full-efficiency treatments out of 9720 possible treatments for each site. That’s 0.04% to 0.16%.
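The counts above follow directly from the step structure described earlier, and can be sanity-checked in a few lines:

```python
# Sanity check of the Seawolf treatment counts quoted above.
from math import comb

pools = 3 ** 4                      # Step 3: one pick from each of 4 groups of 3
treatments_per_pool = comb(10, 3)   # Step 4: choose 3 of the 10 prospects
total = pools * treatments_per_pool

print(pools, treatments_per_pool, total)       # 81 120 9720
print(f"{4 / total:.2%} to {16 / total:.2%}")  # 0.04% to 0.16%
```

In other words, blind guessing in Step 4 is hopeless; you need a systematic way to narrow the field down first.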
And that’s not all of it. McKinsey explicitly says that for some sites, there can be no 100% treatment. We estimate from candidate reports that the chance of such sites seems to be 1 in 3.
In our guidebook, we do offer a “trick” to help narrow down to the correct answer quickly, by identifying and ruling out the aforementioned “completely unusable” microbes, which takes up about 70% of the pool. Once this is done, you can basically “eyeball” the correct answer.
Most valuable tip: use an excel sheet
You can make a simple spreadsheet for use in Step 4 to help calculate the average characteristics of your treatment. It’s much faster, and more reliable than pen-and-paper or hand-held calculators.
You might ask why I suggest a spreadsheet now, while claiming that “solver sheets are cheating” just a while ago? Well, I am not suggesting a “solver sheet”. It’s just a simple sheet to perform exactly three “average” calculations. It’s still squarely in the realm of legit-ness, because you have to use this spreadsheet in conjunction with the “narrow-down” technique I mentioned earlier (which I think is the whole point of this test).
A full-on solver sheet (say, you put in the 10 prospects and it spews out correct treatments) would likely be considered cheating, so use it at your own discretion.
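For reference, the “simple sheet” I have in mind amounts to nothing more than the following (microbe names, characteristics, and values are all hypothetical):

```python
# Spreadsheet-equivalent averaging for Step 4, with invented values.
# Each row plays the role of one microbe's quantitative characteristics.
microbes = {
    "M1": {"Energy": 8, "Adhesion": 6},
    "M2": {"Energy": 5, "Adhesion": 9},
    "M3": {"Energy": 8, "Adhesion": 6},
}

def treatment_averages(picks):
    # Average each quantitative characteristic across the chosen microbes.
    traits = microbes[picks[0]]
    return {t: sum(microbes[m][t] for m in picks) / len(picks) for t in traits}

print(treatment_averages(["M1", "M2", "M3"]))  # {'Energy': 7.0, 'Adhesion': 7.0}
```

In an actual spreadsheet this is just three AVERAGE formulas; the point is that the tool only computes averages, while the narrowing-down judgment stays yours.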
After the test
You will be graded in percentile scores
McKinsey explicitly states on their FAQ that they grade and rank candidates using percentile scores, then group them into four quartiles (the fourth quartile being the top-scoring 25% of candidates). That percentile is affected by much more than the number of surviving species, correct answers, or treatment efficiency, even though these factors contribute quite significantly.
There is no way to know which quartile you belong to, except explicitly asking your McKinsey recruiter. I’ve heard from candidates who reached out, and they almost always received an answer. So if you wonder about your percentile, don’t hesitate to reach out to McKinsey.
Given historical pass rates at McKinsey recruitment programs, we suggest you aim to get into the top quartile (top 25%). To do that, in most mini-games you need to get all-correct answers (or at least, the best possible answers).
The results are screened alongside resumes
Getting high Solve scores does not automatically grant you an interview, and likewise, having low scores does not mean you’re immediately disqualified.
This is because instead of reviewing Solve scores alone, McKinsey recruiters would review both your Solve scores and your resume, then make a holistic decision.
This is actually how most companies use automated testing these days. When I was working on a gamified test project a few years back, one of the most demanded features was a combined-view dashboard allowing screeners to view scores and resumes side-by-side.
The survey does not affect your results
After the last mini-game ends, there will be a survey from McKinsey, asking you about things like the general experience of “playing” the games; whether you used Excel sheets, calculators, pen-and-paper, or nothing at all; whether you have experience playing video games beforehand.
They explicitly disclose that the results of the survey will not be used in screening decisions, so if I were taking the test I would not hesitate to give them truthful answers.
After all, McKinsey has shown (over the past few years) that it will improve the UX/UI side of the tests quite significantly, and change tests in ways that make them less biased towards video-game players (which I suspect was an issue with Plant Defense).
How hard is the McKinsey Solve?
Our estimated pass rate is around 30-33%
We actually made a video on this topic last year, in which our educated guess for the Solve pass rate is at around 30-33% (i.e. one interview offer for every three Solve invitations).
This would triangulate well with the observation that nearly everyone (let’s say, 90%) gets Solve, and the observed interview pass rate of one-in-eight, a number that is agreed upon by most coaches. This means McKinsey would invite only about 30% of all candidates to interview, and 30% / 90% = 33%.
These numbers are also supported by a survey we did in 2022 (albeit the games were different back then), and the patterns seem to hold, as observed by our customer service team. Candidates without any preparation have only about a 30% pass rate.
This pass rate would suggest that only the top-scoring 25% in Solve are in the “safe zone”, the second-highest quartile would be at significant risk, while there is relatively little chance for the lower two quartiles of candidates.
“Product scores” matter more than “process scores”
The other thing we observed both during our surveys and customer service queries, was a vastly increased tendency for candidates with high “product scores” (number of species surviving, correct answers, treatment efficiency, etc.) to pass the test.
Meanwhile, “process scores”, a concept prep sites like ours use to refer to how McKinsey “tracks your movements and behaviors”, seem to have much less influence.
Specifically, if you get all-correct answers your passing chance would double, to about 68% according to our survey. And that would lead to the next point: preparation matters.
Pass rate improves significantly with preparation
McKinsey spends a lot of effort trying to convince people “you don’t need to learn anything before Solve and it won’t help”, but the truth is out: you need to prepare beforehand.
Just from logical deduction: if you know what the test is about, and you know how to “behave”, you should be able to get more correct answers as well as “behave” in a way that’s more consulting-friendly. And as we surveyed, candidates with the most preparation and the highest scores (self-proclaimed, of course), can push their pass rate all the way from 30% to 90%.
And let’s be real, how the hell can you know which chart to choose to visualize a table, differentiate “percent” from “percentage point”, perform probability calculations, and tell “mean”, “mode” and “median” from each other (all part of the Redrock test) if you don’t prepare beforehand?
That leads to the next question: how do you prepare?
How to prepare for McKinsey Solve?
Spend 2-3 weeks, do 15-25 mock tests
For best results, you should start about 15-20 days in advance. That way you would have 5-7 days for each mini-game (there are three of them now). My recommendation is to do 15-25 mock sessions per mini-game, that way you can get yourself exposed to all kinds of scenarios in the test and get really fluent with the interface.
If you wait for the invitation email to come, then by the time that happens you would only have 7 days left, which means only 2-3 days per mini-game, which in turn leaves very little room for mistakes and unexpected issues.
Now, I know why people wait for the invitation to come (which will almost surely happen anyway, as we’ve already seen): they feel that whatever money they spend would be wasted if they don’t receive the invitation, and I can surely sympathize with that.
So, here’s a way to get yourself started before the invitation, without spending any money: focus on the fundamentals, and make use of all the free online content. We have a lot of free deep-dives on our site. Read them all, pay attention to the illustrations and examples, and you can get yourself acquainted with Solve without spending any money at all.
This is true for all Solve mini-games, but it’s especially true for Redrock. And, because Redrock is mostly concerned with math and charts, whatever you learn there will also be useful in the case interviews, as well as other tests from similar companies.
Then, once you have received the invitation, you can buy the mock tests. But, again, that’s not something I would recommend; if you can afford mock tests (as in, you don’t have to go above your means to buy them), you should get them about 15-20 days in advance.
Focus on math and charts first
Math and charts should be the first items on your preparation list because 1. They are the basis for Redrock, and 2. Math is present in both Ecosystem and Seawolf.
As for Redrock: the math-and-charts exercises there are mostly related to data-analysis tasks, such as reading and creating charts, making comparison-related calculations (change over time, percentage, ratio, etc.), statistics (mean, mode, median), and probabilities.
As for Ecosystem and Seawolf, what you want is good mental math (same as case interviews). It’s not that you have to do the math there mentally (you should not - they allow calculators), but if you are good at mental math you can “eyeball” the right answers much quicker.
Then comes problem-solving
Next on the list, after learning math-and-charts, is problem-solving. Specifically, consulting problem-solving, in all its issue-trees-and-hypotheses glory (in plain English: you have to raise hypotheses and test them repeatedly).
You need this specifically for Ecosystem, because the theory behind that game is exactly what we consultants use in the field. You will have to sort species into groups, and test each group to see if it is feasible to build a food chain from it.
The same theory applies in Seawolf. There you will have to set criteria yourself as to what makes a “usable” microbe, test microbes against those criteria, and hopefully, from already-tested microbes, build a treatment for each site.
No need for business knowledge and video games
Business-specific knowledge (such as business strategy, accounting, finance, etc.) is not required in Solve. McKinsey explicitly says as much, and I think that claim is justified. Every single test takes place in pseudo-biological settings (i.e. involving animals, plants, and ecosystems).
Likewise, contrary to my own claims a few years earlier, video games don’t help. Especially with Invasive Species / Plant Defense (the most game-y of all the Solve games) out of the picture. It’s for one simple reason: there is basically no game on the market resembling the current Solve games (I mean, Redrock is straight up a math test).
Test-taking tips for McKinsey Solve
Stay comfy and well-rested
I mean it, seriously. The Solve test is currently the longest among all MBB tests, at 100 minutes not counting the animations, the tutorials, the surveys and disclaimers. McKinsey says candidates should reserve 110 minutes for the test, but I think even that amount is not enough, since I’ve heard from candidates who took more than two hours to complete it.
Heck, my back hurts just from taking a full mock-test session (which is quite a bit shorter than the real test, given the lack of animated tutorials and all the other fluff that comes with the test).
So, before taking the test, make sure you’re well-rested. Take breaks in-between the mini-games. Make sure you have your adequate supply of fluids (and snacks, if you like).
And, if possible, enjoy the test. Personally I find the mini-games quite enjoyable. Keeping yourself calm and collected is crucial, especially during a very time-restricted game like Seawolf.
Pen-and-paper, double-screen, good internet/computer
Even if you don’t end up using them, having some pen and paper ready on the table can be quite useful in Solve (and similar tests). You can use it to make quick calculations in Seawolf, as well as draft and test formulas in Redrock. In Ecosystem, specifically, pen-and-paper is faster than using a clunky Excel spreadsheet.
You should also have a secondary computer screen. You take the test on your primary screen, and make notes / calculations on the secondary one. Having two screens would make it easy for you to double-check if you have entered the numbers correctly, and do away with tab-switching.
Make sure your internet and computer are good enough for the test. When in doubt, go to an internet cafe, or borrow the best-specced computer you can get your hands on. Don’t risk your chances by using an old, bug-ridden laptop - I know at least 4-5 candidates who failed the test for that reason.
Make use of untimed tutorials
All the tutorials in Solve are untimed, so make use of them effectively.
The tutorials themselves only cover about half of all the rules (they cover all the main rules, but tend to leave out the minor ones that can differentiate an 80% performance from a 100% one).
So, don’t just blindly follow the tutorial. Explore all possible corners of the games. Try to click on all the buttons on each screen before you move on, make sure you know what each of them do. Read all the instructions. You get my point.
Contact help immediately if anything goes wrong
My last tip for any candidate, however obvious it may sound, is to reach out for help immediately if you run into a bug (say, a blank screen, a game crash, etc.).
If possible, screenshot and/or record the error. Yes, McKinsey technically tells you to not screenshot and/or record the test, but this should be an exception, because such things are proof that you did run into an error, and that would help them fix the test.
Every candidate I talked to who contacted the staff about a problem during the test said they received help immediately. If necessary, the staff will allow you to retake the test (if you don’t contact them, no retake is allowed).
Past / other Solve mini-games
Plant Defense (last seen in 2023)
Plant Defense was, pre-2023, one of the two staple Solve games (the other being Ecosystem). It is by far the most gamified of all the Solve games; I rate it 4/5 on my gamification scale.
In Plant Defense, the objective is to protect a “Native Plant” from invading animals like groundhogs or foxes, using your own defending predators, like eagles, wolves, and snakes, as well as terrain transformations, like cliffs, forests, and rocks.
The game is played on a square-grid map, with the native plant in the center, and invaders moving in from the edges. Each candidate must play three such maps for each Solve test, with their sizes being 10x10, 14x10, and 12x12 respectively. Strangely, the game gets easier as it progresses (because “the player”, so to speak, gets more time to prepare with a bigger map), and in Map 3, the game gives you an extra-powerful defender.
It’s a turn-based game: at the start of a turn sequence, the candidate arranges their defenses and terrains; then, each turn, every invader moves one square forward toward the plant. The candidate can adjust their defenses each turn. The plant must survive at least 15 turns.
Regarding defenders: each has a different range and attack power, with the longer-range ones generally being weaker, while the stronger ones are short-ranged (except the extra-strong one at Map 3, which is both long-range and powerful). Each must be placed on a different type of terrain.
Regarding terrain transformations: cliffs block movements, forests slow down foxes, and rocks slow down groundhogs. Each of them enables a different type of defender, depending on the map.
Regarding invaders: groundhogs and foxes differ only in which kind of terrain slows them down. They also get gradually stronger until they overwhelm the defenses: usually even the best defenses can only survive 20-25 turns (Map 1), 25-30 turns (Map 2), and 35-40 turns (Map 3).
The strategy here is relatively simple (and intuitive if you have played tower-defense video games like Plants vs. Zombies or Kingdom Rush): build your defenses close to the plant so it’s protected from all directions. Shorter-range but powerful defenders stay in the innermost layer, next to the plant, while longer-ranged but weaker defenders form the outer layer. The extra-strong, long-range defenders are generally placed on the outer layer, too.
All of this being said, the game had been phased out by 2023, so there is not much reason to worry about it here. It’s freely included as part of our simulation.
Migration (last seen in 2022)
The Migration Management mini-game is a turn-based puzzle game in which the candidate must direct the migration of 50 animals. The group carries 4-5 types of resources (water, food, etc.), each in an amount of 10-30. Every turn, 5 animals die and 5 units of each resource are consumed.
Each stage takes 3-5 turns from start to finish, and candidates must complete 15 stages in 37 minutes. In each stage, the candidate chooses among several routes for the herd. Along the routes are collection points where candidates can pick up additional animals or resources (1-3 of each type), and can choose to multiply some of the collected resources (1x, 3x, or 6x); the game tells the candidate in advance which resources/animals each point will yield, but not the amount.
The objective is to help the animals arrive at the destination with minimal animal losses, and with specific amounts of resources.
With all of these limited insights in mind, here’s what I recommend for the strategy:
(1) Nearly every necessary detail is given in advance, so use scratch paper to draw a table, with the columns being the resources/animals, and the rows being the routes. Quickly calculate the possible ending amount of each resource, assuming you get 2 at every collection point (good mental math will come in handy).
(2) Choose the route with the highest number of animals, and “just enough” resources to meet requirements.
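The route table described above boils down to simple per-route arithmetic: starting amount, plus pickups, minus per-turn consumption. Here is a minimal sketch of that calculation; the route layouts, turn counts, and pickup values are invented examples, not actual in-game data.

```python
# Hypothetical illustration of the Migration route math.
# Starting amounts, turns, and pickups are made-up examples.

def ending_amounts(start, turns, pickups, per_turn_cost=5):
    """Project the ending amount of each resource for one route.

    start    -- dict of starting amounts per resource
    turns    -- number of turns on this route
    pickups  -- dict of total assumed pickups per resource
                (e.g. assume 2 per collection point, as suggested above)
    """
    return {
        res: start[res] + pickups.get(res, 0) - per_turn_cost * turns
        for res in start
    }

start = {"water": 25, "food": 20, "animals": 50}
# Route A: 4 turns, two collection points assumed to yield 2 water and 2 food each
route_a = ending_amounts(start, turns=4,
                         pickups={"water": 4, "food": 4})
print(route_a)  # {'water': 9, 'food': 4, 'animals': 30}
```

Running this for each available route fills in the table, after which step (2) is just comparing rows: pick the route that keeps the most animals while still clearing the resource requirements.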
Disaster (last seen in 2021)
In the Disaster Management mini-game of the Solve Game, the candidate is required to identify the type of natural disaster that has struck an ecosystem, using limited given information, and then relocate that ecosystem to ensure/maximize its survivability.
With the two main objectives in mind, here’s how to deal with them:
(1) Identify the disaster: this is a problem-diagnosis situation – the most effective approach would be to draw an issue tree with each in-game disaster as a branch, skim through data in a bottom-up manner to form a hypothesis, then test that hypothesis by mining all possible data in game (such as wind speed, temperature, etc.)
(2) Relocate the ecosystem: this is a more complicated version of the location-selection step in the Ecosystem-Building mini-game, with the caveat that you will first have to rule out the locations with specs similar to the ongoing disaster. The rest can be done using a spreadsheet listing the terrain requirements of the species.
Like the Ecosystem-Building mini-game, this one is solved only once, unlike Plant Defense and the Disease Management mini-game below, which use multiple maps.
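The relocation step above is essentially a two-pass filter: first exclude locations whose specs resemble the ongoing disaster, then keep only those meeting the species’ terrain requirements. A minimal sketch, with entirely invented location specs and thresholds:

```python
# Hypothetical sketch of the two-step location filter.
# Location names, specs, and thresholds are invented examples.

locations = {
    "valley":  {"temperature": 30, "wind_speed": 60, "elevation": 100},
    "plateau": {"temperature": 18, "wind_speed": 10, "elevation": 800},
    "coast":   {"temperature": 22, "wind_speed": 15, "elevation": 5},
}

# Step 1: rule out locations resembling the ongoing disaster
# (e.g. a storm implies very high wind speeds).
safe = {name: spec for name, spec in locations.items()
        if spec["wind_speed"] < 40}

# Step 2: keep only locations meeting the species' terrain requirements
# (here, an assumed temperature tolerance of 15-25).
def meets_requirements(spec):
    return 15 <= spec["temperature"] <= 25

candidates = [name for name, spec in safe.items() if meets_requirements(spec)]
print(candidates)  # ['plateau', 'coast']
```

In the actual game you would do this on paper or in a spreadsheet, but the logic is the same: eliminate first, then match requirements.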
Disease (last seen in 2021)
In the Disease Management mini-game of the Solve Game, the candidate is required to identify the infection patterns of a disease within an ecosystem and predict the next individual to be infected.
The game gives you 3-5 factors per individual (the number increases as the game progresses), such as name, age, and weight, plus 3 snapshots of the disease spread (Time 1, Time 2, Time 3) to help you solve the problem.
There is really only one main objective here: identify the rules of infection (the prediction is straightforward once you know the rules). This is another problem-diagnosis situation: the issue tree for this mini-game should have the specific factors as branches. Skim through the 3 snapshots to test each branch; once you’re sure which factor underlies infection and how it correlates, simply choose the predicted individual.
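The branch-testing described above can be sketched as a loop: for each candidate factor, check whether every newly infected individual between consecutive snapshots shares that factor’s value with someone already infected. The individuals, factors, and snapshots below are invented for illustration; they are not actual game data.

```python
# Hypothetical sketch of testing infection-rule hypotheses against snapshots.
# All data here is made up for illustration.

individuals = {
    "A": {"age": 2, "weight": 10},
    "B": {"age": 5, "weight": 12},
    "C": {"age": 2, "weight": 30},
}

# Sets of infected individuals at Time 1 and Time 2
snapshots = [{"A"}, {"A", "C"}]

def factor_explains(factor, individuals, snapshots):
    """Check whether every newly infected individual shares the given
    factor's value with someone already infected (one simple
    contagion hypothesis among many you could test)."""
    for prev, curr in zip(snapshots, snapshots[1:]):
        infected_values = {individuals[i][factor] for i in prev}
        for newcomer in curr - prev:
            if individuals[newcomer][factor] not in infected_values:
                return False
    return True

for factor in ("age", "weight"):
    print(factor, factor_explains(factor, individuals, snapshots))
# age True
# weight False
```

Here "age" survives the test (C shares A’s age) while "weight" is ruled out, so the age branch of the issue tree would be the one to pursue.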
Past / other McKinsey tests
McKinsey Problem Solving Test (PST)
The Problem Solving Test is an old test that was in standard use by McKinsey from around 2010 to 2020. You can think of it as a hard, business-based version of the GMAT test. By 2020, it had been replaced by the Solve test in most cases.
However, in some programs and offices, the PST is still being used, even to this day. Usually, the recruiter would tell you if you have to take the PST instead of Solve.
From what we observe through customer-service queries, the offices and programs that still use the PST are generally in Europe, targeting highly experienced / advanced-degree candidates (who, most of the time, still take Solve anyway).
The chance of any given candidate having to take the PST is very small, at less than 3-5%.
McKinsey SHL tests
Some McKinsey programs and offices (which, like the PST-using ones, are mainly in Europe) opt for the aptitude tests from SHL and similar companies: numerical reasoning, verbal reasoning, logical reasoning, and so on.
At McKinsey, these tests are even rarer than the PST.
And when McKinsey does use SHL-made tests, they usually come with just two question types: numerical reasoning and verbal reasoning.