Data Interpretation – Strategy

When attempting a test like the CAT, a fair number of people follow the (mindless) strategy of “attempt question 1, move to question 2 only after that is done, then question 3…” and so on, thereby often failing even to see all the questions. One should remember that one need not win every single battle to win the war. Given that most people (even most of those who will eventually make it!) will not attempt all 100 questions, your “shot selection” becomes crucial. You need to recognise – as Rahul Dravid so elegantly did – which balls to hit and which ones are best left alone, and the faster you can judge this the better.

This becomes all the more crucial in the “set-based” questions (DI, LR and RC), as it generally takes significantly longer to judge these. A typical singleton question (such as a parajumble or remainder or PnC or vocab-based one) can be weighed up in 20-30 seconds and, if it looks unpromising, judiciously left without feeling too bad about it. But a set can take up to 3-4 minutes just to understand in detail; and if, after spending so much time, you realise that you have not been able to decipher it, it can induce panic.

In the past 5 years, there were 30 questions in each section, of which around 9-10 questions (or 3 sets) each were DI and LR. This year, it is likely that there will be 4-5 sets, and as many as 15-18 questions, of each. This means that, as in days of yore, your choice of which sets to attempt first could become crucial. In this post, then, I will try to look at possible mechanisms and criteria to help in the decision-making process (as always, this is indicative and you will have to adapt it to your own areas of strength and weakness) so that you can pick and choose which sets to attack first without actually getting into the nitty-gritty details.

Here are some questions you could ask yourself:

1) Is the given data in a familiar form or in an unfamiliar or haphazard state? And is it precise or ambiguous?

If the data provided is in a standard table / line graph / pie chart and is complete and precise, then even if there is quite a lot of data it should still be very manageable. However, if

  • the data is provided in an unfamiliar form – a histogram, a scatter-chart, a cumulative table, or some even more esoteric format (which means you would require an inordinate amount of time just to understand how to interpret it properly)
  • there is some data missing (which means solving the question might entail a lot of pre-work)
  • the data is not precisely readable, such as a bar- or line-graph where the values can only be estimated to within 5-10% (which means that even with your best efforts accuracy cannot be guaranteed)
  • the data is presented as a caselet (which means you might have to spend precious time at the start to bring it into a manageable form)

then it might be worth leaving the set for later.

2) Is there additional data provided in the questions?

I’ve observed that a lot of people base their judgement of the difficulty level of a set solely on the information given before the questions. In my opinion, though, the questions themselves are almost always worth a dekko; if they are straightforward queries like “Who is sitting next to Mr Sivaramasubramanian?” or “In which year is the profit of Megahard Corporation the maximum?” or “How many people failed in maths?”, and the options are not of the “Cannot be determined” flavour, then it is a pretty fair indicator that the set is going to give you definite answers in one go.

But if you see questions such as “If Germany defeats Brazil 7 – 0 in the final round*, then who will end up in 3rd place overall?” or “If in 2014, Froogle reports a 10% growth in sales and an 8% growth in costs over 2013, then what will be their percentage profit in 2014?” or even more flagrantly evil question-types like “Which of the following cannot be true?” giving three statements and options like “I and II only”, “All of I, II and III” and so on (which effectively means you have to solve 3-4 questions for the price of one), then I would recommend you skip lightly on to the next set and return later.

* This example is a work of fiction and any resemblance to any real-life match is purely coincidental. 
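To see why such conditional questions eat time, here is a quick worked illustration of the Froogle example above, with numbers invented purely for demonstration. Suppose the data shows 2013 sales of 500 and costs of 400. A 10% growth in sales and an 8% growth in costs gives 2014 sales of 550 and costs of 432, hence a profit of 118, which is about 21.5% of sales (or about 27.3% of costs, depending on which base the set defines profit percentage on). Each such question forces a fresh round of computation over and above merely reading the data, which is exactly why I suggest postponing these sets.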

3) Does the set lean more towards the calculative or the reasoning-based?

This can be a powerful decision point, depending on your skill-sets. A set which involves intensive calculation is unlikely to go out of its way to confound you with traps, while one which involves simple numbers will often require careful reading and weighing of alternatives. (Personally, calculation is something of a strength for me and hence I choose to do the calculative sets relatively early on, but as I said earlier, this decision has to be based upon your knowledge of your own strengths and weaknesses.)

4) How far apart are the answer choices?

This is a useful question to ask yourself when deciding between two calculative sets. If the answers are far apart – on a percentage rather than absolute basis, mind you! – then you can approximate fairly wildly and still arrive at the correct answer confidently (for example, (3, 7, 12, 18) or (35.7%, 52.8%, 88%, 122%)). However, if the answers are close together, then your calculations must needs be carried out with nit-picking accuracy, and this will affect your speed as well (for example, (62375, 62525, 62845, 63635) or (35.7%, 36.8%, 38.7%, 39.9%)).
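A quick worked illustration, with numbers invented purely for demonstration: suppose a question boils down to finding 2847 as a percentage of 7963. With options like 12%, 36%, 58% and 122%, you can cheerfully round to 2850/8000, get roughly 35.6%, and mark 36% within seconds. With options like 35.2%, 35.7%, 36.1% and 36.4%, that same rounding leaves you unable to choose confidently between the nearer options, and only a careful division (the exact value is close to 35.8%) settles the matter.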

5) How many questions are there in each set?

If the sets are similar on the above parameters, then this could be a tie-breaker – a set with 4 questions will give you more value for money (i.e. more marks as a result of the time spent on it) than a similar set with 3 questions.
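A rough illustration of the arithmetic (assuming, purely for the sake of argument, 3 marks per question and about 8 minutes per set): the 4-question set yields 12 marks for those 8 minutes, i.e. 1.5 marks a minute, while the 3-question set yields 9 marks, i.e. roughly 1.1 marks a minute.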

Try to apply the above criteria to a set in real time, under test conditions. An excellent case-study would be the DI-LR section from CAT 2008, which had 7 sets covering a wide range of types and difficulty levels. If possible, in a future post I shall try to do a video analysis of the same and roughly demonstrate how one could have approached the section during the test.

regards

J