CAT 2016: A good but flawed test

In my previous post, I mentioned that the shoddy infrastructure and unprofessional management of the test were extremely disappointing. The test itself, though, was mostly refreshing; it felt like the CAT, one might say (which has not always been the case with the online papers). With two difficult sections, even the well-prepared knew they had been in a fight this time.

The overall structure was close enough to the mock they had uploaded. The first section stayed easy, as has been the case for the past couple of years. Again, there were 24 RC questions and 10 para-based VA questions, a structure which, in my opinion, leaves a lot to be desired. Again, all 10 VA questions were TITA, but only in the parajumbles did that actually matter, as the “odd sentence out” and “summary” types were really just MCQs where you had to type instead of clicking (with the added bonus of no negative marks!). Again, as has been the case for the past two years, the passages and paragraphs were all very readable, covering a wide range of interesting stuff. In other words, no surprises for the well-prepared, and so a good attempt in this section should have been well over 20 (and of course any unsolved TITAs should have been attempted on principle, as there was nothing to lose). I felt VARC-2016 to be between 2014 and 2015 in level, just slightly tougher than last year. Cut-offs for this should be somewhere in the 50s, I would feel, with 75 being an excellent score.

The DILR section had 8 sets of 4 questions each, with more DI than LR. This was a deceptive section; while it seemed easier than last year’s, I felt it was just as tough. Sure, the sets were easier to understand, but they were also far more time-consuming, with the later questions of practically every set containing additional information and requiring rework. Also, a lot of people tend to prefer LR to DI, so they were frustrated by the shortage of LR. If you aimed at attempting 3 or 4 sets, it was easier than last year; if you planned to attempt all 8, it was tougher. Cut-offs might be low again this year; even 14-15 good attempts and a raw score in the low 30s might prove enough, provided accuracy does not let you down.

The QA section caught a lot of people off guard, given the very easy QA in 2014 and 2015. While not as nasty as the toughest paper-based tests (such as 2007), the paper was not all sitters either. As in 2012 and 2013 (which, however, were 20-question sections), there were mostly medium-level questions, with a leavening of sitters and a sprinkling of really nasty questions in between. One could make a hearty meal of this section if one managed to avoid breaking one’s teeth on those 5-6 speed-breakers. I found it a very well-balanced section, which forced one to think. No questions required abstruse knowledge, but plenty required basics and care. As has been the recent trend, Arithmetic was the single biggest chunk in the paper. Numbers was more prominent than it was last year, at the expense of Algebra, which almost vanished. Geometry was marked by some unusually tough questions.

Unfortunately, the square-root/pi fiasco, which affected 3-4 questions in either section, marred the otherwise high standard of the paper. Additionally, there was reportedly one wrong question in the morning slot (unacceptable in a test like this, but no one in charge seems to care!). Had it not been for these glitches, it could have been one of the better QA papers in several years. As it is, I suspect the cut-offs will be noticeably lower than they were last year; a score in the early 30s might prove acceptable from a cut-off point of view, though the Quant wizards might still get 75 or more.

Overall, given two difficult sections out of three, I expect raw scores to drop from last year (and the magnitude of scaling to consequently be larger). I suspect a 170 would prove to be a very good score, and even a 125-135 might prove sufficient for a few good calls*. The top raw scores should still be close to 240, I would guess (and scale to nearly 280), but I’m going to go out on a limb here and say that in my personal opinion, a raw score of less than 140 might be enough for a 99 percentile (though it will scale up to around 170). Don’t get your hopes up too much, though; nearly every expert opinion I have seen pegs it much higher than that.

regards
J

*(Statutory warning copied from my last year’s blog: (a) I am talking of raw scores, not scaled, and (b) these are just guesses, and I have no particular statistical evidence of how accurate they might be. However, if I don’t put in some estimate here, the comment section is going to be flooded with variants of “I score xyz, how much percentile will I get?” I might as well say straightaway that I will not answer any such queries. Your guess is as good as mine.)

CAT 2016 – Game of Stools

I wasn’t planning to write an analysis this year, but several people have asked me for one and I figured it would be simpler to type it out once rather than again and again. As in the past, I’ll divide this into two posts, this one detailing the overall test-taking experience (which could be of use to next year’s candidates, I suppose) and another short one with my take on the test structure and level.  Some of you might directly wish to jump to the other one 🙂

Pre-test procedure:

I wrote CAT ’16 in the afternoon slot, at ARMIET Shahapur (ARMIET being Alamuri Ratnamala, um, something something… this college has a name a South Indian could envy). Shahapur being a little beyond Asangaon – to which the local train frequency is abysmal – I had to leave by 10:30 to reach by 12:15. The whole train was full of CAT-takers, as the next train was scheduled an hour and a half later. We were milling around outside till 1:15 (in extremely hot and dry weather), resulting in headaches and grumpy faces galore. In the meantime, some people got calls and messages from those in the first slot and got some inkling of the now-infamous pi-root confusion. Also, there were rumours that the DI section was easy, which seemed to make people happy.

Officially, we were allowed to carry only the admit card (to repeat what I said last year on the topic of admit cards, please make sure the print is decent; black and white is fine, but the photo should resemble you and the signature should be reasonably clear, and you need to stick one recent colour photo on the card) and an ID proof. The security check was surprisingly lax, with no proper frisking and people with bags wandering all over the place trying to figure out where to go for the photo/thumb-impression procedure (at least two people in my lab had carried their own pens, and one his wallet). Also, the labs were embarrassingly ordinary, with no AC and with three-legged backless stools instead of chairs. The first computer I was given did not start up. After half an hour of increasingly irritated hints and reminders to the invigilators, I was finally allotted another (which turned out to have a mouse issue – more on that later). As in the past couple of years, everyone was handed a sheet of A4 paper and a pen. One could ask for more paper if one so desired, but I stuck to my policy of environment-friendliness and managed with just the one sheet.

During the Test:

I will talk about my own experience further down, but first a few general points worth noting:

  1. Scoring: the test instructions stated, for MCQs, +3 for a correct answer, -1 for a wrong one and no penalty for unattempted questions; for TITA, +3 for a correct answer and no marks deducted for a wrong one. However, the individual questions mentioned +1 / -0.33 and +1 / 0 respectively. It should make no difference; either way, the marks will be scaled to 300, I suppose (see the quick sketch after this list). Still, this apparently rattled quite a few people.
  2. The sections were not further subdivided – last year VA had two sub-tabs for RC and VA and one could freely move between those during the available 1 hour. Similarly the DILR section had separate tabs for DI and for LR. This year each section was all in a single lot.
  3. The question palette was adjustable: it could be shrunk to the side on a click, and brought out again on another click. In theory, this was a nice idea, but in practice the implementation fell a bit short: every time one clicked an answer, the entire palette took a second to refresh. Consequently, the interface was not as smooth as in the past.
  4. The calculator was a fairly basic one, unlike the (useless) scientific one of last year.
  5. As in the mock, there were more DI and fewer LR questions, much to the dismay of the majority who prefer LR. Also, once more there were 24 RC questions and reading skills were at a premium in the Verbal section. 
  6. The number of TITA questions was reduced from last year.
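For what it’s worth, the two marking schemes in point 1 come to the same thing once you scale the +1 / -0.33 version by a factor of 3 (assuming the displayed -0.33 is just a rounded -1/3). A quick sketch with made-up numbers of my own:

```python
# Hypothetical illustration, not official scoring: the two schemes displayed
# in CAT 2016 coincide once scaled, assuming -0.33 is a rounded -1/3.
def raw_score(mcq_right, mcq_wrong, tita_right, per_right, per_wrong):
    # TITA questions carry no negative marking under either scheme.
    return per_right * (mcq_right + tita_right) + per_wrong * mcq_wrong

# Made-up performance: 40 MCQs right, 10 wrong, 12 TITAs right.
per_instructions = raw_score(40, 10, 12, per_right=3, per_wrong=-1)      # +3 / -1
per_questions    = raw_score(40, 10, 12, per_right=1, per_wrong=-1 / 3)  # +1 / -0.33

print(per_instructions)   # 146
print(3 * per_questions)  # 146.0 -- identical once brought to the 300-mark scale
```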

For me, personally, the act of actually taking the test turned out to be easily the most irritating testing experience I have had over the past few years. It started off smoothly enough, as the VARC section seemed easy. However, a few questions into the section I realised that something was seriously wrong, as many questions I had answered were showing as unmarked. After a few minutes of frantic experimentation I found the problem: the mouse was double-clicking most of the time. So when I marked an option, it got marked and unmarked again in the same click. This made for a frustrating experience for the rest of the test, wherein I would click, check to see if it registered, try again…. In some cases it took as many as 5-6 attempts to get a question answered. And of course, it meant the calculator was impossible to use, because typing a number like 1569 gave a result like 155669. Also, at the end, the mouse seems to have unmarked one question in VA, as the final tally showed 33 and not 34 attempted.

On the whole, other than raising my blood pressure, this did not affect me much in the QA and VA sections, since I usually have time left over in these (though I could not check my answers at the end as I normally do). In DILR, though, I rarely have spare time (less than 2 minutes, last year) and additionally had to calculate everything manually. As a result I ended up leaving 5 questions there: 1 set and 1 extra question. Still, the set I left was arguably the nastiest of the lot (most people I have heard from seem to have left it even after trying it), so I suspect no great loss there. Overall I ended with 94 attempts, the first time since the CAT went online that I was unable to attempt everything. The questions I left were anyway the ones which seemed the nastiest, so it might not have much adverse effect on my score. However, that is kind of beside the point.

Had I been a serious aspirant, such an experience would certainly have severely hurt my performance. The frustration alone would have been traumatic. Add to it the lax security, the terrible seats, the announcements on the PA during the test – on the whole it left a lot to be desired, a disappointment given the much better experience of the past two years with TCS. In fact, my worst experience since the 2009 debacle. And my disenchantment was not over yet….

After the test:

Again, a long journey back (I got home at 8:30 pm eventually). Eventful, too, as the saga of the Facebook posts during the test was all over social media by then. I wish it could be brushed aside as a one-off aberration, but having seen the casual nature of security in my centre and heard what happened in other places….

I think the IIMs need to take a long hard look at these problems for next year, even if they choose not to publicly admit that anything went wrong. Anyone can say “concluded successfully” and “detected and dealt with” and brush it off. But it needs to be true as well. The trust of people in the sanctity of the test can be pushed only so far; and it is in the IIMs’ own long-term interests to maintain a certain standard. It is easier to maintain a reputation than to rebuild it.

I will shortly put up another post with my take on the paper. For a couple of other points of view, check out T’s post at CAT 2016 and V’s post at CAT 2016.

regards
J

A brief history of CAT 2015

As I did last year, I’ll divide this into two posts, this one detailing the overall test-taking experience (which could be of use to next year’s candidates, I suppose) and another short one with my take on the test structure and level.  Some of you might directly wish to jump to the other one  🙂

Pre-test procedure:

I wrote CAT ’15 in the morning slot, at Mira Road (Shree L. R. Tiwary College of Engineering). Since this involved a train journey of close to 2 hours, it meant a 4:30 wake-up. As it turned out, I slept at 2 am, so I was not in the most cheerful of moods when I awoke. Some extremely strong coffee helped (a little) and I managed to push myself out of the house. I reached the venue on the dot of 7:30, and within 5 minutes the gate opened, letting us in with the standard basic checks. I believe people were allowed in (in my centre at least) till 8:15 or later, but I don’t know for sure. Also, an interesting development this year was that there appeared to be separate centres for male and female candidates. My centre had some 6 labs with well over a hundred candidates in all, I would estimate. I met a few friends there and we passed the time chatting while waiting to be let in.

The registration process was pretty smooth as usual – a quick webcam mugshot and Left Thumb Impression – and then we were directed to our seats and had about 45 min to kill while waiting for the test to start. As always, you cannot carry anything personal inside (people were not allowed even jewellery, apparently). Bags and other worldly possessions were to be left just outside the lab (no shelves etc) but as far as I am aware there were no issues with that. Strangely, this time we were also asked to leave our shoes outside (I suspect, though, that this was a requirement of the specific centre and not the CAT). We were allowed to carry only the admit card (to repeat what I said last year on the topic of admit cards, please make sure the print is decent; black and white is fine, but the photo should resemble you and the signature should be reasonably clear, and you need to stick one recent colour photo on the card) and an ID proof.

The system provided was good, the seating space was quite comfortable even for a portly gentleman like myself and the mouse worked just fine. This year, unlike last time, they got the “signature in presence of invigilator” stuff done during the last 15 minutes of this time rather than after the test had started (I found that very irritating last year!). Everyone was handed a sheet of A4 paper and a pen (looks like this is going to be the standard for the TCS regime – those who were habituated to pencil and eraser solving would probably have been a bit miffed). One could ask for more paper if one so desired, but I stuck to my policy of environment-friendliness and managed with 1 sheet.

During the Test:

The interface was smooth, with no significant glitches. A few points worth noting:

  1. Scoring: the test clearly and unambiguously stated, for MCQs, +3 for a correct answer, -1 for a wrong one and no penalty for unattempted questions; for TITA, +3 for a correct answer and no marks deducted for a wrong one.
  2. The initial instructions (probably copy-pasted from last year’s) said that RCs would have 4 questions each and DI/LR sets could have 2 or 4 questions. However, the actual test proved to have RCs with 3 or 6 questions and LR/DI with 4 questions, as promised in the Mock Test uploaded on the CAT site.
  3. The first two sections were further subdivided – VA had two sub-tabs for RC and VA and one could freely move between those during the available 1 hour. Similarly the DILR section had separate tabs for DI and for LR.
  4. As in the mock, there were 24 RC questions. I did not expect them to actually go ahead with such a pattern. I like RC, so I was quite happy with it, but those who hate reading must have had a miserable time (especially since the rest of the VA questions, too, were paragraph-based).
  5. There were as many as 33 TITA questions – 10 in VA, 8 (2 complete sets) in DILR and a whopping 15 in QA. This made things more time-consuming on average as uncertainty crept in (especially in the Parajumbles, which had 5 sentences each).
  6. When a question was answered and marked for review, it was not listed in the “answered” count obtained by hovering over the section name. However, we are assured that those questions (indicated on the right by a violet dot with green tick) will also be evaluated.
  7. When the 60 minutes were up, the test automatically skipped to the next section.

Once it was over, we all trooped down to hand in our rough paper and pens, and dispersed – in most cases, it seems, muttering rude things about the LR-DI section. (Note: please don’t forget to take along your ID proof while leaving – as you would probably have pushed it into some corner of the desk, out of your way, it is surprisingly easy to forget.)

I will shortly put up another post with my take on the paper. For a couple of other points of view, check out T’s post at “CAT 2015 analysis same wine in three bottles” and V’s post (added bonus – advice for the path ahead now that CAT is done) at “dilrwale cat15 le jayenge”

CAT 2014 – My Take

Here’s my take on the CAT as I perceived it (16th morning slot). Note that the opinions expressed are entirely my own 🙂 Also it is slightly long…not that that will surprise anybody!

Overall Structure: There were 4 sets of 4 questions each of DI, LR and RC. 34 singleton questions in QA and 18 in VA rounded things off.

QA: As many people have noted already, the QA was pretty easy. However, it was not the cakewalk some have made it out to be; there was the typical emphasis on testing the basics with deceptively simple but very precisely worded questions (and as always there were a few elegant traps in the finest tradition of CAT). The topics covered all the usual suspects (Geometry, Algebra, Arithmetic, Numbers and Modern Maths all had significant representation) and no really unusual ones (no, after 40 years CAT has still not seen fit to ask a question requiring Pick’s theorem or Fermat’s Little Theorem, much to the sorrow of those who have been studying such stuff faithfully). It would seem CAT still rewards those who stick to the basics, but do them really thoroughly.

DI: While not exactly difficult, most of the sets would have troubled those who had only learned to handle standard data presentation forms; they required quick analysis and structuring of significantly non-standard data formats. Time-consuming, for sure, but a pleasure to attempt if you like that sort of thing.

VA: Much to my chagrin, direct Vocab-based questions remained elusive for the second year in a row. Instead, Grammar, Parajumbles, Critical Reasoning (Inferential), Incorrect Sentence in the Para, and Summary questions made up the numbers. I felt that a little over a third of them were pretty straightforward (a pleasant surprise after last year’s VA where nearly every question gave me a headache).

RC: The RC section was surprisingly pleasant, with passages of a very reasonable length and on topics which did not put one to sleep (philosophy, I’m looking at you here!). The questions, too, were not as ambiguous as they have typically been in recent years – in many cases I could arrive at an answer without doubt or hesitation, which is unusual, at least for me! Those people who neglected the RC section in this one out of habit are probably going to live to regret it – this could have been a good scoring area even for someone who is not an English maven. Given the level of LR, the CAT RCs were catharsis, you might say.

LR: Even more so than in DI, the LR sets were non-standard. Only one of the 4 sets could be described as straightforward – unfortunately that was also the longest and had the most conditionalities and hence a good number of people ignored it totally. Two of the sets were quite impressively tricky to grasp. I found them refreshing and challenging, and unusually, even after solving them I was not confident of my answers (which rarely happens to me in LR). Certainly the most daunting area in this slot.

Overall, the DI and LR together called to mind the heyday of CAT’s DI and LR (during 2002-2008) in terms of the precision of wording, the skill-sets and the quick thinking required, while at the same time being entirely new in the specifics (which obviously I cannot talk about here!). The closest comparison I can draw is CAT 2006, where the Pathways set and the Erdos number set, while relatively quite easy, confounded most test-takers by being totally unlike anything they’d seen before.

In the QA and DI section, my personal take is that a score of between 65 and 75 would probably be an acceptable performance. (This would probably require 30+ attempts with a pretty decent accuracy, quite achievable under the circumstances.) In the VA section, a score of 60-65 should prove sufficient; possibly as low as 50, since a lot of people who were relying on LR underperformed horrendously.

Now to address some of the interesting statements I’ve been encountering, the FAQs you could say:

FAQ 1: The paper was so easy, 98%ile cutoffs will go to 200, I have heard

No. Really, people, no. Easy or not, 150 would be a fair score and 175 an awesome one in any paper. 98%ile means close to 4000 people (the top 2% of some 2 lakh candidates); even in the easiest of the Sims, after all, you rarely saw a 200. The idea that 4000, or even 400, people would be able to hit that level seems very improbable, to be frank.

FAQ 2: But so many people are posting scores like “84 attempts with 90% accuracy”…

Yes, they are. So are they more foolish for posting those, or are you more foolish for being gullible enough to swallow those estimates? People are notoriously bad at estimating how well they have done – and over three-fourths of people tend to overestimate (in public, at least). Ask yourself these questions:

  • Whenever you have written SimCATs in the past, after you submitted, but before the score appeared, you must have made some kind of estimate of what you expected. How often was this accurate (or even in the same ball-park, really)?
  • How many people do you know who can actually manage a 90% accuracy reliably? (I can’t. And I have been doing this stuff for over a decade and a half). In QA, perhaps. But given the subjective nature of VA, even 80% there requires some luck.
  • Assume that your friend is speaking the truth and actually is sure that his 84 attempts have 90% accuracy. He must therefore have known that around 8 questions were wrong. Why did he mark them, then, I wonder? (And look at the raw score such a claim implies; see the sketch below.)
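Here is a quick back-of-the-envelope sketch (the claimed figures are invented for illustration) of what such a boast implies at the stated +3 / -1 marking:

```python
# Back-of-the-envelope: the raw score implied by a claimed attempt count and
# accuracy, at +3 for a correct answer and -1 for a wrong one.
def implied_score(attempts, accuracy):
    right = round(attempts * accuracy)   # 84 attempts at 90% -> about 76 right
    wrong = attempts - right             # ...and about 8 wrong
    return 3 * right - wrong

print(implied_score(84, 0.90))  # 220
```

In other words, the claim amounts to a raw score of 220; and as I said under FAQ 1, even the easiest of Sims almost never produced a 200.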

If you are still not convinced, try a little experiment. Chances are that some of you who are taking the test on Saturday will be taking a last practice test today or tomorrow. If so, do me a favour – after the time is up but before you submit, write down on a piece of paper your attempts and your estimate of how many you got correct and wrong in each section and overall. Then submit and see how accurate you were.

FAQ 3: So then what about those people who are getting 99.99 in percentile predictors?

Percentile predictors, even if accurate (and that’s another kettle of fish), depend on the accuracy of the input you give them. Garbage in, garbage out. And as pointed out above, most people’s estimates of their accuracy are greatly exaggerated.

And while some percentile predictors at least try to give an honest opinion, most are like fortune-tellers; they tell you what you want to hear. They rely on the human tendency to be flattered; if one tells you that you are going to get 88 and the other says 97, and you actually get an 89, you will still remember the latter one more fondly despite it being wildly inaccurate. As a result, you have loads of people who are joyfully shouting from the rooftops that it has been predicted that they will get a 99.99 or similar (never mind that they haven’t actually crossed 90 in a single practice test so far).

However, stop a moment and think – if 2 lakh people take the exam, only 30 people or fewer will actually achieve a 99.99 or more (after all, 0.01% of 2,00,000 is just 20).

FAQ 4: But isn’t 150 too low? QA cutoffs will go to a 100, surely?

You would think so, but it probably won’t even come close. What most of us seem to forget is that the majority of people are scared of maths. Even easy maths. They come in with an aim of “20 good attempts” and even faced with an easy paper, they rarely go beyond 30, if that. The textbook example which is the closest comparison would be of CAT 2006, which had a tricky DI/LR section, but featured a QA section which was at least as straightforward as Sunday’s (and what’s more, 2 minutes per question, on a paper-based test; more than what we have here). Yet the QA cut-off for a 95%ile score was under 40 out of 100. Assuming that people haven’t miraculously gotten smarter in the decade since (a safe assumption) I don’t see a comparable cut-off crossing 70 this time.

FAQ 5: I’m writing the paper on the 22nd. Will the level and breakup of questions be the same?

Short answer: we don’t know 🙂

To the best of our knowledge, the level and breakup varied slightly between the two slots on Sunday – the LR was noticeably easier and the QA almost certainly a bit tougher, for example. And a sub-area which featured 3 questions in the morning had none in the afternoon. So for all we know, the papers on the 22nd might feature vocab or DS (or maybe even Pick’s theorem, though I’m betting against that). You try to predict the CAT at your peril!

My gut feeling is that the overall level of the paper will not change too much. However, the “difficulty distribution” might well undergo a drastic revision – for all you know the LR might be easy-peasy arrangements while the RCs might feature Spinoza, Kant and Freud. Or even good old Derrida. My only advice on this (and that hasn’t changed in its broad essence over the past ten years) is “don’t carry any pre-conceived notions with you”. As C. P. Cavafy says in his lovely poem “Ithaka”:

      “Laistrygonians and Cyclops,
      wild Poseidon—you won’t encounter them
      unless you bring them along inside your soul,
      unless your soul sets them up in front of you.”

If you go there expecting easy QA, and it turns out tough, then you might panic and end up missing the easy LRs that accompany it. Or the easy RCs. As happened on the 16th, with those poor souls who had pre-decided “I will do all the LRs and not look at the RC” and who, even now, are probably regretting their rigidity. Have a plan by all means, but be prepared to change it at a moment’s notice if necessary. Flexibility might be crucial to survival. As I am fond of quoting, “no battle plan survives the moment of first contact with the enemy”.

And of course, don’t forget that most invaluable piece of advice from the Hitchhiker’s Guide to the Galaxy:

DON’T PANIC

regards

J

CAT 2014 Experience

I wasn’t actually planning to put a “CAT Experience” post, but since many people have requested one, here goes. As many people have said, the CAT was not that tough this year, a sight for sore eyes…

Soft kitty

I’ll divide this into two posts, this one detailing the overall test-taking experience and another one with my take on the test level and what it might entail for future slots. This will also give me an opportunity to address many rumours and fears which seem to be proliferating in the aftermath of Sunday’s slots.

Some of you might directly wish to jump to the other one 🙂

Pre-test procedure:

I wrote CAT ’14 in the morning slot on Sunday. My friend and I arrived at 7:15. We were let in at around 7:45 or a little after, with a first round of basic checks, i.e. admit card + ID proof. (I believe people were allowed in till a little past 8:15 at my centre. However, don’t take risks on this – there were reports from some centres of latecomers being summarily ousted. That extra twenty minutes of sleep could cost you a year.) Unlike in previous years, there was a board outside with a list of names and the allotted labs/computers. There were two labs at my centre, with nearly a hundred students.

Things were pretty well organised at the centre (Aruna Manharlal Shah Institute, Ghatkopar, in case you’re wondering) – they had even opened the cafeteria so that once we got in, we could refresh ourselves with some basic breakfast (dosa and chai, in my case). After a jolly half an hour there laughing at the people doing frantic last-minute mugging, we went up to the second floor where the labs were. There was a registration room for the final formalities, and waiting rooms to await our turn (these rooms were where we were required to deposit our worldly goods, such as they were; no tokens were provided, but as far as I know everyone got their stuff back without incident).

When leaving the waiting room, you were allowed to carry only the admit card (on the topic of admit cards, please make sure the print is decent; black and white is fine, but the photo should resemble you and the signature should be reasonably clear, and you need to stick one recent colour photo on the card) and an ID proof. A quick webcam mugshot and Left Thumb Impression later, we were directed to our hot seats.

The system provided was unexceptionable with a large and clear screen and a responsive mouse. The space between rows was cramped, though, and the icing on that cake was that the on/off switches for the computer were located on the floor directly beneath the monitor. Right where I would normally put my feet. Which means I spent the entire test with my legs carefully cramped as far back as they would go. Non-ergonomic, to say the least!

Once in our seats, we had another short wait – during which we were handed a single sheet of A4 paper (don’t worry, you can ask for more if you like; they keep count and you have to submit them all at the end) and a single ballpen. We were asked to enter our passwords and then wait till the server reckoned it was 9:30 and told us to start. (The keyboard, although present, is not meant to be touched – one enters one’s password via the mouse using an onscreen keyboard. We were assured that touching the keyboard would stop the test and put your session in jeopardy; not surprisingly, none of us tried the experiment.)

During the Test:

The interface was smooth, no significant glitches. A few points worth noting:

  1. Scoring: the test clearly and unambiguously stated: +3 for a correct, -1 for a wrong and no penalty for unattempted questions.
  2. When a question was answered and marked for review, it was not listed in the “answered” count obtained by hovering over the section name. However, we are assured that those questions (indicated on the right by a violet dot with green tick) will also be evaluated.
  3. Highlighting feature was absent in the RCs (I don’t use it myself, but those who rely overmuch on it should probably beware)
  4. In some RCs, a curious thing happened – three questions were asked, then an LR set came up, and then the same RC popped up again with a 4th question. We don’t know yet whether this was intentional or fortuitous, whether it was a feature or a bug.
  5. There were colourful graphs in the DI section. I don’t know about other folks, but I was quite cheered by them!
  6. There were frequent interruptions – a couple of attendance sheets were passed around, the invigilators came to collect the admit cards mid-test (you have to sign in their presence and hand over the card), and a few times the invigilators’ phones made weird noises. So be prepared to keep a firm grasp on your concentration.
  7. When the 170 minutes were up, the test automatically stopped (making the submit button possibly the most redundant piece of coding I have seen in years) and the screen showed a summary of attempts in each section and overall. Then we all trooped down to hand in our rough paper and pens, and walked free into the wide open spaces. (Note: please don’t forget to take along your ID proof while leaving – as you would probably have pushed it into some corner of the desk, out of your way, it is surprisingly easy to forget.)

regards

J

 

Data Interpretation – Strategy

When attempting a test like the CAT, a fair number of people follow the (mindless) strategy of “attempt question 1, move to question 2 only after that is done, then question 3…” and so on, often thereby failing to see all the questions. One should remember that one need not win every single battle to win the war. Given that most people (even most people who will eventually make it!) will not attempt all the 100 questions, your “shot selection” becomes very crucial. You need to recognise – as Rahul Dravid used to do so elegantly – which balls to hit and which ones are best left alone, and the faster you can judge this the better.

This becomes all the more crucial in the “set-based” questions (DI, LR and RC) as it generally takes significantly longer to judge these. A typical singleton question (such as a parajumble or remainder or PnC or vocab-based one) can be weighed in 20-30 seconds, and judiciously left without feeling too bad about it. But a set can take up to 3-4 minutes just to understand in detail; and if after spending so much time you realise that you have not been able to decipher it, it can induce a panic.

In the past 5 years, there were 30 questions in each section, with around 9-10 questions (or 3 sets) each of DI and LR. This year, there are likely to be 4-5 sets, and as many as 15-18 questions, of each. This means that, as in days of yore, your choice of which sets to attempt first could become crucial. In this post, then, I will try to look at possible mechanisms and criteria to help in the decision-making process (as always, this is indicative and you will have to adapt it to your own areas of strength and weakness) so that you can pick and choose which sets to attack first without actually getting into the nitty-gritty details.

Here are some questions you could ask yourself:

1) Is the given data in a familiar form or in an unfamiliar or haphazard state? And is it precise or ambiguous?

If the data provided is in a standard table / line graph / pie chart and is complete and precise, then even if there is quite a lot of data it should still be very manageable. However, if

  • the data is provided in an unfamiliar form – a histogram, a scatter-chart, a cumulative table, or some even more esoteric format (which means you would require an inordinate amount of time just to understand how to interpret it properly)
  • there is some data missing (which means solving the question might entail a lot of pre-work)
  • the data is not precisely readable, such as a bar- or line-graph where the values can only be approximately estimated to around 5-10% accuracy (which means that even with your best efforts accuracy cannot be guaranteed)
  • the data is presented as a caselet (which means you might have to spend precious time at the start to bring it into a manageable form)

then it might be worth leaving the set for later.

2) Is there additional data provided in the questions?

I’ve observed that a lot of people base their judgement of the difficulty level of a set solely on the pre-information before the questions. In my opinion, though, the questions are almost always worth a dekko; if they are straightforward queries like “Who is sitting next to Mr Sivaramasubramanian?” or “In which year is the profit of Megahard Corporation the maximum?” or “How many people failed in maths?”, and the options are not of the “Cannot be determined” flavour, then it is a pretty fair indicator that the set is going to give you definite answers in one go.

But if you see questions such as “If Germany defeats Brazil 7 – 0 in the final round*, then who will end up in 3rd place overall?” or “If in 2014, Froogle reports a 10% growth in sales and an 8% growth in costs over 2013, then what will be their percentage profit in 2014?” or even more flagrantly evil question-types like “Which of the following cannot be true?” giving three statements and options like “I and II only”, “All of I, II and III” and so on (which effectively means you have to solve 3-4 questions for the price of one), then I would recommend you skip lightly on to the next set and return later.

* This example is a work of fiction and any resemblance to any real-life match is purely coincidental. 

3) Does the set lean more towards the calculative or the reasoning based?

This can be a powerful decision point, depending on your skill-sets. A set which involves intensive calculation is unlikely to go out of its way to confound you with traps, while one which involves simple numbers will often require careful reading and weighing of alternatives. (Personally, calculation is something of a strength for me and hence I choose to do the calculative sets relatively early on, but as I said earlier, this decision has to be based upon your knowledge of your own strengths and weaknesses.)

4) How far apart are the answer choices?

This is a useful question to ask yourself when deciding between two calculative sets. If the answers are far apart – on a percentage rather than absolute basis, mind you! – then you can approximate fairly wildly and still arrive at a correct answer confidently (for example, (3, 7, 12, 18) or (35.7%, 52.8%, 88%, 122%)). However, if the answers are close together, then your calculations must needs be carried out with nit-picking accuracy, and this will affect your speed as well (for example, (62375, 62525, 62845, 63635) or (35.7%, 36.8%, 38.7%, 39.9%)).

5) How many questions are there in each set?

If the sets are similar on the above parameters, then this could be a tie-breaker – a set with 4 questions will give you more value for money (i.e. more marks as a result of the time spent on it) than a similar set with 3 questions.
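For those who like their checklists explicit, here is one crude way of putting the five criteria above together as a triage score. This is purely my own illustration; the attribute names and weights are invented, and in the actual test you would run this checklist in your head, not on a computer:

```python
# Hypothetical triage: rate each set on the five criteria above and attack the
# highest-scoring ones first. Weights are invented for illustration; tune them
# (especially criterion 3) to your own strengths and weaknesses.
def triage_score(s):
    score = 0
    score += 2 if s["familiar_precise_data"] else -2  # 1) standard vs esoteric/fuzzy data
    score += 2 if s["direct_questions"] else -2       # 2) direct queries vs "if X then Y" rework
    score += 1 if s["calculative"] else 0             # 3) flip this if reasoning is your strength
    score += 1 if s["answers_far_apart"] else -1      # 4) room for wild approximation
    score += s["num_questions"] - 3                   # 5) more questions = more value for time
    return score

sets = [
    {"name": "Pie chart, direct questions", "familiar_precise_data": True,
     "direct_questions": True, "calculative": True, "answers_far_apart": True,
     "num_questions": 4},
    {"name": "Caselet, conditional questions", "familiar_precise_data": False,
     "direct_questions": False, "calculative": False, "answers_far_apart": False,
     "num_questions": 3},
]

for s in sorted(sets, key=triage_score, reverse=True):
    print(s["name"], triage_score(s))
# Pie chart, direct questions 7      -> attack first
# Caselet, conditional questions -7  -> keep for later, if at all
```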

Try and apply the above criteria to a set in real-time, under test conditions. An excellent case-study would be the DI-LR section from CAT 2008, which had 7 sets covering a wide range of types and difficulty levels. If possible, in a future post I shall try to do a video analysis of the same and roughly demonstrate how one could have approached the section during the test.

regards

J