CAT 2016: A good but flawed test

In my previous post, I mentioned that the shoddy infrastructure and unprofessional management of the test were extremely disappointing. The actual test itself, though, was mostly refreshing; it felt, one might say, like CAT (which has not always been the case in the online papers). With two difficult sections, even the well-prepared knew they had been in a fight this time.

The overall structure was close enough to the mock they had uploaded. The first section stayed easy, as has been the case for the past couple of years. Again, there were 24 RC questions and 10 para-based VA questions, a structure which, in my opinion, leaves a lot to be desired. Again, all 10 VA questions were TITA, but only in the parajumbles did that actually matter, as the “odd sentence out” and “summary” types were really just MCQs where you had to type instead of clicking (with the added bonus of no negative marks!). Again, as has been the case for the past two years, the passages and paragraphs were all very readable, covering a wide range of interesting stuff. In other words, there were no surprises for the well-prepared, and so a good attempt in this section should have been well over 20 (and of course any unsolved TITAs should have been attempted on principle, as there was nothing to lose). I felt VARC-2016 to be between 2014 and 2015 in level, just slightly tougher than last year. Cut-offs for this should be somewhere in the 50s, I would feel, with 75 being an excellent score.

The DILR section had 8 sets of 4 questions each, with more DI than LR. This was a deceptive section; while it seemed easier than last year’s, I felt it was just as tough. Sure, the sets were easier to understand, but they were also way more time-consuming, with the later questions of practically every set containing additional information and requiring rework. Also, a lot of people tend to prefer LR to DI, so they were frustrated by the shortage of LR. If you aimed at attempting 3 or 4 sets, it was easier than last year; if you planned to attempt all 8, it was tougher. Cut-offs might be low again this year; even 14-15 good attempts and a raw score in the low 30s might prove enough, provided accuracy does not let you down.

The QA section caught a lot of people off guard, given the very easy QA in 2014 and 2015. While not as nasty as the toughest paper-based tests (such as 2007), the paper was not all sitters either. As in 2012 and 2013 (which, however, were 20 question sections), there were mostly medium level questions, with a leavening of sitters and a sprinkling of really nasty questions in between. One could make a hearty meal of this section if one managed to avoid breaking one’s teeth on those 5-6 speed-breakers. I found it a very well-balanced section, which forced one to think. No questions requiring abstruse knowledge, but plenty requiring basics and care. As has been the recent trend, Arithmetic was the single biggest chunk in the paper. Numbers was more prominent than it was last year, at the expense of Algebra, which almost vanished. Geometry was marked by some unusually tough questions.

Unfortunately, the square root – pi fiasco, which affected 3-4 questions in either slot, marred the otherwise high standard of the paper. Additionally, there was reportedly one wrong question in the morning slot (unacceptable in a test like this, but no one in charge seems to care!). Had it not been for these glitches, it could have been one of the better QA papers in several years. As it is, I suspect the cut-offs will be noticeably lower than they were last year; a score in the early 30s might prove acceptable from a cut-off point of view, though the Quant wizards might still get 75 or more.

Overall, given two difficult sections out of three, I expect raw scores to drop from last year (and the magnitude of scaling to consequently be larger). I suspect a 170 would prove to be a very good score and even a 125-135 might prove sufficient for a few good calls*. The top raw scores should still be close to 240 I would guess (and scale to nearly 280) but I’m going to go out on a limb here and say that in my personal opinion, a raw score of less than 140 might be enough for a 99 percentile (though it will scale up to around 170). Don’t get your hopes up too much though; nearly every expert opinion I have seen pegs it much higher than that.


*(Statutory warning copied from my last year’s blog: (a) I am talking of raw scores, not scaled and (b) these are just guesses, and I have no particular statistical evidence of how accurate they might be. However, if I don’t put in some estimate here, the comment section is going to be flooded with variants of “I score xyz, how much percentile will I get?” I might as well say straightaway that I will not answer any such queries. Your guess is as good as mine).

10 thoughts on “CAT 2016: A good but flawed test”

    • I know! I was not planning to post at all this year, but so many people were asking me, on PG and elsewhere, that I finally decided it would be simpler to post this and direct everyone here 🙂 Definitely beats typing it out again and again eh?


  1. I wonder how the scaling would be done, considering slot 1 had a wrong question and 3-4 questions with the pi-root fiasco, versus no mistakes in slot 2?
    The IIM-B panel hasn’t accepted that there was a mistake regarding the same, have they? Guess they were busy trying to live up to their MHRD ranking of No.1 but failed miserably in terms of CAT experience for aspirants. IIM-A was a lot better last year.

    • I think you are confusing scaling with normalisation. In an equipercentile system, your percentile will be determined only by comparison with others in your slot (who took the same test you did – and faced the same wrong questions). So you needn’t worry about that, at least. Also, incidentally, the pi-root fiasco happened in both slots for 3-4 questions. Accepting errors, alas, is not something the IIMs have ever learned to do gracefully (or at all) – 2009, which was also A, was far worse than this and yet they too insisted that all was hunky-dory. My advice: don’t waste energy on what you cannot change, focus on the tests ahead.


      • How do they give a common percentile by comparing performances from slot 1 and slot 2? There has to be something which can rate them both on a common scale, unless they did it by rating slot 1 and slot 2 independently and then combining them on a common scale?

      • Assuming an equal number of people in each slot: in a given section, the person ranked, say, 100 in slot 1 and the one ranked 100 in slot 2 will have the same percentile. Similarly, the 1000th rank in each slot (or any other rank) will have the same percentile. Pretty straightforward till that point. The marks will be scaled to match, using a reasonably complicated procedure which, believe me, you do not wish to know about at this point. If you want to know more, google “equipercentile normalisation” – but if you find it abstruse don’t say I didn’t warn you 😛
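      A toy sketch of the rank-matching idea described above (all scores here are made up, and the IIMs’ actual procedure involves smoothing and is far more elaborate – this only illustrates that the k-th ranked candidate in each slot ends up at the same percentile):

```python
# Toy illustration of equipercentile matching between two equally sized slots.
# The raw scores below are hypothetical, purely for demonstration.

def percentiles(scores):
    """Percentile of each score within its own slot (fraction of the slot scoring below it)."""
    n = len(scores)
    ranked = sorted(scores)
    # Distinct scores assumed, so ties are not handled here.
    return {s: 100.0 * ranked.index(s) / n for s in scores}

slot1 = [92, 81, 75, 60, 45, 30, 22, 10]   # hypothetical raw scores, slot 1
slot2 = [88, 84, 70, 58, 50, 33, 20, 12]   # hypothetical raw scores, slot 2

p1, p2 = percentiles(slot1), percentiles(slot2)

# With equal slot sizes, the candidate ranked k-th from the top in slot 1
# gets the same percentile as the one ranked k-th from the top in slot 2,
# even though their raw scores differ.
for k in range(len(slot1)):
    s1 = sorted(slot1, reverse=True)[k]
    s2 = sorted(slot2, reverse=True)[k]
    assert p1[s1] == p2[s2]
```

      Mapping the raw marks themselves onto one scale (so that, e.g., slot 2’s 88 is treated like slot 1’s 92) is the “reasonably complicated procedure” mentioned above, typically done by interpolating between these matched percentile points.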


  2. How will the scaling be done, considering slot 1 had at least 1 wrong question and 3 questions with the pi-root fiasco, which left many of us, including me, unable to answer them, versus no such issues in slot 2?
    For me, the IIM-B-convened CAT experience was worse than last year’s under IIM-A.
