CAT 2016: A good but flawed test

In my previous post, I mentioned that the shoddy infrastructure and unprofessional management of the test were extremely disappointing. The test itself, though, was mostly refreshing; it felt like the CAT, one might say (which has not always been the case in the online papers). With two difficult sections, even the well-prepared knew they had been in a fight this time.

The overall structure was close enough to the mock they had uploaded. The first section stayed easy, as has been the case for the past couple of years. Again, there were 24 RC questions and 10 para-based VA questions, in my opinion a structure which leaves a lot to be desired. Again, all 10 VA questions were TITA, but only in parajumbles did that actually matter, as the “odd sentence out” and “summary” types were really just MCQs where you had to type instead of clicking (with the added bonus of no negative marks!). Again, as has been the case for the past two years, the passages and paragraphs were all very readable, covering a wide range of interesting material. In other words, there were no surprises for the well-prepared, and so a good attempt count in this section should have been well over 20 (and of course any unsolved TITAs should have been attempted on principle, as there was nothing to lose). I felt VARC 2016 fell between 2014 and 2015 in level, just slightly tougher than last year. Cut-offs for this should be somewhere in the 50s, I would feel, with 75 being an excellent score.

The DILR section had 8 sets of 4 questions each, with more DI than LR. This was a deceptive section; while it seemed easier than last year’s, I felt it was just as tough. Sure, the sets were easier to understand, but they were also far more time-consuming, with the later questions of practically every set containing additional information and requiring rework. Also, a lot of people tend to prefer LR to DI, so they were frustrated by the shortage of LR. If you aimed at attempting 3 or 4 sets, it was easier than last year; if you planned to attempt all 8, it was tougher. Cut-offs might be low again this year; even 14-15 good attempts and a raw score in the low 30s might prove enough, provided accuracy does not let you down.

The QA section caught a lot of people off guard, given the very easy QA in 2014 and 2015. While not as nasty as the toughest paper-based tests (such as 2007), the paper was not all sitters either. As in 2012 and 2013 (which, however, were 20-question sections), there were mostly medium-level questions, with a leavening of sitters and a sprinkling of really nasty questions in between. One could make a hearty meal of this section if one managed to avoid breaking one’s teeth on those 5-6 speed-breakers. I found it a very well-balanced section, one which forced you to think. No questions required abstruse knowledge, but plenty required basics and care. As has been the recent trend, Arithmetic was the single biggest chunk in the paper. Numbers was more prominent than last year, at the expense of Algebra, which almost vanished. Geometry was marked by some unusually tough questions.

Unfortunately, the square root/pi fiasco, which affected 3-4 questions in either section, marred the otherwise high standard of the paper. Additionally, there was reportedly one wrong question in the morning slot (unacceptable in a test like this, but no one in charge seems to care!). Had it not been for these glitches, it could have been one of the better QA papers in several years. As it is, I suspect the cut-offs will be noticeably lower than last year’s; a score in the early 30s might prove acceptable from a cut-off point of view, though the Quant wizards might still get 75 or more.

Overall, given two difficult sections out of three, I expect raw scores to drop from last year (and the magnitude of scaling consequently to be larger). I suspect a 170 would prove to be a very good score, and even a 125-135 might prove sufficient for a few good calls*. The top raw scores should still be close to 240, I would guess (and scale to nearly 280), but I’m going to go out on a limb here and say that, in my personal opinion, a raw score of less than 140 might be enough for a 99 percentile (though it will scale up to around 170). Don’t get your hopes up too much, though; nearly every expert opinion I have seen pegs it much higher than that.

regards
J

*(Statutory warning copied from last year’s blog: (a) I am talking of raw scores, not scaled, and (b) these are just guesses, and I have no particular statistical evidence of how accurate they might be. However, if I don’t put in some estimate here, the comment section is going to be flooded with variants of “I score xyz, how much percentile will I get?” I might as well say straightaway that I will not answer any such queries. Your guess is as good as mine).

CAT 2016 – Game of Stools

I wasn’t planning to write an analysis this year, but several people have asked me for one, and I figured it would be simpler to type it out once rather than again and again. As in the past, I’ll divide this into two posts: this one detailing the overall test-taking experience (which could be of use to next year’s candidates, I suppose) and another short one with my take on the test structure and level. Some of you might wish to jump directly to the other one 🙂

Pre-test procedure:

I wrote CAT ’16 in the afternoon slot, at ARMIET Shahapur (ARMIET being Alamuri Ratnamala, um, something something…this college has a name a South Indian could envy). Shahapur being a little beyond Asangaon – to which the local train frequency is abysmal – I had to leave by 10:30 and reach by 12:15. The whole train was full of CAT-takers, as the next train was scheduled an hour and a half later. We were milling around outside till 1:15 (in extremely hot and dry weather), resulting in headaches and grumpy faces galore. In the meantime, some people got calls and messages from those in the first slot and got some inkling of the now-infamous pi-root confusion. There were also rumours that the DI section was easy, which seemed to make people happy.

Officially, we were allowed to carry only the admit card (to repeat what I said last year on the topic of admit cards: please make sure the print is decent; black and white is fine, but the photo should resemble you, the signature should be reasonably clear, and you need to stick one recent colour photo on the card) and an ID proof. The security check was surprisingly lax, with no proper frisking, and people with bags wandered all over the place trying to figure out where to go for the photo/thumb-impression procedure (at least two people in my lab had carried their own pens and one his wallet). Also, the labs were embarrassingly ordinary, with no AC and with three-legged backless stools instead of chairs. The first computer I was given did not start up. After half an hour of increasingly irritated hints and reminders to the invigilators, I was finally allotted another (which turned out to have a mouse issue – more on that later). As in the past couple of years, everyone was handed a sheet of A4 paper and a pen. One could ask for more paper if one so desired, but I stuck to my policy of environment-friendliness and managed with just the one sheet.

During the Test:

I will talk about my own experience further down, but first a few general points worth noting:

  1. Scoring: the test instructions stated +3 for a correct MCQ, -1 for a wrong one, and no penalty for unattempted questions; for TITA, +3 for a correct answer with no marks deducted for a wrong one. However, the individual questions mentioned +1 / -0.33 and +1 / 0 respectively. It should make no difference: the two schemes differ only by a constant factor of three, so either way the marks will presumably be scaled to 300 (see the sketch after this list). Still, this apparently rattled quite a few people.
  2. The sections were not further subdivided – last year, VA had two sub-tabs for RC and VA, and one could move freely between them during the available hour; similarly, the DILR section had separate tabs for DI and for LR. This year, each section came as a single lot.
  3. The question palette was adjustable: it could be shrunk to the side with a click and brought out again with another. In theory, this was a nice idea, but in practice the implementation fell a bit short: every time one clicked an answer, the entire palette took a second to refresh. Consequently, the interface was not as smooth as in the past.
  4. The calculator was a fairly basic one, unlike the (useless) scientific one of last year.
  5. As in the mock, there were more DI and fewer LR questions, much to the dismay of the majority who prefer LR. Also, once more there were 24 RC questions, putting reading skills at a premium in the Verbal section.
  6. The number of TITA questions was reduced from last year.

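(A quick aside on point 1, for those who were rattled: here is a minimal sketch, in Python, of why the two displayed marking schemes come to the same thing. The attempt counts are made-up figures, and I am assuming the displayed -0.33 was just a rounded -1/3.)

    # Hypothetical attempt counts; assumes the displayed -0.33 is a rounded -1/3.
    def raw_score(correct, wrong, mark_correct, mark_wrong):
        """Raw MCQ score under a given marking scheme."""
        return correct * mark_correct + wrong * mark_wrong

    correct, wrong = 40, 12  # made-up figures, purely for illustration

    score_a = raw_score(correct, wrong, 3, -1)      # instructions: +3 / -1
    score_b = raw_score(correct, wrong, 1, -1 / 3)  # questions:    +1 / -1/3

    print(score_a)                 # 108
    print(score_b)                 # 36.0
    print(score_a == 3 * score_b)  # True: one scheme is exactly three times
                                   # the other, so every candidate's relative
                                   # standing (and any scaling to 300) is the
                                   # same under both

In other words, whether the software showed +3/-1 or +1/-0.33, no one gained or lost anything by it.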
For me, personally, the act of actually taking the test turned out to be easily the most irritating testing experience I have had in the past few years. It started off smoothly enough, as the VARC section seemed easy. However, a few questions into the section I realised that something was seriously wrong, as many questions I had answered were showing as unmarked. After a few minutes of frantic experimentation I found the problem – the mouse was double-clicking most of the time. So when I marked an option, it got marked and unmarked again in the same click. This made for a frustrating experience for the rest of the test, wherein I would click, check to see if it registered, try again…. In some cases it took as many as 5-6 attempts to get a question answered. And of course, it made the calculator impossible to use, because typing a number like 1569 gave a result like 155669. Also, at the end, the mouse seems to have unmarked one question in VA, as the final tally showed me 33 and not 34 attempted.

On the whole, other than raising my blood pressure, this did not affect me much in the QA and VA sections, since I usually have time left over in these (though I could not check my answers at the end as I normally do). In DILR, though, I rarely have spare time (less than 2 minutes, last year) and additionally had to calculate everything manually. As a result I ended up leaving 5 questions there: 1 full set and 1 extra question. Still, the set I left was arguably the nastiest of the lot (most people I have heard from seem to have left it even after trying it), so I suspect no great loss there. Overall I ended with 94 attempts, the first time since the CAT went online that I was unable to attempt everything. The questions I left were in any case the ones which seemed the nastiest, so it might not have much adverse effect on my score. However, that is rather beside the point.

Had I been a serious aspirant, such an experience would certainly have hurt my performance severely. The frustration alone would have been traumatic. Add to it the lax security, the terrible seats, and the announcements on the PA during the test – on the whole, it left a lot to be desired, a disappointment given the much better experience of the past two years with TCS. In fact, it was my worst experience since the 2009 debacle. And my disenchantment was not over yet….

After the test:

Again, a long journey back (I eventually got home at 8:30 pm). Eventful, too, as the saga of the Facebook posts made during the test was all over social media by then. I wish it could be brushed aside as a one-off aberration, but having seen the casual nature of security at my centre and heard what happened in other places….

I think the IIMs need to take a long, hard look at these problems before next year, even if they choose not to admit publicly that anything went wrong. Anyone can say “concluded successfully” and “detected and dealt with” and brush it off, but it needs to be true as well. People’s trust in the sanctity of the test can be pushed only so far, and it is in the IIMs’ own long-term interest to maintain a certain standard. It is easier to maintain a reputation than to rebuild it.

I will shortly put up another post with my take on the paper. For a couple of other points of view, check out T’s post at CAT 2016 and V’s post at CAT 2016.

regards
J