August 12th and all ptarmigans and teaching teams run for cover

Today is August 12th and marks the start of the shooting season for grouse, ptarmigan and the common snipe.

It also, entirely uncoincidentally, marks the release day of the 2015 National Student Survey (NSS) results in the UK. With much discussion around the introduction of new metrics and outcome criteria for the proposed Teaching Excellence Framework (TEF), and with HEFCE planning a review of NSS questions in 2016/17 that may include student engagement, it is certainly worth taking a step back to think about the mathematics of it all.

Are metrics losing the plot?

I, along with more notable others, have been concerned about the gamification of these metrics and the emphasis on strategies used to encourage students to participate. In my blog post last year, “NSS is the name of the game”, I looked at some of the satisfaction data, pondered its overall usefulness (“add the shoe sizes of VCs into league tables! Would be just as accurate“) and questioned the ethics of some of the approaches to gathering the data (“we were pressed by tutors to answer certain questions in a particular way“). I have myself heard someone comment to a student that if they did not give a good NSS result, employers would not consider recruiting students from that particular university.

As I concluded in my blog last year after applying some statistical tests, the question remains: is what we see a genuine annual increase in students’ reported satisfaction, or is it the result of a carefully honed process for gathering the data?

What about this year?

The 2015 NSS results were published by HEFCE this morning. There were no changes to the benchmarking this year, and the only minor change was that the cut-off for including small datasets moved from 23 to 10 respondents.


FIGURE 1: Q22 NSS results 2010-2015. CC BY Viv Rolfe

 

The data above arranges English HEIs alphabetically. What we see is that in 2015 (orange) the data aligns very closely with 2014 (turquoise). In 2014 the benchmarking was altered and the brakes were put on the system, and that was the first year in which there was no significant increase in overall satisfaction across the English university sector (ANCOVA tests previously reported).
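For anyone wanting to re-run that kind of significance check on the published spreadsheets, here is a minimal sketch. It is not the ANCOVA reported previously (whose covariates I am not reproducing here); it simply runs a paired t-test on each institution’s 2014 and 2015 Q22 agreement scores, and the file and column names are assumptions rather than anything HEFCE publishes.

```python
import pandas as pd
from scipy import stats

# Hypothetical wide table: one row per English HEI, columns "2014" and
# "2015" holding the % of students agreeing with Q22 (overall satisfaction).
q22 = pd.read_csv("nss_q22_wide.csv")

paired = q22[["2014", "2015"]].dropna()
t_stat, p_val = stats.ttest_rel(paired["2014"], paired["2015"])
print(f"2014 mean = {paired['2014'].mean():.1f}, "
      f"2015 mean = {paired['2015'].mean():.1f}, "
      f"paired t = {t_stat:.2f}, p = {p_val:.3f}")
```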

Looking at Q22 in terms of annual means and standard deviations (below), a yearly increase in overall satisfaction is apparent across the 127 English HEIs. What is interesting is the reduction in variation across the institutions, and one has to ask whether the NSS is becoming less discerning.


FIGURE 2: Q22 Mean NSS results 2005-2015. CC BY Viv Rolfe
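For anyone wanting to rebuild Figure 2 from the published data, a minimal sketch is below. It assumes a hypothetical long-format file with columns for institution, year and the Q22 agreement percentage; the shrinking standard deviation column is the narrowing spread discussed above.

```python
import pandas as pd

# Hypothetical long table: one row per institution per year, with
# q22_agree = % of respondents agreeing with Q22.
long_data = pd.read_csv("nss_q22_long.csv")

# Annual mean and standard deviation across the English HEIs.
summary = long_data.groupby("year")["q22_agree"].agg(["mean", "std", "count"])
print(summary.round(2))
```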

 

Results by mission group?

It is interesting to split the analysis by mission group, separating out the 19 Russell Group and 18 University Alliance institutions from the others. Extrapolating the data, there will be a big party in 2021 when, should the survey remain unchanged, the other universities outstrip the performance of the Russell Group. But that is unlikely.


FIGURE 3: Trended Data for Q22 By Mission Group. CC BY Viv Rolfe
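The extrapolation behind that 2021 crossover can be roughed out as below: a straight line fitted to each group’s annual mean and pushed forward, nothing more sophisticated. The group labels and file name are assumptions for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical table: year, mission_group, mean_q22 (annual group mean).
groups = pd.read_csv("nss_q22_by_group.csv")

# Fit a straight line to each group's annual means and project it to 2021.
projections = {}
for name, g in groups.groupby("mission_group"):
    slope, intercept = np.polyfit(g["year"], g["mean_q22"], 1)
    projections[name] = round(slope * 2021 + intercept, 1)

print(projections)  # naive straight-line projection, should the survey stay unchanged
```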

 

Commentary

Hello – David here. You may remember me from such blogs as “Followers of the Apocalypse”, where I write primarily about UK HE policy-making. When Viv showed me this dataset I was fascinated to see trends in the NSS, and I immediately started to think about the implications for the proposed “Teaching Excellence Framework” (TEF).

Figure 2, above, shows the variability in NSS scores between institutions decreasing with each NSS iteration. There could be a number of drivers for this; I would suggest that it perhaps shows institutions getting better at running the process and at getting the message out to students that a good institutional NSS score is good for the perceived value of their degree from that institution. Manifest nonsense, obviously – but if metrics are good for one thing, it is developing faith-based belief systems!

When the NSS was originally developed, the scores were primarily used at a course (or, at worst, subject-area) level. This allowed prospective students to compare the attitudes of students doing a similar course at different institutions. Anyone who works in an HEI will tell you that variability between departments and subject areas is huge, and indeed most of the pain experienced by academic staff on the “glorious twelfth” will concern this intra-institutional variation.

Johnson Minor’s TEF would (we are led to believe) operate at an institutional level, and Osborne announced that it would affect an institution’s ability to increase student fees in line with inflation (as if inflation were some kind of optional extra rather than a reflection of the reality of rising costs). If NSS results at an institutional level are included in the proposed “basket” of metrics within the TEF, the decreasing inter-institutional variance shown in Figure 2 implies that it will become harder to discriminate between institutions.

Of course, this may be what BIS want (so all institutions can increase fees with the fig leaf of independent oversight justifying it – see also OFFA!), but in that case it seems like a very expensive way to pretend that you are not making HE more expensive for the taxpayer. Then again, I suppose BIS are used to spending lots of money on that kind of thing. Such is politics.

So that’s why I think this analysis is important.

[declaration of interest: I received 1 pint of beer for writing the above]

 

Thank you, David, for your wisdom and insight. The discipline-level variation and its impact on teaching teams also concerns me, as we are often held to account for much of what goes on behind the scenes of successful teaching (timetables, IT, the efficiency of academic administration systems), which is not reflected in the survey.

It just remains to say, and I know all the readers are dying to know, that a ptarmigan is a slightly plump bird with beautiful plumage. I hope that, like most teaching teams today, it manages to dodge any bullets and experiences nothing but a mild ruffling of feathers.

Rock ptarmigan, by Jan Frode Haugseth (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons

 

Process:

  • Download summary data from HEFCE.
  • Data for English HEIs was cleaned – aligning university names with the names recorded in 2014 (e.g. The University of Bath was University of Bath in 2014). The data was then sorted (a rough sketch of these steps is given after this list).
  • Registered data was used – that is, the data represents the institution where the student was registered (as opposed to Taught, where students do the majority of their year 1 study).
  • Data covers all full-time and part-time students.
  • In 2010 the data benchmarking changed and was adjusted for ethnicity. Interestingly, the data is not adjusted for socio-economic background (http://www.bristol.ac.uk/academic-quality/ug/nss/research.html).
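As promised, a rough sketch of the cleaning steps above. The file and column names (“institution”, “registration_basis”) are assumptions for illustration rather than the headings in the HEFCE spreadsheet itself.

```python
import pandas as pd

# Hypothetical export of the HEFCE summary data for the year of interest.
raw = pd.read_excel("nss_2015_summary.xlsx")

# Align 2015 institution names with the names recorded in 2014,
# e.g. "The University of Bath" -> "University of Bath".
renames = {"The University of Bath": "University of Bath"}
raw["institution"] = raw["institution"].replace(renames)

# Keep Registered records (the institution where the student was registered,
# rather than taught), covering full- and part-time students, then sort.
cleaned = (raw[raw["registration_basis"] == "Registered"]
           .sort_values("institution")
           .reset_index(drop=True))
print(cleaned.head())
```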

Other articles that week:

The Guardian (2015). Available: http://www.theguardian.com/higher-education-network/2015/aug/13/the-national-student-survey-should-be-abolished-before-it-does-any-more-harm

Chris Hanretty (2015). When communicating uncertainty goes wrong. Available: https://medium.com/@chrishanretty/when-communicating-uncertainty-goes-wrong-cdc5b7ae226b

Keith Burnett (2015). Available: https://www.timeshighereducation.co.uk/blog/want-raise-quality-teaching-begin-academic-freedom