National student survey – how happy are they?

So this year there was a change with the National Student Survey (NSS). This time it was not a change to the survey itself (last year the questions were changed and new themes introduced) but, rather, to its custodian: the survey now comes under the Office for Students (OfS), which has replaced HEFCE.

The changes were not dramatic but represented a subtle shift towards students’ interests. The data was not shared with Higher Education Institutions (HEIs) before publication, so students got to see their HEI’s performance at the same time as the institutions themselves.

The other change is that the data is now presented in a slightly different, more accessible format. It is a long way off dashboards with visualisations, but the data is there for people to review easily.

So we at #VisualisingHE decided to pick up from where we left off last year (NSS 2017: One data set, many dataviz approaches) and pull out a few interesting nuggets for you.

The starting point was to see how students had responded at a national level, looking at the themes. This also tied into the monthly storytelling with data challenge, which was on dot plots.

NSS Theme score comparison

The interactive viz is here!

The main takeaway is that it really has not changed that much, and where changes have occurred, they have been negative. The only exception is Wales, where students have responded much more positively this year compared to last.

Adam decided to re-viz the headline theme performance overview originally posted on the OfS pages because he wished to make it easier to compare satisfaction across the themes, rather than having themes and years mixed up together.




NSS2018 themes.png

Following on from revizzing the headline NSS theme results, Adam wished to dig a little deeper into student satisfaction with the separately reported question 26 (‘Students’ union’), part of the ‘Student Voice’ theme, highlighted above as lagging well behind the sector theme scores.

Which unions are getting it right?


The interactive viz is here!

The main takeaway is that Alternative Providers (APs) appear to be getting it right more often than Higher Education Providers and Further Education Colleges, taking the median score for each of the provider types as a reference point. However, the populations are small for APs, which could be causing the volatility and wide distribution of scores seen in the figures. Hover over the box plot distributions to see which providers are getting a thumbs up from the students and which are in the dog house!
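The comparison described above boils down to computing a median per provider type and eyeballing the spread around it. A minimal sketch in Python, with made-up scores (the provider types are from the post, but every number below is illustrative, not a real NSS figure):

```python
from statistics import median

# Hypothetical Q26 '% agree' scores by provider type -- illustrative
# numbers only, not real NSS figures.
scores = {
    "Alternative Provider":      [82, 91, 64, 88, 79],
    "Higher Education Provider": [55, 60, 58, 62, 57, 59, 61],
    "Further Education College": [63, 66, 60, 68, 65],
}

# Median score per provider type: the reference point used in the box plots
medians = {ptype: median(vals) for ptype, vals in scores.items()}

# Range (max - min) hints at the volatility a small population can produce
spreads = {ptype: max(vals) - min(vals) for ptype, vals in scores.items()}
```

With a handful of providers per type, a couple of unusual unions are enough to stretch the whole distribution, which is why the AP box plots look so volatile.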

So that’s all from us for now. Thank you for reading and we hope you enjoyed exploring our visualisations!

Dave and Adam

Team #VisualisingHE


The potential TEF metric – sustained employment from LEO

Back in 2016 the government released its first experimental LEO data (Longitudinal Education Outcomes, to give it its full given name) and it has now matured enough to be considered under the TEF (Teaching Excellence Framework, to give it its birth name), with TEF2 including an element of it as a supplementary metric.

Unlike the salary data (which Adam did a superb job vizzing here), this metric is not going to grab the attention quite as easily. The metric is sustained employment three years after graduation. The data is sourced from HMRC tax returns and does have challenges (it misses self-employment, maternity leave, etc.), but it does reveal some interesting patterns by subject.

The first viz is a simple view showing where each HEI (Higher Education Institution, to give it the name its mother would use when telling it off) sits on this metric, grouped by subject.

The interactive viz is here

LEO 3 Year

As you can see, the GB average is around 75% for most subjects, with just Languages and Combined subjects falling below that. Economics, Education, Mathematical Sciences, Medicine and Dentistry, and Nursing all have a GB average above the 75% seen for the other subjects.

At the moment this is just a supplementary metric, which will only be used for context when deciding the TEF outcome (another excellent blog explaining TEF here), but it shows the direction of travel, and HEIs need to get a handle on where they sit on this metric.

To aid that understanding, Adam has created a magnificent, more exploratory viz which allows the data to be split by sex, years after graduation (1, 3 or 5) and region. It can then be viewed by subject, with the ability to highlight your institution.

Take it for a spin here

LEO - Proportion of students in Sustained employment further study or both after graduating

Well, I hope you enjoyed reading and found it informative. Any questions, queries or feedback, let us know.



NSS 2017: One data set, many dataviz approaches.

I often say that one dashboard cannot answer all the questions we may have of our data, and that the way we visualise our data depends strongly on the question(s) we seek to answer. On 24 October 2017, the latest meeting of the Midlands Tableau User Group (#MidlandsTUG) community took place in Leicester and, as part of it, attendees had the opportunity to present their take on visualising a public dataset: the 2017 National Student Survey (NSS) results. A few of the attendees presented their work, showcasing the variety of angles an analyst can take when trying to gain insight into their data.

Later in this post you will have the opportunity to see some of these examples, presented in no particular order, but first, please remember that the analysts whose work is referenced here work in a wide range of sectors. Some may be more familiar with the data than others, but none is necessarily an expert on it, and their visualisations should therefore not be interpreted as in-depth analyses!

The main purpose of the ‘data hackathon’ exercise during the event was to encourage creativity and a range of approaches when analysing a common data set, so please keep that in mind when exploring their visualisations.

About the data:

The 2017 NSS results were published in July and the data is available on the HEFCE website. The survey itself is ‘aimed at mainly final-year undergraduates; it gathers opinions from students about their experience of their courses, asking them to provide honest feedback on what it has been like to study on their course at their institution’.

Approach #1: Elena Hristozova

Elena Hristozova 1.png

Link to interactive viz

Let me introduce you to my approach. I had worked with this data before, which allowed me to bring some context into my analysis straightaway by providing the themes into which the 27 questions are grouped. My main question for the data was: are there any trends in the questions’ results across subjects and teaching institutions? In particular, I wanted to make it easier for the user to see whether any groups of questions tend to score lower than others.

I used a heatmap to show the results for all questions by subject or institution, allowing users to make a choice. Based on no scientific evidence, I chose a 75% agree score as my middle point and used colour to encode the ‘less than or higher than 75%’ results.
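The ‘75% midpoint’ encoding amounts to normalising each score onto a symmetric scale around the midpoint before handing it to a diverging colour palette. A minimal sketch, where the function name and the 50–100 display limits are my own illustrative assumptions, not part of the workbook:

```python
def diverging_value(score, midpoint=75.0, vmin=50.0, vmax=100.0):
    """Map a '% agree' score onto [-1, 1] around the chosen midpoint,
    so a diverging colour scale sits symmetrically about 75%.
    vmin/vmax are assumed display limits, not fixed by the data."""
    if score < midpoint:
        return -(midpoint - score) / (midpoint - vmin)
    return (score - midpoint) / (vmax - midpoint)
```

Any score below the midpoint lands in [-1, 0) and any score above it in (0, 1], so one colour ramp can serve both halves of the scale even though they span different widths.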

Approach #2: Ali Motion

Ali Motion.png

Link to interactive viz

Ali’s visualisation is a great example of a benchmarking type of dashboard, where one can easily see how a selected institution has performed for each of the subjects taught there, and how the results compare to those of other providers for a selected question. Ali has used a jitter plot where the results for each of the providers are plotted on a single axis, but the marks in the chart are given ‘some space to breathe’ to show groupings a bit better.

Colour has been used to highlight the selected institution, and it changes between green and orange to indicate whether the said institution’s score was above or below the benchmark (the sector average). Furthermore, the user is supported in interpreting the results by a set of clear, easy-to-read tooltips that appear on hover.

Approach #3: David Clutterbuck

David Clutterbuck.png

Link to interactive viz

David’s approach was very straightforward: show the scores for the Top 10 and Bottom 10 providers, show movement from the previous year, and break down satisfaction per question. He used both colour and shape very effectively to surface the insights, and he even went a step further, obtaining the previous year’s data to demonstrate the change over time in the overall score.

David has also added a couple of very small touches that make his visualisation easy to explore: clicking on one of the providers in the table makes that provider’s logo appear underneath and highlights the provider’s results in the detailed question breakdown.

Approach #4: James Linnett

James Linnett.png

Link to interactive viz

‘The way I approached it was for the end user (who may or may not have any analytical experience) to easily understand what the dashboard is portraying. By that I mean easily being able to compare their institution and/or subject to the national average.

The traffic-light colours indicate how the specific institution, question group or individual question compares against the said average.’

Approach #5: Rob Radburn

Rob Radburn.png

Link to interactive viz

Rob’s approach is very interesting in that it incorporates a way to visualise both the mean score of students’ answers, a value from 1 to 5 where 1 = ‘Strongly Disagree’ and 5 = ‘Strongly Agree’, and the level of agreement (or consensus), a measure he has calculated that takes a value between 0 and 1, where 0 = disagreement and 1 = agreement.

Rob’s visualisation is very clear and he has provided the readers with a very simple explanation of how to read the chart: ‘The dot shows the average score for each question. The length of the line from the dot measures how in agreement students were in answering the question. This uses Tastle and Wierman’s measure of dispersal. The shorter the line, the more agreement in the answers for the question; the longer the line, the less agreement. The questions are then ordered from more to less agreement.’
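For the curious, Tastle and Wierman’s consensus measure for a Likert question is Cns(X) = 1 + Σᵢ pᵢ log₂(1 − |xᵢ − μ|/d_X), where pᵢ is the proportion of respondents choosing category xᵢ, μ is the mean response and d_X is the width of the scale. A minimal sketch (the function name and example counts are my own, not Rob’s workbook):

```python
import math

def consensus(counts):
    """Tastle & Wierman consensus for Likert responses.
    counts[i] is the number of respondents choosing category i+1.
    Returns 1 for complete agreement, 0 for complete polarisation."""
    n = sum(counts)
    xs = range(1, len(counts) + 1)
    ps = [c / n for c in counts]
    mu = sum(p * x for p, x in zip(ps, xs))
    dx = max(xs) - min(xs)  # width of the scale: 4 for a 1-5 Likert
    return 1 + sum(p * math.log2(1 - abs(x - mu) / dx)
                   for p, x in zip(ps, xs) if p > 0)
```

Everyone picking the same category gives 1; an even split between ‘Strongly Disagree’ and ‘Strongly Agree’ gives 0, which is what the length of Rob’s lines encodes.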

Approach #6: Neil Richards

Neil Richards.png

Link to interactive viz

‘For my visualisation I was aware that there were some great examples in The Big Book of Dashboards, chapter 3, so essentially I just wanted to recreate a simple version of the jitter plot at the start of the chapter, with one for each question. I didn’t actually look at the book until just now, writing these bits, so I didn’t realise how well I’d remembered the look of the chart!

I added the overall average line after feedback on Twitter, and fixed the chosen provider to always show in the middle (with all other x positions placed at random).’

Approach #7: David Hoskins

David Hoskins.png

Link to interactive viz

‘I decided to take a more personal approach to the dataset and compare responses for three institutions where my stepdaughter Sophie is thinking of studying Social Work next year: Bournemouth, Leeds Trinity and Nottingham Trent.

I knew I had the perfect photo (taken at my partner’s graduation), and soon decided on a grid structure for my dashboard, with a column for each institution showing KPIs for overall comparison and the questions grouped into two categories to keep the layout uncluttered.

Narrowing the focus also allowed me to display the proportion of responses for each question as diverging stacked bars (using instructions at ) and show the detail behind the aggregated metrics.’

Approach #8: Jennie Holland

Jennie Holland.png

Link to interactive viz

‘For my analysis I used the summary data at country level. The first sheet in the workbook shows survey results for the question groups at country level. As the data wasn’t too big, I wanted to show it all on one sheet, with highlights on country and UK comparison to show the variation at a glance.

The second page looks at the questions asked within each question group. To help with consistency between the two sheets, I grouped the questions into the same categories used in the first sheet, and allowed the user to select the group of questions they are interested in. I was quite keen on using one scale for all question groupings that were selected in the filter, to enable the viewer to see the distribution of scores across the question groups.’

Approach #9: Neil Davidson

Neil Davidson.png

Link to interactive viz

Neil has not had the chance to provide a quick summary of his approach, but it is easy to see straightaway that he is familiar with the data and the HE sector. In his visualisation he has demonstrated the correlation between an institution’s overall score for each of the questions and that institution’s rank in the Complete University Guide (CUG) league table.

Not all providers with NSS results appear in the league table, so only higher education establishments are visualised. Though Neil’s work is still in progress (as he has admitted himself), it demonstrates that the strongest correlation with CUG rank appears to be for Q15: ‘The course is well organised and running smoothly’ (the question with the line of best fit closest to 45 degrees). Of course, a league table is not based merely on the results of the NSS, but the bottom line is that visualising a set of small multiples to show the correlation between two variables can work well.

Approach #10: Elena Hristozova & Dave Kirk

Elena Hristozova 2.png

Link to interactive viz

The last approach to demonstrate is somewhat of a joint effort between myself and Dave. Dave had the idea to create a visualisation showing how many of the questions scored below the score for Q27, Overall Satisfaction. Due to time constraints he was unable to complete his visualisation, but after a couple of exchanged messages I realised that this was an ideal question to answer using the sunburst chart I was learning how to make at the time.

The end result is the visualisation below, which compares the scores for all 26 questions in the survey to the overall satisfaction score for each of the country regions in the UK. The analysis is presented through colour encoding: a negative difference is shown in red and a positive difference in blue, with the intensity of the colour indicating how big or small the difference is.
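That red/blue encoding can be sketched as a simple mapping from a score difference to an RGB triple, with intensity growing with the magnitude. The function, its max_abs cap and the linear ramp are my own illustrative assumptions, not the exact palette used in the sunburst:

```python
def diverging_colour(diff, max_abs=10.0):
    """Map a score difference (question score minus the Q27 overall-
    satisfaction score, in percentage points) to an RGB triple:
    red below zero, blue above, white at zero, with intensity
    growing with the size of the difference (capped at max_abs)."""
    t = min(abs(diff) / max_abs, 1.0)   # 0 = no difference, 1 = capped max
    if diff < 0:
        return (1.0, 1.0 - t, 1.0 - t)  # shades of red
    return (1.0 - t, 1.0 - t, 1.0)      # shades of blue
```

Questions matching overall satisfaction fade to white, so the eye is drawn only to segments where the gap is large.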

To sum up…

I hope you have enjoyed exploring the different ways in which the Tableau community in the Midlands approached the 2017 NSS results. Some analysts drew inspiration from books and the literature, others took a very narrow focus on the data, and the rest simply practised visualising survey data, but they all had a different set of questions driving their analyses and visualisations!

Thank you,

Elena | #VisualisingHE

P.S. If there were any more submissions to the #MidlandsTUG data challenge that have been missed from this post, do get in touch!

Teaching Excellence Framework | TEF

What is this framework of excellence you speak of?


#VisualisingHE investigates…..

The Teaching Excellence Framework (TEF) aims to recognise and reward excellence in teaching and learning, and help inform prospective students’ choices for higher education.

If you want to read up a little on the background to TEF check the HEFCE pages. If not, dive in and enjoy the ride…..

TEF in a nutshell:

The Inputs:

  • Feeding into TEF is a collection of standard metrics deemed core to identifying ‘teaching excellence’ in higher education.
  • The metrics include student satisfaction with teaching on the course, assessment and feedback, and the provision of academic support during studies, as well as dropout rates and employment destinations.

These key metrics are sourced from an annual survey of student satisfaction (the National Student Survey – NSS), dropout rates from standard provider HESA returns, and employment success rates from the annual Destinations of Leavers from Higher Education (DLHE) survey. These scores are benchmarked by HEFCE at institutional level for each provider to aspire to.

  • In addition to these data-driven performance measures, a supporting statement is written by each provider highlighting what great things they have done to date and what they are working on to improve key metrics.

The Output:


In what I can only imagine to be an ‘X Factor’-style showdown…

  • A HEFCE panel of experts was assembled to assess this provider-level data and the textual statements, and to come up with a rating of Gold, Silver or Bronze for each provider.

What happened next?

There was a medal ceremony (well, the data was released to the public on 22 June 2017)


  • Some people challenged this initial synopsis
  • Many people wrote about it
  • Many hours have been spent analysing the data and understanding what it could mean for the sector going forward.
  • NEXT… Subject-level TEF is looming #TEFstillhotontheHEagenda

In summary

TEF has caused one heck of a hullabaloo in the HE sector, featuring heavily in strategic decision-making and reaching to the core of HE operations.

Since the data was released in June 2017, many an HE analyst and journalist has been busily crunching the data, splicing, dicing and waxing lyrical. As an example, WONKHE alone has written 140 articles tagged as TEF to date; that is a lot devoted to one topic…

VisualisingHE investigates

In this post I make no attempt to summarise what has been written over the last two months, but do take a look if you have the time, as there are some very interesting articles to get your teeth into. What we would like to do is viz a few facts sourced from the data.

Therefore #VisualisingHE has put together a few headline vizzes which will hopefully introduce TEF to anyone who reads our blog and encourage a dabble into the provider-level performance underpinning the metrics.

Hope you enjoy our vizzes and insights in Tableau:

Higher Education Provider Medal distribution

TEF_The rings


  • Wowza – 33% of HE providers got GOLD! That’s quite a statement for HE in the UK.

England – Higher Education Provider distribution by region

TEF by Region

Interact with the viz: TEF by region


The TOP3 regions rich in GOLD:

  • #1 East Midlands – 89%
  • #2 East of England – 44%
  • #3 West Midlands – 42%
  • The East Midlands is far and away the hot spot for GOLD, laying claim to more than double the proportion achieving gold in other regions.
  • The North East and Yorkshire and the Humber are heavier in Silver, and the London universities are slightly weightier than other regions in the proportion gaining Bronze awards.

UK | HE Provider – distribution by region

Who is paved in GOLD|SILVER|BRONZE?

TEF Awards by Region_HEonly

Interact with the viz: TEF by region (provider) to view which providers gained gold, silver and bronze in each region.

TEF Awards by mission group

TEF colours_higher education institutions

Interact with the viz: TEF by mission group


  • Across the Higher Education sector, 32.8% of HE providers achieved GOLD status, 49.3% SILVER and 17.9% BRONZE.
  • For the former 1994 Group, 36.4% of HE providers achieved GOLD, 45.5% SILVER and 18.2% BRONZE.
  • And in the revered Russell Group, 38.1% bagged GOLD, 47.6% came in SILVER and the remaining 14.3% of providers picked up BRONZE.

TEF awards – The Metrics and splits deep dive

Take a deep breath and have a look at the underpinning demographic splits also assessed in the TEF. These splits are presented by category to help a provider unpick areas to address and improve.



Interact with this viz: Teaching Excellence Framework – Awards


Knowledge is with the beholder…

Interventions in the hands of the provider…


For the good of the student

Hope this has been an enjoyable and informative read.

Adam #VisualisingHE