21 Posts to date | Which have been Adam's favourites?

Team #VisualisingHE have had a lot of fun and learned a lot over the last 21 months, so I thought I would kick off my reflective post by covering some of my favourite bits.

We kicked off the project in March 2017 and to date we have created some 21 posts. We have challenged ourselves with numerous new chart types, including Sankeys (Apr 18), radial charts (Nov 17), and even a chord diagram (Apr 18), to name just a few. We have explored many open HE datasets, and I have learned a ton about remote communication and collaboration, peer editing and critique, not to mention furthering my Tableau skills and improving my fairly poor writing skills.

So what have been my favourite bits?

  1. Overall favourite blog post
  2. Favourite Viz – Elena
  3. Favourite Viz – Dave
  4. Favourite Viz – Adam
  5. Favourite solo blog post

Overall favourite blog post

Having written this paragraph a few times in draft, it became obvious to me that there are a few blog posts I have really enjoyed (for various reasons).

However, I have settled on one standout post, one I wasn't personally involved in but admire a lot: the NSS 2017 – one dataset, many dataviz approaches post, where Dave and Elena opened up the concept of #VisualisingHE to the #MidlandsTUG. The result was a fantastic post showcasing the TUG's many and varied vizzes.

[Image: collage of #MidlandsTUG NSS vizzes]


My favourite viz – Elena

Elena's contribution to the project has been invaluable. Her style is unmistakable and has earned her quite a few Tableau Public 'Viz of the Day' accolades! My favourite is without doubt her solo post 'Destination Europe', a topic that really resonated with her; it showed in her viz and came across in her superb blog post.

[Image: 'Destination Europe' inflow and outflow viz]

Interactive viz


My Favourite Viz – Dave

Dave consistently knocks out meaningful and insightful dashboards with care and thought. I love his style, ever clean and simple, which is, as we know, often more difficult to achieve than the most complicated and intricate of vizzes.

I think my favourite viz of his is found in our September 18 post 'A looksie at UCAS'. It's clean, simple and effective; it uses a minimal colour palette but still packs a punch.

[Image: UCAS subjects 2009 to 2018 viz]

Interactive viz


My Favourite Viz – created by me

My favourite of my own vizzes is probably one also featured in our latest post (Sept 18). I have tried to develop my concept of #Coffeetableviz over the last couple of years, and this 'looksie at UCAS' viz gets close to the effect I have been trying to create. It isn't totally finished and has a few elements I should tweak, but I think it packs a punch: it gets the BANs over to the viewer, has a playful side, provides a takeaway, and also encourages the user to take a deeper look at the interactive viz (which is exactly what I wanted to achieve).

[Image: A looksie @ UCAS placed applicants viz]

Interactive viz


Given this post is about 'my favourite bits', I feel justified in dwelling on another post, one I enjoyed vizzing AND writing (I don't always enjoy the writing bit as much as the vizzing, if I'm honest!).

My favourite solo post and viz was my post on LEO – #Realtableau to funviz.

Why? Well, I took a viz I had crunched for my Exec team and Employability directorate and had a little fun with it over a few evenings at home. I really enjoyed documenting my journey of exploration, my battle to create the radial chart, and the iterative path the viz took whilst working remotely with Dave and Elena. They have both always been on hand to comment on my viz outputs, suggest improvements, and laugh with me whilst I bash out expletives during the struggle to get a viz to behave!

The main viz for the post finished up looking like this:

[Image: LEO infographic]

Interactive viz

Thanks Dave and Elena, I have really learned a lot over the last 21 months.

Can’t wait for the next tranche of open HE data sets to emerge.

Thanks for reading.

Adam


National Student Survey – how happy are they?

So this year there was a change to the National Student Survey (NSS). This time it was not a change to the survey itself (last year the questions were changed and new themes introduced) but rather to its custody: it now sits with the Office for Students (OfS), which has replaced HEFCE.

The changes were not dramatic but represented a subtle shift towards students' interests. The data was not shared with Higher Education Institutions (HEIs) before publication, so students got to see each HEI's performance at the same time as the institutions themselves.

The other change is that the data has been presented in a slightly different, more accessible format. It is a long way off dashboards with visualisations, but the data is there for people to review easily.

So we at #VizHE decided to pick up from where we left off last year (NSS 2017: One data set, many dataviz approaches.) and pull out a few interesting nuggets for you.

The starting point was to see how students had responded at a national level, looking at the themes. This also tied into the monthly storytelling with data challenge, which was on dot plots.

[Image: NSS theme score comparison dot plot]

The interactive viz is here!

The main takeaway is that it really has not changed that much, and where changes have occurred, they have been negative. The only exception is Wales, where students have responded much more positively this year compared to last.
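The dot plot itself was built in Tableau, but the idea is simple enough to sketch in a few lines of Python with matplotlib: one row per theme, a dot for each year, and a connecting line so the (mostly negative) year-on-year change is visible at a glance. The theme names below are the real NSS themes; the scores are made-up illustrative numbers, not the published results:

```python
import matplotlib.pyplot as plt

# NSS themes (real) with illustrative, made-up % agree scores for two years.
themes = ["The teaching on my course", "Learning opportunities",
          "Assessment and feedback", "Academic support",
          "Organisation and management", "Learning resources",
          "Learning community", "Student voice"]
y2017 = [85, 84, 73, 80, 75, 85, 77, 73]
y2018 = [84, 83, 73, 79, 74, 85, 76, 72]

fig, ax = plt.subplots(figsize=(7, 4))
# A thin grey line per theme makes the year-on-year shift easy to read.
for theme, a, b in zip(themes, y2017, y2018):
    ax.plot([a, b], [theme, theme], color="lightgrey", zorder=1)
ax.scatter(y2017, themes, color="silver", label="2017", zorder=2)
ax.scatter(y2018, themes, color="steelblue", label="2018", zorder=2)
ax.invert_yaxis()                      # first theme at the top
ax.set_xlabel("% agree")
ax.legend()
plt.tight_layout()
plt.show()
```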

Adam decided to reviz the headline theme performance overview originally posted on the OfS pages because he wished to make it easier to compare satisfaction across the themes, rather than have themes and years mixed up together.

Original:

[Image: original OfS theme overview chart]

Makeover:

[Image: NSS 2018 themes makeover]

Following on from revizzing the headline NSS theme results, Adam wished to dig a little deeper into student satisfaction with the separately reported question 26 – 'Students' union', part of the 'Student Voice' theme – which is highlighted above as lagging well behind the sector theme scores.

Which unions are getting it right?

[Image: 'Which unions are getting it right?' box plot viz]

The interactive viz is here!

The main takeaway is that Alternative Providers (APs) appear to be getting it right more than Higher Education Providers and Further Education Colleges, taking the median score for each of the provider types as a reference point. However, the populations are small for APs, and this could be causing the volatility and wide distribution of scores seen in the figures. Hover over the box plot distributions to see which providers are getting a thumbs up from the students and which are in the dog house!
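For readers who want to poke at the same comparison outside Tableau, here is a minimal Python/matplotlib sketch: box plots of Q26 scores per provider type against an overall median reference line. The numbers are randomly generated stand-ins, with a deliberately small, spread-out AP group to mimic the volatility described above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Randomly generated stand-in Q26 (% agree) scores per provider type.
rng = np.random.default_rng(4)
groups = {
    "Alternative Providers": rng.normal(75, 12, 30),   # small, volatile group
    "HE Providers": rng.normal(68, 6, 150),
    "FE Colleges": rng.normal(65, 8, 90),
}

fig, ax = plt.subplots(figsize=(6, 4))
ax.boxplot(list(groups.values()), labels=list(groups.keys()))
# Dashed reference line at the overall median score.
ax.axhline(np.median(np.concatenate(list(groups.values()))),
           linestyle="--", color="grey", label="Overall median")
ax.set_ylabel("Q26 % agree")
ax.legend()
plt.show()
```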

So that's all from us for now. Thank you for reading and we hope you enjoyed exploring our visualisations!

Dave and Adam

Team #VisualisingHE

NSS 2017: One data set, many dataviz approaches.

I often say that one dashboard cannot answer all the questions we may have of our data, and that the way we visualise our data depends strongly on the question(s) we seek to answer. On 24th October 2017, the latest meeting of the Midlands Tableau User Group (#MidlandsTUG) community took place in Leicester and, as part of it, attendees had the opportunity to present their take on visualising a public dataset – the 2017 National Student Survey (NSS) results. A few of the attendees presented their work, showcasing the variety of angles an analyst can take when trying to gain insight into their data.

Further in this post you will have the opportunity to see some of these examples, presented in no particular order. But first, please remember that the analysts whose work is referenced here work in a wide range of sectors. Some may be more familiar with the data than others, but they are not necessarily experts on it, and therefore their visualisations should not be interpreted as in-depth analyses!

The main purpose of the ‘data hackathon’ exercise during the event was to encourage creativity and a range of approaches when analysing a common data set, so please keep that in mind when exploring their visualisations.

About the data:

The 2017 NSS results were published in July and the data is available on the HEFCE website. The survey itself is 'aimed at mainly final-year undergraduates; it gathers opinions from students about their experience of their courses, asking them to provide honest feedback on what it has been like to study on their course at their institution' (http://www.thestudentsurvey.com/about.php).

Approach #1: Elena Hristozova

[Image: Elena Hristozova's heatmap viz]

Link to interactive viz: https://public.tableau.com/profile/elena.hristozova#!/

Let me introduce you to my approach. I had worked with this data before, which allowed me to bring some context into my analysis straight away by providing the themes into which the 27 questions are grouped. My main question for the data was: are there any trends in the questions' results across subjects and teaching institutions? In particular, I wanted to make it easier for the user to see whether any groups of questions tend to score lower than others.

I used a heatmap to show the results for all questions by subject or by institution, allowing users to choose between the two. Based on no scientific evidence, I chose a 75% agreement score as my midpoint and used colour to encode results as below or above 75%.
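The dashboard itself was built in Tableau, but the core encoding – a diverging colour scale centred on an arbitrary 75% midpoint – translates directly to other tools. Here is a minimal Python/matplotlib sketch of the same idea, using randomly generated scores and placeholder question and subject names:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

# Randomly generated % agree scores: rows = NSS questions, cols = subjects.
rng = np.random.default_rng(0)
questions = [f"Q{i}" for i in range(1, 28)]
subjects = ["Biology", "History", "Law", "Nursing", "Physics"]
scores = rng.uniform(55, 95, size=(len(questions), len(subjects)))

# Diverging colour scale centred on the (arbitrary) 75% midpoint, so cells
# below and above 75% agreement get visibly different hues.
norm = TwoSlopeNorm(vmin=55, vcenter=75, vmax=95)

fig, ax = plt.subplots(figsize=(5, 9))
im = ax.imshow(scores, cmap="RdBu", norm=norm, aspect="auto")
ax.set_xticks(range(len(subjects)))
ax.set_xticklabels(subjects, rotation=45, ha="right")
ax.set_yticks(range(len(questions)))
ax.set_yticklabels(questions)
fig.colorbar(im, ax=ax, label="% agree")
plt.tight_layout()
plt.show()
```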

Approach #2: Ali Motion

[Image: Ali Motion's jitter plot viz]

Link to interactive viz: https://public.tableau.com/profile/ali.motion#!/

Ali's visualisation is a great example of a benchmarking dashboard, where one can easily see how a selected institution has performed for each of the subjects taught there, and how its results compare to those of other providers for a selected question. Ali used a jitter plot: the results for all providers are plotted on a single axis, but the marks are given 'some space to breathe', which shows groupings a bit better.

Colour has been used to highlight the selected institution, switching between green and orange to indicate whether the institution's score was above or below the benchmark (the sector average). Furthermore, the user is supported in interpreting the results through a set of clear, easy-to-understand tooltips that appear on hover.
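Jittering is worth a quick illustration: each provider's score sits on one value axis, with a small random offset on the other axis so overlapping marks separate. The sketch below (Python/matplotlib, randomly generated scores) also pins the highlighted provider at the centre and colours it by whether it beats the sector average, echoing Ali's green/orange encoding:

```python
import numpy as np
import matplotlib.pyplot as plt

# Randomly generated % agree scores for 100 providers on one question.
rng = np.random.default_rng(1)
scores = rng.normal(80, 6, size=100).clip(0, 100)
selected = 40                       # index of the highlighted provider

benchmark = scores.mean()           # sector average as the benchmark
x = rng.uniform(-0.3, 0.3, size=scores.size)   # random jitter offsets

fig, ax = plt.subplots(figsize=(3, 6))
ax.scatter(x, scores, color="lightgrey", s=25)
# Green if the selected provider beats the benchmark, orange otherwise.
colour = "green" if scores[selected] >= benchmark else "orange"
ax.scatter(0, scores[selected], color=colour, s=90, zorder=3)
ax.axhline(benchmark, linestyle="--", color="grey", label="Sector average")
ax.set_xticks([])
ax.set_ylabel("% agree")
ax.legend()
plt.show()
```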

Approach #3: David Clutterbuck

[Image: David Clutterbuck's viz]

Link to interactive viz: https://public.tableau.com/profile/david.clutterbuck#!/

David's approach was very straightforward: show the scores for the Top 10 and Bottom 10 providers, show movement from the previous year, and break down satisfaction per question. He used both colour and shape very effectively to show the insights, and he even went a step further, obtaining the previous year's data to demonstrate the change in the overall score over time.

David has also added a couple of small touches that make his visualisation easy to explore: clicking on one of the providers in the table makes the provider's logo appear underneath and highlights that provider's results in the detailed question breakdown.

Approach #4: James Linnett

[Image: James Linnett's viz]

Link to interactive viz: https://public.tableau.com/profile/james.linnett#!/

'The way I approached it was for the end user (who may or may not have any analytical experience) to easily understand what the dashboard is portraying. By that I mean easily being able to compare their institution and/or subject to the national average.

The traffic light colours indicate how the specific institution, question group or individual question compares against the said average.'

Approach #5: Rob Radburn

[Image: Rob Radburn's viz]

Link to interactive viz: https://public.tableau.com/profile/robradburn#!/

Rob's approach is very interesting in that it incorporates a way to visualise both the mean score of students' answers, a value from 1 to 5 where 1 = 'Strongly Disagree' and 5 = 'Strongly Agree', and the level of agreement (or consensus), a measure he calculated that takes a value between 0 and 1, where 0 = total disagreement and 1 = total agreement.

Rob's visualisation is very clear, and he has provided readers with a simple explanation of how to read the chart: 'The dot shows the average score for each question. The length of the line from the dot measures how in agreement students were in answering the question. This uses Tastle and Wierman's measure of dispersal. The shorter the line, the more agreement in the answers for the question. The longer the line, the less agreement. The questions are then ordered from more to less agreement.'
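For the curious, Tastle and Wierman's consensus measure has a closed form: Cns(X) = 1 + Σ p_i log2(1 − |x_i − μ| / d), where p_i is the proportion of respondents choosing category x_i, μ is the mean response and d is the width of the scale (4 for a 1–5 Likert scale). Below is a minimal Python sketch of that published formula – my own illustrative implementation, not Rob's Tableau calculation:

```python
import numpy as np

def consensus(counts, scale=np.arange(1, 6)):
    """Tastle & Wierman (2007) consensus for ordinal (Likert) responses.

    counts: respondents per category, e.g. Strongly Disagree..Strongly Agree.
    Returns 1.0 for total agreement, 0.0 for an even split of the extremes.
    """
    scale = np.asarray(scale, dtype=float)
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()                     # proportion per category
    mu = (p * scale).sum()              # mean response
    d = scale.max() - scale.min()       # width of the scale (4 for 1..5)
    m = p > 0                           # skip empty categories (0 * log 0)
    return 1 + (p[m] * np.log2(1 - np.abs(scale[m] - mu) / d)).sum()

print(consensus([0, 0, 100, 0, 0]))    # everyone answers 3 -> 1.0
print(consensus([50, 0, 0, 0, 50]))    # polarised extremes  -> 0.0
```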

Approach #6: Neil Richards

[Image: Neil Richards's viz]

Link to interactive viz: https://public.tableau.com/profile/neil.richards#!/

'For my visualisation I was aware that there were some great examples in Big Book of Dashboards chapter 3, so essentially I just wanted to recreate a simple version of the jitter plot at the start of the chapter, with one for each question. I didn't actually look at the book until just now, writing these bits, so I didn't realise how well I'd remembered the look of the chart!

I added the overall average line after feedback on Twitter, and fixed the chosen provider to always show in the middle (with all other x positions just placed at random).'

Approach #7: David Hoskins

[Image: David Hoskins's viz]

Link to interactive viz: https://public.tableau.com/profile/hoskerdu#!/

'I decided to take a more personal approach to the dataset and compare responses for three institutions where my stepdaughter Sophie is thinking of studying Social Work next year: Bournemouth, Leeds Trinity and Nottingham Trent.

I knew I had the perfect photo (taken at my partner’s graduation), and soon decided on a grid structure for my dashboard, with a column for each institution showing KPIs for overall comparison and the questions grouped into two categories to keep the layout uncluttered.

Narrowing the focus also allowed me to display the proportion of responses for each question as diverging stacked bars (using the instructions at https://t.co/LyB0ikBSmB) and show the detail behind the aggregated metrics.'
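Diverging stacked bars are a classic Likert display: each bar is shifted so the neutral midpoint sits on a common zero line, with disagreement to the left and agreement to the right. The linked instructions cover the Tableau build; here is a minimal Python/matplotlib sketch with made-up response percentages:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up % of responses per Likert category for four questions.
questions = ["Q1", "Q2", "Q3", "Q4"]
cats = ["Strongly disagree", "Disagree", "Neither", "Agree", "Strongly agree"]
data = np.array([[5, 10, 15, 45, 25],
                 [8, 12, 20, 40, 20],
                 [3,  7, 10, 50, 30],
                 [10, 15, 15, 35, 25]], dtype=float)

# Shift each bar so the middle of "Neither" sits on the zero line:
# disagreement to the left, agreement to the right.
left = -(data[:, 0] + data[:, 1] + data[:, 2] / 2)

colours = ["#ca0020", "#f4a582", "#d5d5d5", "#92c5de", "#0571b0"]
fig, ax = plt.subplots(figsize=(8, 3))
for i, (cat, colour) in enumerate(zip(cats, colours)):
    ax.barh(questions, data[:, i], left=left, color=colour, label=cat)
    left = left + data[:, i]
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("% of responses (centred on neutral)")
ax.legend(ncol=5, fontsize=7, loc="upper center", bbox_to_anchor=(0.5, 1.3))
plt.tight_layout()
plt.show()
```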

Approach #8: Jennie Holland

[Image: Jennie Holland's viz]

Link to interactive viz: https://public.tableau.com/profile/jennie.holland#!/

'For my analysis I used the summary data at country level. The first sheet in the workbook shows survey results for the question groups at country level. As the data wasn't too big, I wanted to be able to show this all on one sheet, with highlights on country and UK comparison to show the variation at a glance.

The second page looks at the questions asked within each question group. To help with consistency between the two sheets, I grouped the questions into the same categories used in the first sheet, and allowed the user to select the group of questions they are interested in. I was quite keen on using one scale for all question groupings that were selected in the filter, to enable the viewer to see the distribution of scores across the question groups.’

Approach #9: Neil Davidson

[Image: Neil Davidson's small multiples viz]

Link to interactive viz: https://public.tableau.com/profile/neild#!/

Neil has not had the chance to provide a quick summary of his approach, but it is easy to see straight away that he was familiar with the data and the HE sector. In his visualisation he demonstrates the correlation between an institution's overall score for each of the questions and that institution's rank in the Complete University Guide (CUG) league table.

Not all providers with NSS results appear in the league table, so only higher education establishments are visualised. Though Neil's work is still in progress (as he admits himself), it demonstrates that the strongest correlation with CUG rank appears to be with Q15: 'The course is well organised and running smoothly' (the question whose line of best fit is closest to 45 degrees). Of course, a league table is not based merely on NSS results, but the bottom line is that a set of small multiples can work well for showing the correlation between two variables.
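The pattern Neil used, one small scatter per question with a line of best fit, is straightforward to reproduce. Here is a Python/matplotlib sketch with randomly generated data; the question numbers and slopes are illustrative, not the actual NSS or CUG figures:

```python
import numpy as np
import matplotlib.pyplot as plt

# Randomly generated % agree vs league-table rank for four questions,
# each with a different (illustrative) strength of relationship.
rng = np.random.default_rng(2)
rank = np.arange(1, 121)
panels = {"Q12": -0.05, "Q15": -0.12, "Q18": -0.03, "Q26": -0.08}

fig, axes = plt.subplots(2, 2, figsize=(8, 6), sharex=True, sharey=True)
for ax, (q, slope) in zip(axes.flat, panels.items()):
    score = 88 + slope * rank + rng.normal(0, 3, rank.size)
    ax.scatter(rank, score, s=8, alpha=0.5)
    b, a = np.polyfit(rank, score, 1)          # line of best fit
    ax.plot(rank, b * rank + a, color="red")
    ax.set_title(q)
fig.supxlabel("CUG league table rank")
fig.supylabel("% agree")
plt.tight_layout()
plt.show()
```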

Approach #10: Elena Hristozova & Dave Kirk

[Image: Elena Hristozova and Dave Kirk's sunburst viz]

Link to interactive viz: https://public.tableau.com/profile/elena.hristozova#!/

The last approach to demonstrate is somewhat of a joint effort between myself and Dave. Dave had the idea to create a visualisation showing how many of the questions scored below the score for Q27 – Overall Satisfaction. Due to time constraints he was unable to complete his visualisation, but after a couple of exchanged messages I realised that this was an ideal question to answer using the sunburst chart I was learning how to make at the time.

The end result is the visualisation below, which compares the scores for all 26 questions in the survey to the overall satisfaction score for each of the countries and regions in the UK. The analysis is presented through colour encoding: negative differences are shown in red and positive differences in blue, and the intensity of the colour indicates how big or small the difference is.
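A full sunburst is more than a short snippet can do justice to, but the core encoding is easy to show: compute each question's signed difference from the Q27 overall satisfaction score and map it onto a diverging red-to-blue palette whose intensity grows with the size of the difference. A Python/matplotlib sketch with randomly generated scores:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

# Randomly generated question scores vs a made-up Q27 overall score.
rng = np.random.default_rng(3)
overall = 84.0
question_scores = overall + rng.normal(0, 5, 26)
diff = question_scores - overall        # signed difference from Q27

# Diverging palette centred on zero: red below overall, blue above,
# with intensity proportional to the size of the difference.
norm = TwoSlopeNorm(vmin=-15, vcenter=0, vmax=15)
colours = plt.cm.RdBu(norm(diff))

fig, ax = plt.subplots(figsize=(9, 3))
ax.bar([f"Q{i}" for i in range(1, 27)], diff, color=colours)
ax.axhline(0, color="black", linewidth=0.8)
ax.set_ylabel("Difference from Q27 (% points)")
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
```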

To sum up…

I hope you have enjoyed exploring the different ways in which the Tableau community in the Midlands approached the 2017 NSS results. Some analysts drew inspiration from books and the literature, others took a very narrow focus on the data, and the rest simply practised visualising survey data; they all had a different set of questions driving their analyses and visualisations!

Thank you,

Elena | #VisualisingHE

P.S. If there were any more submissions to the #MidlandsTUG data challenge that have been missed from this post, do get in touch!