Earlier this week I received all the feedback that people offered on my session at SQLBits 7 in York, “SSIS Dataflow Performance Tuning” (the video is available online if you wish to see it). As you may have gathered from previous posts on this blog and my less-SQLy-focused WordPress blog, I am a big fan of collecting and tracking both personal and public data, and session feedback lends itself very well to tracking because it is quantitative rather than qualitative; by that I mean attendees are invited to provide marks out of ten rather than (or, in the case of SQLBits, as well as) written comments.
The SQLBits feedback is also useful because they use a consistent format – the same questions are asked each time – which means it is particularly easy to track whether the scores that people give are trending up or down. I suspect that somewhere the SQLBits organisers have a big Analysis Services cube (ok, perhaps it’s an Excel pivot table) that allows them to analyse these scores per conference, speaker, track etc., and there’s no reason that we as session speakers cannot do the same thing. To that end I have started to store my feedback in an Excel spreadsheet of my own which, in the interests of transparency, is available for public viewing (only a web browser required) on SkyDrive at http://cid-550f681dad532637.office.live.com/view.aspx/Public/Misc/Personal%20SQLBits%20Session%20Feedback.xlsx. I have used a pivot table to aggregate all that feedback and here is a screenshot:
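If spreadsheets aren’t your thing, the same grouping-and-averaging that a pivot table does is easy to sketch in a few lines of code. This is just an illustration with made-up rows – the session and question names are real but the scores below are invented, not my actual feedback:

```python
from collections import defaultdict

# Hypothetical feedback rows: (session, question, score out of ten).
feedback = [
    ("SSIS Nuggets Live", "Speaker's knowledge of the subject area", 9),
    ("SSIS Nuggets Live", "Speaker's presentation skills", 7),
    ("SSIS Dataflow Performance Tuning", "Speaker's knowledge of the subject area", 9),
    ("SSIS Dataflow Performance Tuning", "Speaker's knowledge of the subject area", 8),
]

# Group the scores by (session, question), then average each group -
# the same aggregation an Excel pivot table performs.
groups = defaultdict(list)
for session, question, score in feedback:
    groups[(session, question)].append(score)

averages = {key: sum(scores) / len(scores) for key, scores in groups.items()}

for (session, question), avg in sorted(averages.items()):
    print(f"{session} | {question}: {avg:.1f}")
```

The nice thing about keeping the raw rows rather than only the averages is that you can re-slice them any way you like later – per conference, per question, per year.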
I am hereby making a public plea to the SQLBits organisers (on the off-chance that they are reading) to please continue to keep the feedback format consistent in the future and I encourage them to publish all of the feedback in an anonymised form.
I would also encourage anyone doing conference speaking to track their conference feedback in the same way that I am doing, so that you get an insight into whether or not you are improving over time. It is not difficult to set up, and maintaining it as you do more sessions takes very little effort.
Storing feedback data like this leads me to wider thoughts about well-known conventions and data format standardisation. Let’s imagine a utopia where there was a standard set of questions for capturing session feedback, used at every conference regardless of subject matter, location or culture; that would enable immense cross-conference and cross-discipline analysis – the data analyst in me goes giddy at the thought.
It is scenarios like this that drive my interest both in data formats such as iCalendar, microformats and RDF, and in emerging movements such as the semantic web and linked data, all of which I have written about in the past. I don’t know whether we will ever reach the stage where every piece of data has structured, descriptive and agreed metadata associated with it, but I live in hope.
Putting this data in the public domain is already paying rich dividends for me. Jen Stirrup (twitter), a whizz at data visualisation (a data viz wiz, if you will) here in London, has kindly done some further analysis on my feedback scores by downloading my spreadsheet and analysing it using a software package called Tableau. Here is the heatmap that Jen produced for me:
I’m not trying to boast about my scores here, truly I’m not (but thank you to all those who were generous with their scoring – I am rather humbled); what I want to show is the power of data visualisation. In this one image Jen has managed to convey much more information than I did by merely providing averages: I can see the range of all the votes, how many votes there were, the outliers, and the highest and lowest votes. Most of all I can see why my averages are changing, be they up or down. I now find myself asking questions and making observations like:
- Ok, my scores improved for “SSIS Dataflow Performance Tuning” over “SSIS Nuggets Live”, but clearly more people scored “SSIS Nuggets Live”, so which is more valuable?
- Clearly my averages for “SSIS Nuggets Live” are negatively affected by some very low scores. That’s something that could be investigated further.
- Which areas have more scope for improvement?
- In both sessions highest marks were attained for “Speaker’s knowledge of the subject area”. This was not something I particularly noticed when looking at the raw averages. Visualisation makes it stand out so much more.
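Getting at the vote distribution behind the averages, as the heatmap does, is straightforward once you have the raw scores. Here is a minimal sketch – the votes below are illustrative numbers I have invented for the example, not my real feedback:

```python
from collections import Counter

# Illustrative raw votes per session (invented for this example).
votes = {
    "SSIS Nuggets Live": [9, 8, 8, 7, 3, 2, 9, 8],
    "SSIS Dataflow Performance Tuning": [9, 9, 8, 9],
}

# Count how many votes each score received per session - this is the
# matrix a heatmap colours in. Note how the averages alone would hide
# the two very low outliers in the first session.
distribution = {session: Counter(scores) for session, scores in votes.items()}

# Print one row per session: counts for scores 1 through 10.
for session, counts in distribution.items():
    row = " ".join(str(counts.get(score, 0)) for score in range(1, 11))
    print(f"{session:35} {row}")
```

Even this crude text output surfaces the shape of the votes in a way a single average never can.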
This is all really valuable information and will become more so as I hopefully do more presentations in the future. Thank you for doing this for me Jen, I am incredibly grateful for your work on my behalf.
If you would like to read more about the academic theory behind heatmaps like this, be sure to check out Jen’s recent post SQLBits Feedback Post-Mortem, where she dissects her own SQLBits feedback in intimate detail; really interesting stuff that gives a flavour of Jen’s past and future presentation material.