I work in medical education and many of our medical specialties use an end of rotation survey as one method of building up a picture of the quality of training in that rotation.
The surveys are based on a 5-point Likert scale: statements are presented and the trainee doctors are asked to say how strongly they agree with them. Some statements are "negatively phrased", so that agreement would be considered a problem; others are worded positively, so that agreement would be considered good practice in the placement. For example:
"I was too often expected to work beyond the limits of my competence" (Strongly disagree, disagree, neither agree nor disagree, agree, strongly agree)
In this case, we want to ensure that our doctors are not being asked to work beyond the limits of their competence, so "Strongly disagree" attracts a score of 100. If they strongly agree with the statement, the response is scored 0, with 25-point increments between.
"I found my clinical supervisor to be approachable and supportive" (Strongly disagree, disagree, neither agree nor disagree, agree, strongly agree)
In this case, we want to ensure that clinical supervisors are approachable and supportive, and address any problems with those who might not be, so strong agreement with the statement attracts a score of 100 and strong disagreement 0, again with 25-point increments between.
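For anyone who wants to reproduce the scoring outside Tableau, the mapping described above can be sketched in Python. This is just an illustration of the logic, not our actual workbook calculation; the dictionary and function names are made up for the example:

```python
# Map each Likert response to a 0-100 score, reversing the scale
# for negatively phrased statements. Names are illustrative only.
POSITIVE = {
    "Strongly disagree": 0,
    "Disagree": 25,
    "Neither agree nor disagree": 50,
    "Agree": 75,
    "Strongly agree": 100,
}
# Negatively phrased items are simply the positive scale reversed.
NEGATIVE = {resp: 100 - score for resp, score in POSITIVE.items()}

def score_response(response, negatively_phrased=False):
    """Return the 0-100 score for a single Likert response."""
    scale = NEGATIVE if negatively_phrased else POSITIVE
    return scale[response]

# "I was too often expected to work beyond the limits of my
# competence" is negatively phrased, so strong disagreement
# scores 100:
score_response("Strongly disagree", negatively_phrased=True)  # 100
```

In Tableau this same logic lives in a calculated field per question, but the reversal trick is identical.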
Using this method, we can calculate an average score per question for each group of respondents; the survey is taken by all trainees across every site and hospital trust in the region. I use Tableau to create respondent groups, each of which gets an average score based on the Likert responses. The score is compared to threshold values (<40 = red, >60 = green, in between = orange) and the blobs are coloured accordingly. The size of each blob is based on a distinct count (COUNTD) of the survey response ID.
This allows our training programme directors and quality leads to tell "at a glance" whether there are any particular issues at specific hospitals, sites or departments. The demographic data included in the survey export (pre-loaded into the system from our training database) would also allow us to group by any measure: seniority of trainees, post type, or curricula being followed. Potentially, combining common question data from many surveys and bringing in equality and diversity data could allow us to analyse across many different medical, surgical and psychiatric specialties.
I also create a combined "domain scores" report: the scores for questions following a theme (in the example above, all questions relate to clinical governance) are summed and divided by the number of questions within the domain. This is obviously much higher level, but it can often be enough to spot issues and address them.
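A domain score is therefore just the mean of the per-question averages within the theme. A minimal sketch, with hypothetical question names and averages:

```python
# A domain score is the sum of the per-question average scores
# divided by the number of questions in the theme.
# The question labels and values below are made up.
question_averages = {
    "worked beyond competence": 62.5,
    "supervisor approachable": 80.0,
    "induction adequate": 55.0,
}

domain_score = sum(question_averages.values()) / len(question_averages)
print(f"Clinical governance domain score: {domain_score:.1f}")
```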
In addition to the above, I also produce the Likert distribution charts demonstrated in Alex Kerin's templates in this forum, though I use the score instead of the "agree/disagree" scale, to avoid any ambiguity.
I am more than happy to provide a .twbx to anyone who's interested in this method, though you'll have to give me a while to generate some dummy data for it!