A peer review document would depend on the points being reviewed, and I'm not sure yet what those should be.
Maybe we can change this into a thread where peers in the community comment on which points should be reviewed.
We can then use those points to create a peer review document template for everyone to use.
Mounika here is trying to put together a peer review document.
From your experience, what are the things you regularly review in a Tableau workbook that you feel should be included in a peer review document?
Any pointers would be hugely appreciated.
This is a brainstorm only, so I'm not sure everything applies; just take whichever make sense:
- Aliases: proper use and proper naming
- Conformance to naming standards for fields and strings
- Proper location / balancing of logic (i.e. SQL vs. field calc vs. table calc vs. included in the extract, etc.)
- Hard-coded lookups which should be stored in database tables
- Coding standards; often org-specific, but may also be industry standards or best practices. An example is view complexity far beyond what Tableau is capable of: performance-heavy, difficult to support, etc.
- View standards: use of color, view size, fixed vs. dynamic sizing, branding, etc.
- Newly-created database tables that are redundant with already-available data sources or which should become shared/controlled data sources
- that's all my brain will produce for now...
I have read a blog post where this is explained:
My first manager and mentor, Simon Hamm, instilled this practice in me and I will never forget it; over the years it has caught many mistakes in teams and projects I have run. It is simple and can be applied to any BI workflow or dashboard.
Nothing leaves the team until it has been checked by a peer; that person is responsible for finding the issues you will inevitably have made.
Peer review should be the cornerstone of the business; it should apply to everyone, and the assumption during checking should be that there are errors to find. Senior members of the team are not exempt from errors (often they are the worst culprits, as they take on pressure work and publish it quickly).
In the corporate world, implementing a peer review policy can also be backed up by personal objectives, instilling an ethos of checking rather than a culture of blame: e.g. replace an objective saying “In the next 6 months your reports should produce no errors” with “Everything you produce should be reviewed by a peer”. People make mistakes; removing the blame increases efficiency and morale, and shifts the onus onto ensuring the policy designed to catch those mistakes works.
Fact checking and other simple functional tests should be part and parcel of the early part of any report / data testing process. If you're producing a dashboard on the number of people in the UK, check that the top-line numbers look right. It's often said the devil is in the detail, but it's important that the broad numbers are checked first; I've seen situations where reports that had been “checked extensively” contained glaring mistakes in basic headline figures, which were missed because of the focus on the detail. I imagine this could have been the issue with the Tibco numbers.
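A "broad numbers first" check like this can be automated. The sketch below is a minimal illustration in Python, not from the original post: the dashboard total, the reference figure, and the 5% tolerance are all assumed example values.

```python
# Hypothetical sketch: compare a dashboard's headline total against an
# independently known ballpark figure before diving into the detail.
# All numbers here are illustrative assumptions.

headline_total = 66_800_000   # what the dashboard reports
known_ballpark = 67_000_000   # independent reference figure
tolerance = 0.05              # allow a 5% deviation before raising an alarm

deviation = abs(headline_total - known_ballpark) / known_ballpark
assert deviation <= tolerance, (
    f"Headline figure off by {deviation:.1%}; check before the detail!"
)
print(f"Headline within {deviation:.1%} of the reference figure")
```

The point is that this takes seconds to run against any reference figure you trust, and it catches exactly the class of headline error the post describes.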
This sits firmly alongside peer review, as replicating the results of the module / workbook should be the mainstay of the checking process and is the responsibility of the peer “checker”. Thankfully, tools like Tableau and Alteryx (and other rapid-development BI tools) make this easy. Checking a Tableau report? Use Alteryx to do some ad hoc analysis and check the numbers. Checking an Alteryx workflow? Drop the data into Tableau and do some visual checks.
Hand-crank a few rows of data, say for an individual or product, through the entire process: are the results what you'd expect? Checking a few rows is much simpler than checking 10 million.
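As a sketch of the hand-cranking idea, assuming a simple Python transformation step (the function, field names, and figures below are all hypothetical, not from the post):

```python
# Hypothetical sketch: push a couple of hand-picked records through the
# same transformation the full pipeline uses, and compare against values
# worked out on paper beforehand.

def net_revenue(row):
    """The transformation under review: gross minus discount, per row."""
    return row["gross"] * (1 - row["discount_rate"])

# Two rows hand-cranked on paper beforehand.
samples = [
    {"id": "A-001", "gross": 100.0, "discount_rate": 0.10},  # expect 90.0
    {"id": "B-002", "gross": 250.0, "discount_rate": 0.00},  # expect 250.0
]
expected = {"A-001": 90.0, "B-002": 250.0}

for row in samples:
    result = net_revenue(row)
    assert abs(result - expected[row["id"]]) < 1e-9, (
        f"{row['id']}: got {result}, expected {expected[row['id']]}"
    )
print("Spot checks passed")
```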
Ensure processes track MoM and YoY trends; small data quality issues can be difficult to pick up and will only manifest themselves over time. Keeping headline QA (Quality Assurance) figures of key datasets can help track these trends and pick out issues with data processing.
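One way to keep those headline QA figures honest is a trivial month-over-month comparison. This is a minimal Python sketch, assuming row count as the headline figure and a 20% tolerance; both choices, and the sample numbers, are illustrative assumptions.

```python
# Hypothetical sketch: keep a headline figure (here, row count) per monthly
# load and flag month-over-month swings beyond a tolerance.

monthly_row_counts = {
    "2023-01": 10_120,
    "2023-02": 10_340,
    "2023-03": 6_900,   # a silustrative silent data-quality drop
}

def mom_alerts(counts, tolerance=0.20):
    """Return months whose figure moved more than `tolerance` vs. the prior month."""
    alerts = []
    months = sorted(counts)
    for prev, cur in zip(months, months[1:]):
        change = (counts[cur] - counts[prev]) / counts[prev]
        if abs(change) > tolerance:
            alerts.append((cur, round(change, 3)))
    return alerts

print(mom_alerts(monthly_row_counts))  # the March drop gets flagged
```

The same shape works for YoY: just compare each month against the same month a year earlier.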
Modular workflows like Alteryx's are easy to build, but when building them people need to ensure that checks and balances are built into the logic; tools like Message and Test can be used to build simple checks, e.g. are joins 100%? Are there duplicate records? Build in outputs at each of these key stages and ensure these row-level error logs are checked if they contain data. Without these checks, modules can run unattended for a long time before anyone notices that key lookup tables haven't been updated and data was dropped during the process.
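The join-completeness and duplicate checks described above can be mirrored in plain Python. This is an illustrative sketch only; Alteryx's Message and Test tools are the real mechanism, and the order/lookup data below is made up.

```python
# Hypothetical sketch: after a lookup join, verify the join matched 100%
# and that no duplicate keys slipped in; route offenders to an error log
# instead of silently dropping them.

from collections import Counter

orders = [{"order_id": 1, "product": "P1"},
          {"order_id": 2, "product": "P2"},
          {"order_id": 3, "product": "P9"}]   # P9 is missing from the lookup
product_lookup = {"P1": "Widgets", "P2": "Gadgets"}

# Check 1: is the join 100%? Unmatched rows go to an error log, not the void.
unmatched = [o for o in orders if o["product"] not in product_lookup]

# Check 2: any duplicate order IDs?
dupes = [k for k, n in Counter(o["order_id"] for o in orders).items() if n > 1]

if unmatched or dupes:
    print(f"QA failed: {len(unmatched)} unmatched rows, {len(dupes)} duplicate keys")
else:
    print("QA passed")
```

Wiring a check like this after every join is cheap, and it is exactly what catches the stale-lookup-table failure mode the post warns about.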
User Acceptance Testing
With rapid-development BI comes a whole new paradigm: UAT can and should be done in an agile and flexible way. Often business users are building their own reports, but even if not, co-locating individuals can lead to a much better experience for both parties.
Documentation, Documentation, Documentation
Just do it! Documentation doesn't have to be dry or Word / Visio based, though.
Annotate requirements in the tool itself (both Alteryx and Tableau provide a rich set of tools to let users do this as they build workflows and dashboards, and other tools have similar features). Comment formulas and use visual workflows to provide commentary on the analysis and decisions. Hide and disable dead ends / investigations, but don't delete them; they are as useful as the finished result, as they show the development process.
Also document the checking processes: released a report with an error? Learn from it. Keep a diary of checks for each dataset to refer back to; there's nothing worse than a mistake that's needlessly repeated later.