We're just pulling ServiceNow information into our data warehouse and beginning to visualize it for IT leaders. The use case is adding visibility, starting with time cards and project-related costs.
We have teams looking into Jira, specifically in our Digital stream, but there are technical roadblocks (primarily around easy access to the data). I could certainly see us looking into Splunk data too at some point.
No screenshots or workbooks as of yet, just finishing the first round of dev.
This is great - thank you for sharing Tim Latona!
How are you sharing the information with your IT leaders? Tableau Server? Weekly reports?
Have you noticed any impact from the short amount of time that they've been shared? Are the leaders asking more questions?
We would love to see screenshots or hear more use cases as they're developed, if you're willing to share!
1. What is your role?
In a previous job, I was a BI Developer who wore many hats, one of which leaned toward DBA work.
2. What tools do you pull data from? (I.e. Splunk, Jira, ServiceNow)
We had 99% of our critical data on SQL Server from many databases across a handful of servers.
3. What's your use case?
Automated jobs ran on SQL Server daily to load and transform data for business use. These jobs had been created over the years by business users and were not well crafted or organized. They began running slowly enough, and suffering enough failures, that I was tasked with improving the speed and quality of the whole ecosystem. Among the tasks I undertook to understand the problem, I created a query to pull in logged information about the various SQL Server Agent jobs that ran daily, calculating run times and showing outcomes of the jobs at both summary and detailed step levels.
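The query itself isn't shared here, but the run-time calculation it describes hinges on one real quirk of SQL Server's `msdb` logging: `sysjobhistory` stores `run_duration` as an HHMMSS-encoded integer (a 1h 2m 3s run is stored as `10203`), and `step_id` 0 marks the overall job outcome as opposed to individual steps. A minimal Python sketch of the decoding and per-job averaging follows; the row shape is an assumption for illustration, not the actual query output.

```python
def duration_to_seconds(run_duration: int) -> int:
    """Convert msdb's HHMMSS-encoded run_duration integer to seconds.

    SQL Server Agent stores a 1h 2m 3s run as the integer 10203.
    """
    hours, rest = divmod(run_duration, 10000)
    minutes, seconds = divmod(rest, 100)
    return hours * 3600 + minutes * 60 + seconds


def average_run_seconds(history: list[dict]) -> dict[str, float]:
    """Average run time in seconds per job from sysjobhistory-style rows.

    Only step_id == 0 rows (the overall job outcome) are counted,
    so individual step rows don't double-count the same run.
    """
    totals: dict[str, list[int]] = {}
    for row in history:
        if row["step_id"] != 0:
            continue
        totals.setdefault(row["job_name"], []).append(
            duration_to_seconds(row["run_duration"])
        )
    return {job: sum(secs) / len(secs) for job, secs in totals.items()}
```

Feeding the averages (and the raw per-run durations) into Tableau is then just a matter of exporting or live-connecting to that result set.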
I used Tableau to visualize that data in a number of ways, allowing me to identify bottlenecks, sequential dependencies, jobs that were running longer than normal, cycles and patterns in performance, and a few other key metrics. This was the key to knowing where to focus attention for performance, and how to restructure the whole system for quality and clarity.
4. What kind of impact does this make?
The slowness of the jobs and frequent failures meant it was often mid-morning before critical data was available for reporting and analysis, so a whole team of account managers was essentially unable to do their jobs for up to half the day on a regular basis. Finding the problems and doing some refactoring brought failures down to a very infrequent occurrence, and jobs were completing at almost the same time the account managers were arriving at their desks in the morning.
5. Can you share any screenshots or workbooks? If so, please do!
Before you judge, this was my hack and slash to find the problems, so I took no care to make this beautiful or polished for presentation. Most of the gritty detail is in the tooltips.
The average run times allowed me to spot trends and outliers, and the Gantt was critical for spotting dependencies and bottlenecks. There are a bunch of filters off screen to select individual servers and jobs for closer views, and there were a few other sheets for comparing run times against average run times.
Thanks for asking!!!!
Absolutely - we use Tableau to analyze much of our IT data.
1. What is my role? One of my primary roles is to ensure that Tableau Server (known internally as Insights) is humming, fully functional, and has all the right information for tracking usage.
2. We pull from many places for monitoring of IT data, including JIRA, Tabmon, Perfmon, Ganglia, and TCQA (Total Company Quality Assurance).
3. The use cases are unique to the need.
- JIRA - tracking tickets and how long it takes to close them and who works on them
- Tabmon and Perfmon and Tableau internal Postgres - monitoring of Insights usage and performance of the rendering and extracts
- Ganglia - monitoring of our DW resource queues
4. I will name the Ganglia impact. There are many ad-hoc analytic SQL queries that run against our DW. The dashboard gives an immediate view of the supply of available queue slots against the demand (the number of queries), so the analysts can plan ahead on when to best submit their ad-hoc queries.
5. Here are a few screenshots
DW monitoring - on four TVs throughout the office that update every 2 minutes with fresh data
JIRA ticket queues
Internal Tableau (Insights usage)
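The supply-vs-demand comparison behind the Ganglia dashboard in item 4 boils down to one number per resource queue. Here's a minimal Python sketch of that calculation; the field names are assumptions for illustration, not the actual Ganglia schema.

```python
def queue_headroom(queues: dict[str, dict[str, int]]) -> dict[str, int]:
    """Free slots per resource queue: capacity minus running and waiting queries.

    A negative number means queries are already stacking up - the signal
    that tells analysts to hold their ad-hoc work for a quieter moment.
    """
    return {
        name: q["slots"] - q["running"] - q["queued"]
        for name, q in queues.items()
    }
```

A dashboard surfacing this headroom per queue, refreshed every couple of minutes, gives analysts exactly the "when should I submit?" answer described above.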
Jeff, it looks like one of the pictures you shared is on a TV monitor - is that showing some place in the office for more people to see? We have a couple of these hanging in various departments throughout Tableau and we find them really useful.
Yes, it's on at a minimum of 4 TV's in the office. And people do find it super useful!!!!
And on a capacity note (for those that care), the TVs are auto-refreshed every 2 minutes via a .Net app that has the dashboard embedded. It used to be that each TV would refresh independently, so the rendering had to occur 4 times simultaneously via the guest user. This has since changed to be more efficient. Now, the .Net app references the dashboard once every 2 minutes and downloads it as a local PNG copy. Then each TV references the local copy. If you want to know more specifics, let me know and I'd be happy to share.
How do you have your Tableau Dashboard to auto-refresh?
Thank you for responding so promptly!
That gave me some information, but not enough to get started on a similar project. So I have some more questions to clarify this task.
What is the .Net app that you're referring to - that takes a snapshot (PNG) of the webpage every 2/4 minutes?
Does the app place this PNG file in a shared location from which it is picked up by the TV/display?
Thanks for all your inputs!