Hi Phyllis. If you're looking to trigger your alert based on whether or not the extract refresh was successful, you'll need to get that data from the Tableau Server PostgreSQL repository database. I recommend using a data source I built for this called TS Background Tasks. You'll need to enable access to the PostgreSQL database before you can connect. Once you connect, filter to Job Name = Refresh Extract (or perhaps Increment Extract too), then Item Name = whatever your data source is called. Add a relative date filter on Created At that matches the cadence of your alert (say, last 24 hours for a VizAlert that runs daily). You can then copy in the Email ___ fields and edit them as necessary to fill in the content you want to fire off when the alert runs.
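For what it's worth, the filter logic above boils down to something like this. A minimal Python sketch, purely illustrative, assuming each row from the background tasks data source is a dict with job_name, title (the item name), and created_at fields - the names mirror the repository's background_jobs columns, but verify them against your version of Tableau Server:

```python
from datetime import datetime, timedelta

def recent_refresh_jobs(jobs, item_name, hours=24, now=None):
    """Filter repository job rows down to recent extract refreshes
    for one data source (the same filters described above)."""
    now = now or datetime.now()
    cutoff = now - timedelta(hours=hours)
    return [
        j for j in jobs
        # Exact job-name strings vary slightly by Server version; check yours.
        if j["job_name"] in ("Refresh Extracts", "Increment Extracts")
        and j["title"] == item_name
        and j["created_at"] >= cutoff
    ]
```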
Does that help answer your question?
If you want to make the VizAlerts refresh dependent on a successful extract refresh in JAMS, just run tabcmd refreshextract with the --synchronous flag. That forces tabcmd to 'watch' the extract refresh as it runs, and it will only return success if the refresh actually completed.
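As a sketch, here's how that might look if you drive tabcmd from a script rather than a raw batch file. The data source name and the runner hook are my own assumptions for illustration; the point is that with --synchronous, tabcmd's exit code reflects the outcome of the refresh itself:

```python
import subprocess

def refresh_extract_sync(datasource, runner=subprocess.run):
    """Kick off a synchronous extract refresh; True only on success.

    With --synchronous, tabcmd waits for the background job to finish,
    so a zero exit code means the refresh itself completed successfully.
    """
    cmd = ["tabcmd", "refreshextract",
           "--datasource", datasource, "--synchronous"]
    return runner(cmd).returncode == 0
```

The `runner` hook is only there so the logic can be exercised without a live server; in production you'd let it default to `subprocess.run`.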
Our Data Warehouse DBAs have an ETL job that brings the data over from Tableau Server to a SQL Server every 15 minutes. We are building a Tableau dashboard to display at a glance the jobs that failed. We've had success using the background tasks table, and it shows failures for jobs run via a Tableau schedule, but we are not seeing how to determine whether a refresh (or failure) came from tabcmd versus a schedule.
What actually tells you the refresh was via tabcmd - or is there any indication?
We are already running what you suggested, although I'm not sure it's the same way you're doing it.
Our DBA says he's not getting the error message back in the batch file he runs, so he's going to call tabcmd directly from JAMS instead of calling a batch file, to see if he can get better information that way.
Thanks, we will test this out.
So I'm not totally sure I understand what the problem is, Phyllis. Let me try and describe what it sounds like you're trying to solve--tell me where this is wrong:
So you have a third-party scheduler product (JAMS...fun name!) that you use to call tabcmd to automate Tableau Server data extract refreshes on data sources that rely on upstream ETL processes, so that the extracts are refreshed at the end of each ETL cycle. You have VizAlerts built that rely on these extracts (sounds like these are some kind of email burst reports). But you don't want to send out the reports if the data is still stale, so you're trying to make your VizAlerts conditional on when the extracts ran successfully, and get some kind of error message / report back to the Admin(s) if they failed.
Is that right?
I'll reply assuming that I've got things pretty much right, and hope that it helps answer your questions. So the ETL -> tabcmd -> extract refresh process could fail at a lot of different points, right? JAMS could fail, the ETL could stop running or fail, the tabcmd call could fail, or the extract refresh could fail. But what you really care about at the end of the day is: did I get fresh data into my extracts, or not? If I did, then send my batch emails out via VizAlerts. If I didn't, then email the admins to tell them that something broke, and ideally, what it was.
If I've got that right, then I think your best approach would be to trigger your VizAlert based on whether or not the data in your extracted data source on Tableau Server is fresh. There might be some indicator in the data that would allow you to do this. For example, say you've got a bunch of sales transactions in your data extract, with a Date Created and a Date Modified on each sale record. If you knew that every day you should expect to see updated dates in there, you could use that to send your VizAlerts. It would be something like creating a new calculated field that returns the later of the two date fields, then testing for MAX([latest date]) being within the last 24 hours or so. It might require a FIXED LOD calculation to ensure you're spanning the entire data set and ignoring whatever other filters you've got on your sheet.

So in the end, the logic would be: "if there are any records anywhere in my data source that were created or modified within the last 24 hours, then send the VizAlert reports out." In theory, that would prevent your reports from going out if the data had not been updated. You'd need to be sure the data really should change every 24 hours, though. You could then also build a second VizAlert using the same exact filter, but set to False, which would trigger an email to your Admins alerting them that the data had not changed in 24 hours.
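As a rough sketch in Tableau's calculation language - the field names [Date Created] and [Date Modified] are just the hypothetical ones from the example above, and the 24-hour window is an assumption you'd tune to your cadence:

```
// Calculated field [Latest Change]: the later of the two dates per record
MAX([Date Created], [Date Modified])

// Calculated field [Data Is Fresh]: TRUE if anything anywhere in the
// data source changed in the last 24 hours (the FIXED LOD spans the
// whole data source and ignores the sheet's filters)
{ FIXED : MAX([Latest Change]) } >= DATEADD('hour', -24, NOW())
```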
Another approach would be to look for a Success record in the TS Background Tasks data source I described earlier, and use that to trigger your VizAlert reports. You'd need to blend that data into your report, probably based on some arbitrary field you match on both data sources (1 = 1, effectively). So that approach would basically be saying: "If Tableau Server successfully refreshed the data extract(s) my VizAlert relies on in the last 24 hours, then send the VizAlert reports out." You'd have reasonably good assurance that things were correct with your data at that point. You could also construct a report that looks for a NULL or False value to alert you if no successful extract refresh had taken place, and alert the admins accordingly.
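Putting the two alert conditions together, the decision logic is effectively this. A Python sketch, assuming finish_code = 0 means success, as it does in the repository's background_jobs table:

```python
def alert_action(jobs):
    """Decide which VizAlert should fire from recent refresh job rows.

    `jobs` should already be filtered to the relevant data source and
    time window; finish_code 0 indicates a successful refresh.
    """
    if any(j["finish_code"] == 0 for j in jobs):
        return "send_reports"   # fresh data: let the batch emails go out
    return "email_admins"       # nothing succeeded: alert the admins
```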
Let me know if I'm on the right track, here...
We have a process like you described above in place for one of our very critical, time-sensitive VizAlerts, but we have other VizAlerts that are refreshed via tabcmd that we want other team members (who are not admins or owners of the workbooks) to monitor. We built a Tableau dashboard that resembles the error view in the Admin tool, showing Data Source, Failure, Refresh Failed, Last Refresh, and Resolution Details. However, we do not believe that the data sources refreshed via tabcmd display here, so we want to capture those as well.
This way we will know whether the data sources needed for this workbook (VizAlert) were all refreshed. As I'm writing this, it occurs to me that we should have these other alerts set up this way as well.
But in the interim, we just want to be able to tell from the background jobs which job was refreshed from a schedule versus which was refreshed via tabcmd, and whether the tabcmd run was successful or not.
Assume we are not using VizAlerts - how do you determine from Tableau Server whether a data source has been refreshed using tabcmd? Our DBA says he runs a batch file that issues a tabcmd call to the server, but he doesn't know if it was successful or not. How can you determine success?
Any data extract refresh that's even attempted should show up in the background_jobs table, so tabcmd-initiated refreshes should be included as well. But tabcmd itself can fail, and capturing its error messages in a batch script is notoriously annoying. If tabcmd fails to make the call, no extract refresh will be attempted, and the internal PostgreSQL database will have no background_jobs record of it. There may be data in http_requests that records the request itself, which might allow you to detect when tabcmd successfully asks the Server to enqueue the extract refresh task - and probably the HTTP response as well.
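If the batch file is swallowing the error text, one workaround is a thin wrapper that captures both the exit code and the output, so there's something to log or email when tabcmd fails. A sketch - the command list would be whatever tabcmd invocation you're already making:

```python
import subprocess

def run_and_capture(cmd):
    """Run a command, returning (succeeded, combined stdout + stderr).

    Unlike a plain batch file, this keeps the command's error text
    around so a failure can be written to a log or forwarded to admins.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr
```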
But again, because of all these possibilities, I still think it's best to look at the result, and not the individual upstream processes. That approach has worked well for us in monitoring our own ETL failures.
What column on the background_jobs table, and what value, would indicate whether a refresh was performed by tabcmd instead of a Tableau scheduled job?
Actually, I'm not able to find any way to track tabcmd calls that specifically refresh extracts - I checked all the tables I know. I can find the login events if you're using tabcmd to log in before refreshing the extracts, but unless you're using a unique login for each tabcmd refresh call per ETL workflow, I don't think that'll help much. In lieu of a solution for that problem, can I ask why it's important to know which process caused the extracts to refresh? Again, it would seem to me that the important thing is to know that the data was current when you expected it to be - and if that's the case, there are other ways to skin that cat, as they say.