Hey Arnold. The logs would likely be helpful in determining what the biggest bottleneck is for each execution.
Typically the biggest time suck is going to be rendering the customized dashboard(s), though, so if that's the case, the best way to improve throughput in the short term would likely be to improve dashboard performance. Since you're always passing in certain parameter filters, if there's a way to optimize your back end based on that info, it might be worth looking into. Switching the dashboard from a live connection to extracts, if it isn't already using them, would be another approach. There's Designing Efficient Workbooks | Tableau Software if you want to go deeper down that route.
If you're attaching multiple PNGs of a few different dashboards, ask yourself whether they always need to be included or whether they could be made conditional. If you don't have to generate them, you save time.
Finally, the best solution is this: if you or anyone at your org knows Python, you're welcome to propose a code change to how VizAlerts handles multi-threading so we can get faster performance on these kinds of bulk alerts. My only concern with increasing concurrency is making sure Tableau Server doesn't get inundated with too much activity at once. If we implement a separate thread pool for each alert, then running 4 VizAlert worker threads with 4 "action" threads (for lack of a better term) per VizAlert means a maximum of 16 threads that could all be trying to render vizzes from Tableau Server at one time! That might be tough on smaller clusters. Setting a total max of, say, 4 threads and then allowing each VizAlert to use any that aren't currently busy would be a better implementation, though a bit more work to code (it's probably the "right" way to do it).
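A minimal sketch of that shared-pool idea, using Python's `concurrent.futures` (function names like `render_viz` and `process_alert` are hypothetical placeholders, not VizAlerts' actual API):

```python
from concurrent.futures import ThreadPoolExecutor

# One shared pool caps the total number of simultaneous render
# requests hitting Tableau Server, no matter how many alerts run.
MAX_RENDER_THREADS = 4
render_pool = ThreadPoolExecutor(max_workers=MAX_RENDER_THREADS)

def render_viz(view_url):
    # Hypothetical stand-in for a Tableau Server render request.
    return f"rendered:{view_url}"

def process_alert(alert_views):
    # Each alert submits its renders to the shared pool; idle
    # workers are picked up by whichever alert has work queued.
    futures = [render_pool.submit(render_viz, v) for v in alert_views]
    return [f.result() for f in futures]

# Two alerts sharing the same 4-worker pool: at most 4 renders
# are in flight at once across both of them.
results_a = process_alert(["view/a1", "view/a2", "view/a3"])
results_b = process_alert(["view/b1", "view/b2"])
```

The per-alert-pool alternative would instead create a fresh `ThreadPoolExecutor` inside `process_alert`, which is simpler but lets the worst-case thread count multiply.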
A few points to add to what Matt said. If there are 4 distributions on the same schedule, then you'll also need to make sure you've bumped the number of threads up to 4 (assuming you're on VizAlerts 2.0). If you've already done that, then what we're theoretically seeing is 50 emails per thread taking 30 minutes, or roughly 36 seconds per email. That seems like a lot, so definitely check the VizAlerts log file for what might be slow.
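The arithmetic behind that estimate, assuming roughly 200 emails total split across 4 threads:

```python
# ~200 emails split across 4 threads (assumed totals from this thread)
emails_per_thread = 200 // 4
schedule_minutes = 30

# 30 minutes of wall time spread over each thread's 50 emails
seconds_per_email = schedule_minutes * 60 / emails_per_thread
print(seconds_per_email)  # 36.0
```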
A rough outline of what VizAlerts does is:
- download the schedule
- split up work into different threads
- for each thread:
  - download a trigger view
  - run some validation checks on the view
  - request the images & PDFs and download them (one at a time)
  - send the emails
Of these, the three most time-consuming parts are generally rendering all the images & PDFs, then downloading them, and then uploading them as part of the emails. So the main things to check are Tableau Server's performance, the performance of the rendered sheets, and your network performance.
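The outline above can be sketched roughly like this (all function bodies here are hypothetical stubs; the real VizAlerts code is different):

```python
from concurrent.futures import ThreadPoolExecutor

def download_trigger_view(alert):
    # Stub: would download the trigger view's CSV data.
    return {"alert": alert, "rows": ["row1", "row2"]}

def validate(view):
    # Stub: would run VizAlerts' validation checks on the view.
    return "rows" in view

def render_content_refs(view):
    # Images & PDFs are requested and downloaded one at a time,
    # which is why this step tends to dominate the runtime.
    return [f"{view['alert']}:{row}.png" for row in view["rows"]]

def send_emails(view, attachments):
    # Stub: pretend each attachment goes out in one email.
    return len(attachments)

def process_alert(alert):
    view = download_trigger_view(alert)
    if not validate(view):
        return 0
    attachments = render_content_refs(view)
    return send_emails(view, attachments)

# Work from one schedule is split across worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    sent = list(pool.map(process_alert, ["alert_a", "alert_b"]))
```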
There's also one sort-of-sneaky performance optimization: don't send images at all. Instead, use calculated fields so the subject & body of the actions in the trigger viz carry the necessary details, with HTML formatting in the body. So instead of a viz saying "CONGRATULATIONS YOU MET YOUR TARGET", the body could be some fancy HTML that says the same thing, maybe with a VIZ_LINK() content reference to link back to Tableau Server. That way you avoid the overhead of rendering & downloading the images entirely.
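To illustrate, here is a hypothetical example of the kind of HTML such an email body might end up containing. In the actual trigger viz this would be built by a calculated field, and VIZ_LINK() would be resolved by VizAlerts at send time; the URL and field values below are made up:

```python
# Sketch of an HTML email body that replaces a rendered image.
target = "Q3 Shipping"
body_html = (
    "<h2>Congratulations, you met your target!</h2>"
    f"<p>Target: <b>{target}</b></p>"
    '<p><a href="https://tableau.example.com/views/Targets">'
    "View the details on Tableau Server</a></p>"
)
```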
Thanks for the quick response. We do have 4 trigger views, so that would be in line with the 30-minute estimate. I will talk with our admin team about the threads setting. This dashboard is definitely extract-based, and the extract is very large, since it's carton data per courier for all shipments in all of our FCs in the US, along with their performance. So rendering the filter might take a bit of time. I will work with the developer to see if we can make some changes to improve the rendering time. We already removed the PNG from the email body, since we display the same info in the PDF attachment.
Hey Arnold. The latest version allows for multiple threads running within a single VizAlert (task threads). After all content references are processed, the actual emailing / SMS-ing will be split among these threads, which will hopefully improve throughput. If you've got a ton of unique content references to process, they'll still be single-threaded, so the savings may not be amazing in that case. But you'll still see a throughput increase during the emailing portion. Eventually we will add content refs to the task thread queue as well, but that was a bigger change than we could get in this latest release.