I haven't tested this, but having a good idea of Tableau's architecture, I suspect that quick filters might actually be faster than Sets. Although a Set is a pre-filtered list of members, Tableau would still have to perform a SELECT DISTINCT against the relevant database fields in order to obtain the list of underlying members that support the Set.
Think of a Set like this:
- Identify the distinct members of the field, which takes time because every record in the table must be touched
- Build a list of these members
- From this list, identify which members exist within the Set
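The steps above can be sketched roughly like this, using sqlite3 as a stand-in database (the table, column, and Set names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("East", 10.0), ("West", 5.0), ("East", 7.5), ("North", 3.0), ("West", 2.0)],
)

# Steps 1 and 2: every record is touched to build the distinct member list.
members = [row[0] for row in conn.execute("SELECT DISTINCT region FROM sales")]

# Step 3: of those members, keep only the ones that belong to the Set.
set_definition = {"East", "West"}  # the hand-picked Set members
in_set = [m for m in members if m in set_definition]
print(sorted(in_set))  # ['East', 'West']
```

The point is that the expensive part, the full scan behind the SELECT DISTINCT, happens regardless of how small the Set itself is.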
Unfortunately, when it comes to filtering, your options are fairly limited: dynamic filters and hard-coded parameters.
More complex approaches can be used, such as inner-joining a summary table to the source table and using that as the source for your parameters, though this is where you begin to introduce unnecessary complexity.
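As a rough sketch of the summary-table idea (again with sqlite3 standing in, and with invented table names): the small summary table drives the parameter list, and the inner join restricts the big source table to only the rows the summary knows about.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE results (experiment_id INTEGER, value REAL);
CREATE TABLE experiment_summary (experiment_id INTEGER, label TEXT);
INSERT INTO results VALUES (1, 0.5), (1, 0.7), (2, 0.9), (3, 0.1);
INSERT INTO experiment_summary VALUES (1, 'baseline'), (2, 'variant A');
""")

# The inner join drops any source rows without a summary entry,
# so the workbook only ever sees the curated subset.
rows = conn.execute("""
    SELECT s.label, r.value
    FROM experiment_summary AS s
    INNER JOIN results AS r ON r.experiment_id = s.experiment_id
""").fetchall()
print(len(rows))  # 3 -- experiment 3 has no summary row and drops out
```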
Dear Steve Sir,
Thanks for your valuable thoughts. Actually, there is one scenario where I have to use 9 quick filters, but because of these filters my viz is taking a very long time to load. I can't even use a parameter in custom SQL, as there are 9 to 11 fields under one dimension that I'm using as quick filters. So I thought maybe a Set-based filter could help me?
Is there any other approach you would suggest that could solve my problem?
When you say load, do you mean in terms of first opening?
If that's the case, you can head this off by pre-filtering down to a smaller data set.
Just as an example: when Tableau loads a workbook, it will scan every record in the dataset to identify the distinct members, but if you can limit the data set, you limit this activity:
One of my smaller tables is around 400 million records: a beast of a dataset based on experiment results, made up of around 130 live tests. Naturally, scanning the complete table seriously affects performance. So, using custom SQL with a type-in parameter hooked into the experiment-id WHERE clause, I can have the workbook open and ready immediately, because Tableau cannot render against no data.
Once the user enters their experiment id, the result set of that experiment becomes the primary set. So instead of performing a search against 400M records, Tableau is performing its search against roughly 10M records instead (the tests vary in number of records); the result is that the process is much faster.
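In sketch form, the idea looks like this (sqlite3 standing in for the real database, and made-up table names). In Tableau the parameter would be spliced into the custom SQL itself, roughly `... WHERE experiment_id = <Parameters.Experiment Id>`; here a bound parameter plays the same role:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (experiment_id INTEGER, value REAL)")
# 5 experiments x 100 rows each = 500 rows in total.
conn.executemany(
    "INSERT INTO results VALUES (?, ?)",
    [(eid, float(v)) for eid in range(1, 6) for v in range(100)],
)

experiment_id = 3  # the value the user types into the parameter
subset = conn.execute(
    "SELECT value FROM results WHERE experiment_id = ?", (experiment_id,)
).fetchall()
print(len(subset))  # 100 of 500 rows -- the rest are never fetched or rendered
```

The same shape scales up: the 400M-row table only ever hands Tableau the one experiment's slice.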
To speed things up further (we are using Tableau Server), I have a summary table with the list of experiments and use a URL filter: when the user selects the experiment id, it opens the proper workbook, inserting the experiment id as it opens.
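For what it's worth, the URL in that approach looks roughly like the one built below. The server, workbook, and view names here are invented; the general mechanism is that Tableau Server accepts a field=value pair on the query string:

```python
from urllib.parse import quote

# Hypothetical server/workbook/view names for illustration only.
server = "http://tableau-server"
view_path = "views/ExperimentWorkbook/Results"
experiment_id = 42
url = f"{server}/{view_path}?{quote('Experiment Id')}={experiment_id}"
print(url)  # http://tableau-server/views/ExperimentWorkbook/Results?Experiment%20Id=42
```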
So, if there is an overall level your users might want to filter on initially, region or division for example, you could potentially improve performance significantly.
You could also crack open the Tableau logs (the data-server ones are those not prefixed "Log n.txt"), identify the generated SQL, paste it into your database tool, and use the explain-plan function to see whether you need to optimise the source.
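Every database exposes this differently (EXPLAIN PLAN in Oracle, EXPLAIN in most others); as a runnable stand-in, sqlite's EXPLAIN QUERY PLAN shows the same kind of information, namely whether the query scans the whole table or can use an index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (experiment_id INTEGER, value REAL)")
conn.execute("CREATE INDEX idx_experiment ON results (experiment_id)")

# The last column of each plan row describes the access path,
# e.g. "SEARCH results USING INDEX idx_experiment (experiment_id=?)".
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM results WHERE experiment_id = ?", (1,)
).fetchall()
for row in plan:
    print(row[-1])
```

If the plan shows a full scan where you expected an index search, that's your optimisation target.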
Thank you so much, sir. I will definitely follow your approach and will update you with my results.