You can do this with two sheets/dashboards. The user fills out the one that computes the results and writes them to a file or database, then switches to the second sheet once the load completes, which reads from that file or database. In some cases you may need to refresh the data in the second view to see the effects.
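As a sketch of the compute-and-write step, the script behind the first sheet could write its results into a small database table that the second sheet connects to. The table and column names below are illustrative assumptions, not anything Tableau requires:

```python
import sqlite3

def write_results(db_path, rows):
    """Write computed results to a table the second dashboard reads.
    Table/column names ('results', 'region', 'score') are assumptions."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS results (region TEXT, score REAL)"
    )
    con.execute("DELETE FROM results")  # replace the previous run's output
    con.executemany("INSERT INTO results VALUES (?, ?)", rows)
    con.commit()
    con.close()

# Example: write the computed values the second sheet will display
write_results("results.db", [("East", 0.91), ("West", 0.84)])
```

The second sheet would then point at `results.db` as its data source; a refresh there picks up whatever the last run wrote.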
Thanks for the response.
When will the script (Python or R or SQL) that performs the calculations be triggered? Is it triggered whenever the user changes the drop-down, or does it need to be triggered manually?
Can you please elaborate on how to trigger an external script (say, Python) when a drop-down on the Tableau dashboard is changed? Also, how do you pass the values selected in the drop-down to the Python script?
You can find out more about it on the TabPy documentation page.
You can also pass parameters as arguments, e.g. _arg1, _arg2, etc., as shown in the examples, but you will have to treat them as scalars as opposed to vectors, which would mean, for example, _arg3[0] instead of _arg3. That's the only difference. If the Python calculated field (a SCRIPT_ function) is being used in a view, changing any parameter that is passed as an argument to the SCRIPT function will automatically trigger a refresh. If the current selection of MyParameter is 'East' and you write a function like
SCRIPT_REAL("your python script here where you pass _arg1", MyParameter)
the value 'East' will be substituted for _arg1 in the script, and the view will refresh automatically.
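To make the scalar-vs-vector point concrete, here is a sketch of what the Python body of such a SCRIPT_REAL call sees. TabPy passes each _argN as a list with one entry per row in the partition, so a parameter arrives as a repeated list and you take element [0] to use it as a scalar. The function name and the multiply-by-a-factor logic are purely illustrative:

```python
def script_real_body(_arg1, _arg2):
    """Hypothetical body of a SCRIPT_REAL call.
    _arg1: a measure -- a list with one value per row.
    _arg2: a parameter -- broadcast to the same length, so take
    _arg2[0] to treat it as a scalar."""
    factor = _arg2[0]
    return [v * factor for v in _arg1]

# Tableau/TabPy would pass the parameter repeated per row, e.g.:
result = script_real_body([1.0, 2.0, 3.0], [2.5, 2.5, 2.5])
# result -> [2.5, 5.0, 7.5]
```

Note the function returns a list of the same length as the partition, which is what the SCRIPT_ functions expect back.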
Re: the second request for write-back and then refresh, I've tried this with R (the Python workflow is much the same) and the user experience (as a developer and as an interactor) is "sub-optimal", though I'm tempted to use a less polite word.
As an interactor, the challenge is that I do something (like enter a parameter value) and the view updates; however, due to Tableau's order of operations, the write-back from Python or R happens *after* the query to the data source, so to see the updated data I *must* do a second view refresh. In addition, Tableau aggressively caches data (and once in a while there's a bug there; for example, 10.0.0 had one), so in my experience that second view refresh isn't 100% guaranteed to work every time.
As a developer, I have to deal with the fact that *every* full view refresh, for whatever reason (changing a parameter or a quick filter, using a filter action, etc.), is going to call that Python or R script, and depending on the view it can be challenging to identify whether I need to write back data or not. In the proof of concept I built for write-back (it used a parameter and field values to add mark labels to a view), I just accepted that I was going to get a lot of replicated data and built the Tableau view to tolerate that; the anticipated volumes were small enough that I could live with it. This just wouldn't work in a larger deployment.

In addition, if I'm writing back to a CSV file using Tableau's default connector, then every time Tableau actually refreshes from the file it needs to rebuild the shadow extract, and with large files this can be a huge performance hit.
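One way to blunt the replicated-data problem described above is to make the write-back idempotent: before appending, check whether a row for that key has already been written, so repeated view refreshes calling the same script don't duplicate data. This is a minimal sketch assuming a CSV target and a single key column; the file layout and names are illustrative:

```python
import csv
import os

def write_back(path, key, row):
    """Append row to a CSV only if its key isn't already present,
    guarding against the duplicate writes that every full view
    refresh would otherwise cause. Assumes the key is column 0."""
    seen = set()
    if os.path.exists(path):
        with open(path, newline="") as f:
            seen = {r[0] for r in csv.reader(f) if r}
    if key in seen:
        return False  # already written by an earlier refresh
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)
    return True
```

A real deployment would more likely target a database with a unique constraint doing the same job, which also avoids the shadow-extract rebuild cost of large CSVs.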