The Extract API you mentioned is for creating extracts. You'll want to use the REST API for publishing and refreshing extracts.
Starting in Tableau 10.5, you can use the Tableau Extract API 2.0 to create .hyper extracts.
For other tasks that you previously performed using the Tableau SDK, such as publishing extracts, you can use the Tableau Server REST API or the Tableau Server Client (Python) library instead. You can also use the Tableau Server REST API to refresh extracts.
- For more information about the Tableau Extract API, see the Extract API 2.0 documentation.
- For more information about the .hyper format, see Extract Upgrade to .hyper Format.
- For more information about the Tableau Server REST API or the Tableau Server Client (Python) library, see the Tableau Server REST API documentation or the Tableau Server Client (Python) library documentation.
The REST API documentation includes links to sample code you can use as a starting point to build what you need.
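As a concrete illustration of the REST route, here is a minimal Python sketch that builds the "Refresh Data Source Now" request. The server address, API version, site ID, datasource ID, and auth token are all placeholders; you would obtain a real token from the Sign In endpoint first, and you should check the REST API reference for the exact request body your server version expects.

```python
# Sketch: queue an extract refresh via the Tableau Server REST API
# ("Refresh Data Source Now"). All identifiers below are placeholders.
import urllib.request


def refresh_request(server, api_version, site_id, datasource_id, auth_token):
    """Build the POST request that queues a refresh job for one datasource."""
    url = (f"{server}/api/{api_version}/sites/{site_id}"
           f"/datasources/{datasource_id}/refresh")
    return urllib.request.Request(
        url,
        # An empty tsRequest element; confirm against the REST API docs
        # for your server's API version.
        data=b"<tsRequest></tsRequest>",
        method="POST",
        headers={
            "X-Tableau-Auth": auth_token,       # token from the Sign In call
            "Content-Type": "application/xml",
        },
    )


# Usage (placeholders -- substitute your own values):
# req = refresh_request("https://tableau.example.com", "3.4",
#                       "site-luid", "datasource-luid", "token")
# urllib.request.urlopen(req)  # response body describes the queued job
```

The Tableau Server Client (Python) library wraps this same endpoint if you prefer not to build requests by hand.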
Hello Saket - are you just trying to run the extract refresh on a schedule? I run these daily using Windows Scheduler and the Tableau Refreshextract command in a batch file.
This works very well for our visualizations here in the community, such as the Ideas viz.
Let me know if you run into trouble, but I find this easier to work with. What's great is that my production server, which has no access to local data stores, is updated daily with the local data.
Thanks for the reply to my question.
I am currently exploring both the batch file and the API method to run extracts. Personally, I am more comfortable writing batch files, but I want to see how things work when we use the API. Using a batch file, migrating data sources from UAT to PROD is working perfectly for us, so that process is already in place.
Now I am looking for a way to refresh the data sources that are published to Tableau Server. If time permits I would like to use the API method for this; otherwise I will go with the batch file approach again.
Let me see if I can get the API method working for us.
Our Tableau Server has 8 backgrounder processes, and we have nearly 60 data sources published to the server.
Using a batch file, how do we queue the jobs so that 8 run in parallel, and as soon as any job finishes, the next job in the queue starts?
Basically, when I ran an extract refresh for one data source from CMD, it waited for that job to complete. So how do I kick off the remaining 7 jobs?
>tabcmd refreshextracts --project Sales --datasource Sales --synchronous (sample command that I ran)
Could you explain the process to me? I think I am missing something here.
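One way to get this behavior is to drive tabcmd from a small Python script with a worker pool of 8: the pool keeps 8 refreshes running and starts the next queued job as soon as any worker frees up. This is a sketch, not a tested production script; the project and datasource names are placeholders, and it assumes tabcmd is installed and you have already run `tabcmd login`.

```python
# Sketch: run up to 8 "tabcmd refreshextracts" jobs in parallel.
# Assumes a tabcmd session already exists (via "tabcmd login").
import subprocess
from concurrent.futures import ThreadPoolExecutor


def build_command(project, datasource):
    """Build the tabcmd command line for one synchronous extract refresh."""
    return ["tabcmd", "refreshextracts",
            "--project", project,
            "--datasource", datasource,
            "--synchronous"]


def refresh(job):
    """Run one refresh to completion and return its exit code."""
    project, datasource = job
    return subprocess.call(build_command(project, datasource))


def refresh_all(jobs, max_workers=8):
    """Queue every job; the pool runs 8 at a time (one per backgrounder)
    and starts the next job in the queue whenever one finishes."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(refresh, jobs))


# Usage (placeholder names -- list your ~60 published data sources here):
# jobs = [("Sales", "Sales"), ("Sales", "Orders"), ("Finance", "Budget")]
# exit_codes = refresh_all(jobs)
```

Because each `--synchronous` refresh blocks its own worker thread rather than the whole script, the other 7 slots stay busy, which is why a single CMD window appeared to "wait" in your test.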