There are multiple steps to any import. The common ones are: reading the source database's metadata, creating the extract, running the SQL against the database, and importing the data. I believe all of this stays the same with Hyper. But in pre-Hyper versions, creating an extract involved additional steps, such as materializing calculations and compressing the columnar data store. Hyper doesn't need to do this because it works differently internally: once it imports the data, it's done.
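For what it's worth, here's a minimal sketch of that "import the data and you're done" flow using the Tableau Hyper API in Python. The file name, table layout, and rows below are placeholders standing in for your actual SQL result set, not anything from a real source:

```python
# Minimal sketch: build a .hyper extract directly with the Hyper API.
# "orders.hyper", the Extract/Extract table, and source_rows are placeholders.
from tableauhyperapi import (
    HyperProcess, Telemetry, Connection, CreateMode,
    TableDefinition, TableName, SchemaName, SqlType, Inserter,
)

# Stand-in for the rows returned by running your SQL on the source db.
source_rows = [(1, "Alice"), (2, "Bob")]

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint,
                    database="orders.hyper",
                    create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
        connection.catalog.create_schema(SchemaName("Extract"))
        table = TableDefinition(
            table_name=TableName("Extract", "Extract"),
            columns=[
                TableDefinition.Column("id", SqlType.int()),
                TableDefinition.Column("name", SqlType.text()),
            ],
        )
        connection.catalog.create_table(table)
        # Write the rows; once this finishes, the extract is ready.
        # There is no separate materialize/compress pass as with the old TDE format.
        with Inserter(connection, table) as inserter:
            inserter.add_rows(rows=source_rows)
            inserter.execute()
```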
I will read up more on the back-end logic of extract creation, and on Hyper itself.
Another quick question:
Let's say I work with my Tableau Admin teammate and convince him to go the script route, with an updated script for .hyper extracts. Would I still run into the scheduling problem you mentioned in one of your comments on the blog?
Your comment on that blog read: "I have an external job scheduler that will trigger the refreshextract as a task. The problem with the workaround solution at this point is that the tabcmd publish unschedules it from the designated schedule. Is there a way that you know of to publish but leave the schedule as is or attach onto a given schedule?"
That blog is really old now. Can you provide me with a link for context?
As for keeping the schedule upon publish: the schedule is retained.
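If it helps, here's a rough sketch of what that script route could look like when driven by an external scheduler. The server URL, credentials, and datasource/project names are placeholders, and in practice you would keep credentials out of the script itself:

```python
# Rough sketch of the "script route": an external scheduler runs this,
# assuming tabcmd is installed and on PATH. All names below are placeholders.
import subprocess

def tabcmd(*args: str) -> None:
    """Run a tabcmd command and raise if it fails."""
    subprocess.run(["tabcmd", *args], check=True)

tabcmd("login", "-s", "https://tableau.example.com",
       "-u", "admin_user", "-p", "secret")

# Publish the freshly built .hyper file over the existing datasource.
# Per the discussion above, the datasource's refresh schedule is retained.
tabcmd("publish", "orders.hyper",
       "-n", "Orders", "-r", "Analytics", "--overwrite")

# Optionally trigger a server-side refresh as a scheduled task.
tabcmd("refreshextracts", "--datasource", "Orders")

tabcmd("logout")
```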
This is the link I was referring to: (Refreshing Large Extracts Faster). Do you have anything like this written up or blogged? It would be really helpful to see how you have implemented it in your org; even a high-level overview of your step-by-step approach within your organisation's architecture would be really insightful.