Hi Kelvin. The only ways to refresh an extract are to do a full refresh of all the data included in the extract, or to incrementally pick up where you left off. That said, there are scripting workarounds to do what you need. What you have to do is create an initial version of the extract up through the date that the data is static (i.e., October 2018). We term this the "immutable extract" because it doesn't change; it just acts as the starting point. Then each time you need to refresh the data through to current, you take this "immutable" copy, publish it as the "incremental" extract, and run an incremental refresh on it. The net effect is that the refresh starts from the end of the immutable extract and captures all the up-to-date data, and all your reports point at the "incremental" extract. Does this make sense? There's a really old thread that goes over the concept in a little more detail, and most of it still applies: Refreshing Large Extracts Faster
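The immutable/incremental pattern above can be sketched in plain Python. This is only a model of the data flow, not Tableau itself; the table, column names, and dates are made up for illustration, with October 2018 standing in for the end of the static data as in the thread:

```python
from datetime import date

# Hypothetical source rows: (record_date, value). In Tableau this would be
# the underlying table the extract is built from.
SOURCE = [
    (date(2018, 9, 15), 100),
    (date(2018, 10, 31), 200),  # last row of the "static" data
    (date(2018, 11, 5), 300),   # new data that arrives later
    (date(2018, 12, 1), 400),
]

CUTOFF = date(2018, 10, 31)  # end of the static data (i.e. October 2018)

def build_immutable(source, cutoff):
    """One-time full refresh, but only up through the cutoff date:
    this is the 'immutable extract' that never changes."""
    return [row for row in source if row[0] <= cutoff]

def incremental_refresh(extract, source):
    """Append only rows newer than the last row already in the extract,
    mimicking Tableau's incremental refresh keyed on a date column."""
    last_loaded = max(row[0] for row in extract)
    return extract + [row for row in source if row[0] > last_loaded]

# Built once, never refreshed again:
immutable = build_immutable(SOURCE, CUTOFF)

# Each cycle: republish a copy of the immutable as the "incremental"
# extract, then run an incremental refresh on it:
incremental = incremental_refresh(list(immutable), SOURCE)
```

Because the incremental extract always starts from a fresh copy of the immutable one, the refresh only ever has to pull rows dated after the cutoff, which is the whole point of the workaround.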
hey Jeff Strauss
Good stuff! Thank you so much!
Could you please elaborate a little bit more here:
"What you have to do is create an initial version of the extract up through the date that the data is static (i.e., October 2018)."
How do I do this actually?
Sure. The way we do this is by adding a data source filter, "Extract inclusion filter", set to true. The first time we run it, we set the parameter "ExtractType" to "F" and the parameter "CutOffDate" to the end of the static data. Then, after publishing it each night as the incremental version (the starting point), we change the "ExtractType" parameter value to "I".
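The post doesn't show the calculation behind the "Extract inclusion filter", so here is one plausible reading of it, modeled in Python. The parameter names follow the post, but the exact comparison logic is an assumption, not the poster's verbatim Tableau calculation:

```python
from datetime import date

def extract_inclusion_filter(row_date, extract_type, cutoff_date):
    """Assumed logic of the 'Extract inclusion filter' calculation:
    - ExtractType = "F" (full): keep only the static rows up through
      CutOffDate, producing the immutable extract.
    - ExtractType = "I" (incremental): keep rows after CutOffDate, so the
      incremental refresh picks up where the immutable left off."""
    if extract_type == "F":
        return row_date <= cutoff_date
    return row_date > cutoff_date
```

In Tableau this would likely be a boolean calculated field along the lines of `IF [ExtractType] = "F" THEN [Date] <= [CutOffDate] ELSE [Date] > [CutOffDate] END`, kept on the data source as a filter set to True.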
Hey Jeff, this is awesome. I haven't had a chance to try it yet, but I'll do it first thing tomorrow when I get to the office. I think it will work. Thanks a lot!