While the second, larger extract is attempting to run with Hyper, can you open Task Manager and have a look at memory?
My thinking is that memory is maxing out because of the number of columns. If that's the case, you have two options: increase memory (the recommendation is 8 GB per core), or decrease the number of columns needed in the extract (which can be done easily by hiding unused fields).
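As a quick sanity check on the sizing guidance above, here is a minimal sketch of that arithmetic. The function name and the per-core figure's default are my own illustration; the 8 GB per core number is the recommendation cited above.

```python
def recommended_ram_gb(cores, gb_per_core=8):
    """Rough RAM sizing per the ~8 GB per core guidance (hypothetical helper)."""
    return cores * gb_per_core

# A typical 4-core Tableau Server node under this rule of thumb:
print(recommended_ram_gb(4))  # 32
```

So a 4-core node would line up with the 32 GB figure discussed later in this thread.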
For both .tde and .hyper extracts, up to 98% of RAM is being utilized on the servers.
Version 10.3 still manages to complete the refresh cycle with correct data; 2018.1.* just times out.
2018.1 is running with Hyper and likely requires more RAM. Do you have the option of upping the RAM to 32 GB?