In Prep, you'd need to split it by ; and then pivot. The problem you'll run into, however, is that the split will only break the value into 10 columns, which won't handle up to 100 different values. You could repeat the split a number of times, but that gets painful. That said, other than writing code, I can't think of any other great options here.
That's what I was afraid of. The splits and pivots were getting tedious and I was hoping there was some switch or setting to make it easier.
Looks like I'll have to code something up.
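For anyone landing here, "coding something up" can be fairly small. Here's a rough pandas sketch that mirrors Prep's split-then-pivot: split the packed field into columns, then melt the columns back into rows. The column names and sample data are hypothetical, not from the original poster's dataset:

```python
import pandas as pd

# Hypothetical sample: one record per row, values packed into one ;-separated field
df = pd.DataFrame({
    "id": [1, 2],
    "emails": ["a@x.com;b@x.com;c@x.com", "d@x.com"],
})

# Step 1 -- split into columns, like Prep's split (no 10-column ceiling here)
wide = pd.concat([df[["id"]], df["emails"].str.split(";", expand=True)], axis=1)

# Step 2 -- pivot the split columns back into rows, like Prep's pivot,
# dropping the empty cells left by shorter lists
long = (wide.melt(id_vars="id", value_name="email")
            .dropna(subset=["email"])
            .drop(columns="variable")
            .sort_values("id")
            .reset_index(drop=True))
print(long)
```

The two-step shape is deliberate so it matches what Prep is doing; `Series.str.split(expand=True)` plus `melt` handles any number of values per row.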
The split being limited to 10 columns is no longer the case in current versions of Tableau Prep. In a recent release (sometime around 2019.1, I think) they raised the limit for the automatic/All split to 150 columns. So something like this setup should get you all the email addresses.
Fantastic! Thanks Jonathan. I didn't realize that had changed.
I applied Jonathan's split and pivot for "all" and it worked, sort of.
When I do the split and pivot for 2 rows of data with ~100 items to split, the results matched a manual test. I call that good and in the right direction.
My next test was to push the split and pivot "all" to the thousands of rows I mentioned in my original post. The results were almost as good, but didn't line up. The two earlier rows (now part of a larger set of thousands of rows) were split and pivoted, but the output was different: splitting and pivoting the two rows on their own produced about 10% more rows than when those same two rows were part of the larger set.
When I compared the two, the differences became more obvious.
Did I cross some processing-power threshold? The split and pivot "all" for the two rows by themselves took seconds, but the 3,000+ rows took nearly 50 minutes to complete on my i7, 16 GB Mac. I was actually impressed that the job completed without frying my system, but less so after I noticed the data differences.
My data is proprietary, but if need be I can anonymize it for others to try. I just don't know if anyone wants to turn their system into a space heater for almost an hour to test.
In any case, does anyone know of a setting of some sort that might let this crank through my data with more accuracy?
This might be a bug in Prep...Prep is only about 1.5 years old, still under rapid development, and could also use performance tuning. In my opinion that operation on 3K rows should take somewhere between seconds and a couple of minutes, not 50 minutes, given that the result set would be at most ~300K rows (3K input rows * ~100 output rows each).
So I have three suggestions:
1) If you haven't already, try running the flow in full with an Output step to a CSV. When Prep does a full run it a) doesn't sample and b) doesn't do the live updates, and sometimes that can be a lot faster (and more accurate). I'm suggesting a CSV for maximum performance in Prep, so there's no overhead from the indexing & compression needed to build a Hyper or TDE extract.
2) Submit this to Tableau support so it can get logged.
2a) If you're up for some testing then try running the flow several times with 10, 50, 100, 500, 1000, etc. rows of input. I have a suspicion that what's going to happen is that the flow will run very fast and have no problems up until some # of rows and then the performance will nosedive (and potentially result in the bad data). That would be good feedback to give in the support case.
3) To get something that works right now, do the split & pivot outside of Prep. Free options that come to mind are Python, R, and KNIME (the latter is an open-source ETL tool that can do a lot more than Prep). I also use Alteryx, a commercial tool, which has a handy Split to Columns node that, with one click in the configuration, splits to rows instead.
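As a concrete sketch of option 3, this stdlib-only Python version streams a wide CSV into a long one, writing one output row per value. It assumes a file and column layout (contacts.csv with id and emails fields) that are illustrative, not the original data:

```python
import csv

# Hypothetical input file: an id plus one ;-separated field of up to ~100 values
with open("contacts.csv", "w", newline="") as f:
    f.write("id,emails\n1,a@x.com;b@x.com\n2,c@x.com\n")

# Stream the wide file into a long file -- the same result as Prep's
# split + pivot, but row by row with only the standard library
with open("contacts.csv", newline="") as src, \
     open("contacts_long.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    writer.writerow(["id", "email"])
    for row in reader:
        for email in row["emails"].split(";"):
            if email.strip():
                writer.writerow([row["id"], email.strip()])
```

Because it streams one row at a time, memory stays flat no matter how many input rows there are, which is useful given the performance questions above.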
Prep v2019.3 (currently in beta) has a new script step supporting Python & R; for me, a major use case for this feature will be working around Prep's present limitations.
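For the scaling test in 2a, a throwaway generator like this one can produce inputs of increasing row counts, each row carrying ~100 ;-separated values. File names, column names, and value counts here are all made up for illustration:

```python
import csv

def make_test_csv(path, n_rows, n_values=100):
    """Write n_rows records, each packing n_values ;-separated emails into one field."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "emails"])
        for i in range(n_rows):
            packed = ";".join(f"user{i}_{j}@example.com" for j in range(n_values))
            writer.writerow([i, packed])

# One file per size step, so the flow can be timed at each point
for n in (10, 50, 100, 500, 1000, 3000):
    make_test_csv(f"split_test_{n}.csv", n)
```

Feeding each file through the same flow and noting where the runtime jumps (and whether the row counts stay correct) would give the support case concrete numbers.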
Quick updates on this:
1. Running the flow in full with an output to CSV does finish in a few minutes. It appears that the intermediate steps of the splits and pivots were the cause of the extra-long processing time on the samples.
2. I tried to create a similar flow on another dataset of similar (slightly larger) size, and the extra-long processing time kicks in as soon as I do any more cleaning on the dataset after my pivot. Prep essentially jumps into sample processing and shoots the CPU to 900%+ as it tries to work on the split on the newly pivoted data, and there is no way to stop it. I tried to immediately add an output-to-CSV step, but Prep is stuck in the split step; it won't let me open a dialog for the output step, it just sits and processes. I have to kill the process.
3. The data disconnect appears to occur based on the field you choose to pivot on. If you choose the wrong one, the pivot automatically reduces the size of the dataset. I have to look into the specifics more...
Thanks for the updates... I'm thinking you should submit this to Tableau support so it can get logged.