4 Replies Latest reply on Apr 5, 2018 10:21 AM by michaels.kale

    Level of Detail filtering issues




      I've had an issue with LOD expressions for a while whenever I try to combine them with filtering, and I'm wondering if anyone has hit the same problem and knows a way around it - or if I'm simply approaching the problem incorrectly.


      I attached a small sample workbook. The issue comes about when I try to incorporate a total count with Country / State filtering.


      Count = sum([Count])

      Total Count All = {FIXED [Country], [State], [Type] : sum([Count])} (i.e. my attempt to get the sum([Count]) for the particular country, state, and type, regardless of what is filtered)

      % of Total = sum([Count]) / sum({FIXED [Country], [State], [Type] : sum([Count])})


      The initial state of the dashboard is correct. However - if I filter for Exempt, you'll see that PC shows a count of 60, a total of 60, and 100%.


      2018-04-05 10_20_54-Tableau - Book1.png

      In reality - PC Exempt should show 60 (exempt), 110 (total), and 54% of total.


      2018-04-05 10_22_41-TestData - Excel.png


      The reason it is not showing the extra 50 (USA, CA, PC, NonExempt) is that there is no "Exempt" row for USA, CA, PC. I thought this wouldn't matter, because I'm using the LOD expression precisely so the calculation ignores the Status filter (at least that was my intention).
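The mechanism can be sketched outside Tableau. Here is a minimal pandas analogy (the rows, and the WA state in particular, are assumptions, since I can't see the actual workbook): the FIXED total really is computed on the unfiltered data, but its result is attached to individual rows, and the Status filter then removes the only row carrying CA/PC's total.

```python
import pandas as pd

# Hypothetical rows standing in for the sample workbook; the post only names
# USA/CA/PC/NonExempt = 50, so the Exempt/WA row is an assumption.
df = pd.DataFrame({
    "Country": ["USA", "USA"],
    "State":   ["WA",  "CA"],
    "Type":    ["PC",  "PC"],
    "Status":  ["Exempt", "NonExempt"],
    "Count":   [60, 50],
})

# {FIXED [Country], [State], [Type] : SUM([Count])} is computed on the
# unfiltered data, like a groupby-transform...
df["TotalCountAll"] = df.groupby(["Country", "State", "Type"])["Count"].transform("sum")

# ...but the result rides along on individual rows. Filtering Status == Exempt
# removes the only row that carries the CA/PC total of 50:
view = df[df["Status"] == "Exempt"]
print(view["Count"].sum(), view["TotalCountAll"].sum())  # 60 60 -> shown as 100%
```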


      In fact - if I add a row for USA, CA, PC, Exempt with a NULL or 0, Tableau correctly picks up the NonExempt 50 and shows 110:


      2018-04-05 10_26_10-TestData - Excel.png


      2018-04-05 10_27_13-Tableau - Book1.png
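Continuing the same pandas analogy (same assumed rows as before), the zero-count padding row gives the filter something to keep, and the fixed total attached to it carries the 50 into the view:

```python
import pandas as pd

# Same hypothetical rows as before, plus the USA/CA/PC/Exempt padding row
df = pd.DataFrame({
    "Country": ["USA", "USA", "USA"],
    "State":   ["WA",  "CA",  "CA"],
    "Type":    ["PC",  "PC",  "PC"],
    "Status":  ["Exempt", "NonExempt", "Exempt"],  # extra Exempt row...
    "Count":   [60, 50, 0],                        # ...with Count = 0
})

# The fixed total per Country/State/Type now also lands on the new Exempt row,
# so it survives the Status filter and brings CA/PC's 50 into the view
df["TotalCountAll"] = df.groupby(["Country", "State", "Type"])["Count"].transform("sum")
view = df[df["Status"] == "Exempt"]
print(view["Count"].sum(), view["TotalCountAll"].sum())  # 60 110 -> 54% of total
```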


      To me this is a bit baffling, as it means the problem isn't that I've filtered for Exempt (the last example still grabs the NonExempt 50); the problem is that an Exempt/NonExempt row doesn't exist for every single Country/State/Type combination.


      In fact, the only way I've ever found to fix this is a Cartesian/cross join that inserts a bunch of fake rows with NULLs. But that isn't really practical, and I can't do it on large datasets where I could end up duplicating tens of thousands of rows.
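For reference, that padding approach can be sketched at data-prep time (again using the assumed sample rows, not the real workbook) by cross-joining the distinct dimension combinations with the distinct Status values and left-joining the real data back in:

```python
import pandas as pd

df = pd.DataFrame({
    "Country": ["USA", "USA"],
    "State":   ["WA",  "CA"],
    "Type":    ["PC",  "PC"],
    "Status":  ["Exempt", "NonExempt"],
    "Count":   [60, 50],
})

# Pair every Country/State/Type with every Status, then left-join the real
# rows; missing combinations come back as Count = 0 padding rows
dims = df[["Country", "State", "Type"]].drop_duplicates()
statuses = df[["Status"]].drop_duplicates()
scaffold = dims.merge(statuses, how="cross")  # needs pandas >= 1.2
padded = scaffold.merge(df, how="left").fillna({"Count": 0})
print(len(df), "->", len(padded))  # 2 -> 4; row count grows multiplicatively
```

This makes the blow-up explicit: the padded table is (distinct dimension combinations) x (distinct Status values) rows, which is exactly why it gets impractical on large datasets.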


      Am I going about this the wrong way - or is there a solution I haven't thought of to get around this problem?