So one way to do this would be to use sets (we could also use FIXED LoDs, which would follow the same logic).
So I created a set from the Segment Dimension. I use "Conditional" and put in the formula
MAX(IIF(CONTAINS([Customer Name], "NEO"), 1, 0)) = 1
I can then use this set as a filter.
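For anyone who wants to sanity-check the logic outside Tableau, here is a rough pandas sketch of what the set condition does (the sample data and column names are my own assumptions, not from the workbook):

```python
import pandas as pd

# Invented sample data standing in for Super Store
df = pd.DataFrame({
    "Segment": ["Consumer", "Consumer", "Corporate", "Corporate"],
    "Customer Name": ["NEO Corp", "Acme", "Beta", "Gamma"],
})

# Row-level IIF: 1 if the customer name contains "NEO", else 0
df["has_neo"] = df["Customer Name"].str.contains("NEO").astype(int)

# MAX(...) = 1 per Segment means at least one matching customer
df["in_set"] = df.groupby("Segment")["has_neo"].transform("max") == 1

# "Filtering" on the set keeps EVERY row of a qualifying Segment,
# including customers that don't themselves match
filtered = df[df["in_set"]]
```

Note that "Acme" survives the filter because it shares a Segment with "NEO Corp" — that's exactly why the set is built at the Segment level rather than filtering customer names directly.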
Hopefully that makes sense, but let me know if not and I can explain a bit further.
Simon, thanks for the fast response!
I thought I could use a simple example and extrapolate, but I cannot. Attached is a more advanced version of what I am really after (should have posted this first).
Same scenario, but instead of being based on "Neo", it is actually based on a relative date. E.g. if ANY customer had an Order Date within the last X months, show me the segment they are in and EVERY Customer Name in it. Make sense?
All Value Example.twbx 1.2 MB
Here it is. You need to create this FILTER calc and set it to TRUE on the Filter shelf.
If it helps, please mark it Helpful and CORRECT to close the thread.
This is for "Neola"; substitute your choice of customer.
So we can use the same basic idea, but just change the logic in the formula:
MAX(
IF [Order Date] > DATEADD('month', -5, [Last Date of Data])
THEN 1 ELSE 0 END
) = 1
BTW [Last Date of Data] is just a formula that picks up the max date in the data (as Super Store doesn't run up to today), but you can switch this for TODAY() in your real-world situation.
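If it helps to see the idea outside Tableau, here's a hedged pandas sketch of the same relative-date logic (the sample rows and the 5-month window are assumptions):

```python
import pandas as pd

# Invented sample data
df = pd.DataFrame({
    "Segment": ["Consumer", "Consumer", "Corporate"],
    "Customer Name": ["Neola", "Acme", "Beta"],
    "Order Date": pd.to_datetime(["2017-11-15", "2016-01-10", "2016-03-01"]),
})

# Stand-in for [Last Date of Data]; swap for pd.Timestamp.today() in real use
last_date = df["Order Date"].max()
cutoff = last_date - pd.DateOffset(months=5)   # DATEADD('month', -5, ...)

# Row-level 1/0 flag, then MAX per Segment, exactly as in the set formula
df["recent"] = (df["Order Date"] > cutoff).astype(int)
df["in_set"] = df.groupby("Segment")["recent"].transform("max") == 1

filtered = df[df["in_set"]]   # keeps the whole Consumer segment
```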
Been trying this approach, but a set makes my data quite slow. About 15 minutes to make an update. Is there another approach outside of a set?
Yes 15 mins is definitely too long!
How many rows of data do you have? And what is your data source (live against a database, or a TDE/Hyper extract)?
Rows change depending on filtering. Live table, partitioned. Rows range from 5 million to 15 million depending on the filter selection.
Any ideas on how to approach this?
So the problem (performance-wise) with both FIXED LoDs and Sets is that they create sub-queries at the level specified and then left-join that sub-query back onto the main data (at that level). If you are running LIVE against .csv, Excel, or a low-spec database, this can be very slow. Both features were introduced for when you want to run a calculation at a different level of detail from the display level of detail. However, the same can be done with Table Calcs, which should be more efficient.
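To make the contrast concrete, here is a small pandas sketch (data invented) of the two query shapes: the sub-query-plus-left-join pattern that LODs and Sets generate, versus the single windowed pass a Table Calc can use. Both produce the same column; the second avoids the join:

```python
import pandas as pd

df = pd.DataFrame({
    "Segment": ["A", "A", "B"],
    "Sales": [10, 20, 5],
})

# LOD/Set style: aggregate at the Segment level in a sub-query,
# then left-join the result back onto every row of the main data
sub = (df.groupby("Segment", as_index=False)["Sales"].max()
         .rename(columns={"Sales": "seg_max"}))
joined = df.merge(sub, on="Segment", how="left")

# Table-calc style: one windowed pass over the already-queried rows,
# no second query and no join
df["seg_max"] = df.groupby("Segment")["Sales"].transform("max")
```

Same answer either way; the windowed version just does less work against the database, which is why it tends to be faster on large live connections.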
So first I hard-coded the "Last Date of Data" (computing it dynamically would add an extra LoD to the logic; if you can use TODAY() you won't need this step at all).
I created this row-level calculation
[Bought in Last 5 Months - Hard Coded Date 1/0]
IF [Order Date] > DATEADD('month',-5,#31/12/2017#) THEN 1 ELSE 0 END
and then used a Table Calc to determine whether the Segment had any qualifying member:
[Include Segment - TC]
WINDOW_MAX(MAX([Bought in Last 5 Months - Hard Coded Date 1/0]))=1
and set it up like this:
Hope that speeds things up a bit. If your data doesn't have to be fresh to the second, I'd also consider extracting to a .TDE or .Hyper, as these will be considerably faster than most databases.
Brilliant! Much easier than I expected. Thanks, Simon!