It sounds like you essentially want to include values that are not present in your underlying data in a calculation. This subject is known as Domain Completion or Domain Padding.
It's not the easiest thing to learn--I still struggle with it myself--but there is plenty of content on the forums about it.
I know it was the topic of a Think Data Thursday not too long ago...
In my opinion, the simplest solution is to force a value for every day into your data using custom SQL. We maintain a table containing every day since our company started (including future dates through the next year), and we regularly join our data pulls to it. That way, even a day with no activity gets at least one record, with no values, included in the data.
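The date-scaffold join described above can be sketched with Python's sqlite3 module; the table and column names here are hypothetical, just to show the shape of the query:

```python
import sqlite3

# Minimal sketch of a date-scaffold join: every calendar day gets a row,
# even days with no sales. Table/column names are made up for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date (date_sold TEXT PRIMARY KEY);
    CREATE TABLE sales (date_sold TEXT, state TEXT, product TEXT);
    INSERT INTO dim_date VALUES ('1-1-13'), ('1-2-13'), ('1-3-13');
    INSERT INTO sales VALUES ('1-1-13', 'NY', 'A'), ('1-3-13', 'NY', 'A');
""")
rows = con.execute("""
    SELECT d.date_sold, s.state, s.product
    FROM dim_date d
    LEFT JOIN sales s ON s.date_sold = d.date_sold
""").fetchall()
# 1-2-13 appears with NULL state/product, so the empty day is never dropped.
```

Because the date table drives the left join, 1-2-13 survives as a row of NULLs rather than disappearing from the result.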
Would you then need a secondary calculation (to replace "Number of Records") that corrects for the padded data?
I thought about doing that on the back end, but I didn't see a clean way to count records once I padded each type of data.
If my confusion isn't clear, here is an example:
For the data below, I would like to look at the average volume per day by state and product.
Date Sold   State   Product
1-1-13      NY      A
1-1-13      CT      A
1-3-13      NY      A
1-4-13      NY      C
1-5-13      NY      A
1-5-13      CT      B
To pad it out completely, I would get the following table (truncated for space):
Date Sold   State   Product   Padding
1-1-13      NY      A         No
1-1-13      CT      A         No
1-1-13      NY      B         Yes
1-1-13      NY      C         Yes
1-1-13      CT      B         Yes
1-1-13      CT      C         Yes
1-2-13      NY      A         Yes
1-2-13      NY      B         Yes
1-2-13      NY      C         Yes
1-2-13      CT      A         Yes
1-2-13      CT      B         Yes
1-2-13      CT      C         Yes
This would cause both a programming and a storage nightmare, particularly with data sets of dozens of columns. But ignoring that issue entirely, how would I be able to count the transaction rate at this point? Is the only way to do it to have an identifying column telling me that the value was added for padding and should not be counted? Wouldn't excluding those rows put me back in the same place I was without the padding?
Generally, when completing a domain, you would cross join all possible dimension combinations and leave the measures NULL. (I prefer NULL over 0 so that the padded rows are omitted from aggregations like SUM and AVG.)
So if you have 5 dates, 2 states, and 3 products, you produce the Cartesian product and generate 30 placeholder records. Then left join to your actual data: you will still get 30 records, but only those with data will have non-NULL measures, so the NULLs can easily be filtered back out if you don't want the padded rows.
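A minimal Python sketch of that cross-join-then-left-join idea, using the thread's 5-date, 2-state, 3-product example (the `units` measure is a made-up stand-in for whatever you are aggregating):

```python
from itertools import product
from statistics import mean

# The thread's example domain: 5 dates x 2 states x 3 products.
dates = ["1-1-13", "1-2-13", "1-3-13", "1-4-13", "1-5-13"]
states = ["NY", "CT"]
products = ["A", "B", "C"]

# Actual transactions, keyed by (date, state, product) -> units sold.
actual = {
    ("1-1-13", "NY", "A"): 1,
    ("1-1-13", "CT", "A"): 1,
    ("1-3-13", "NY", "A"): 1,
    ("1-4-13", "NY", "C"): 1,
    ("1-5-13", "NY", "A"): 1,
    ("1-5-13", "CT", "B"): 1,
}

# Cartesian product: 5 * 2 * 3 = 30 scaffold rows. The "left join" is the
# dict lookup, which leaves None (NULL) where no real record exists.
scaffold = [
    {"date": d, "state": s, "product": p, "units": actual.get((d, s, p))}
    for d, s, p in product(dates, states, products)
]

# NULL measures drop out of aggregates naturally: average over non-NULL only.
real_units = [r["units"] for r in scaffold if r["units"] is not None]
avg = mean(real_units)
```

The padded rows never need a "Padding" flag column: the NULL measure itself marks them, and filtering on `units is not None` recovers the original six transactions.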
Does that help?
That answers the question I had.
However, I would like to return to the issues I initially ignored. I can get more storage space, so we can set that piece aside. But because I really have years' worth of dates specific to the hour, and about 30 data fields each with at least 8 categories, the Cartesian product would be on the order of 8,000,000 rows and growing daily. Wouldn't that much padding cripple the speed?
Tableau has handled 30M+ record data sets for us before.
If you create an extract, it shouldn't be too bad. The only thing I can say is try it and see. It does depend on how complex you make your Tableau views. If you have a lot of parameter-based calculated fields, that might not be so snappy, but if you're just doing reading and rendering, it shouldn't be crippling, as you put it.
Additionally, you don't need to store the exploded table in your source system--just create a dimension table for each field (your date-time to the hour will be the largest) and cross join them in custom SQL as your Tableau source.
The cross join becomes a "scaffold" upon which you attach real results data via a left join.
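The scaffold-from-dimension-tables idea above can be sketched in SQL (shown here via Python's sqlite3; table and column names are hypothetical):

```python
import sqlite3

# Sketch: small dimension tables cross-joined on the fly, then left-joined
# to the fact table, so the exploded table is never stored anywhere.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date (date_sold TEXT);
    CREATE TABLE dim_state (state TEXT);
    CREATE TABLE dim_product (product TEXT);
    CREATE TABLE sales (date_sold TEXT, state TEXT, product TEXT, units INT);
    INSERT INTO dim_date VALUES ('1-1-13'), ('1-2-13');
    INSERT INTO dim_state VALUES ('NY'), ('CT');
    INSERT INTO dim_product VALUES ('A'), ('B'), ('C');
    INSERT INTO sales VALUES ('1-1-13', 'NY', 'A', 1), ('1-1-13', 'CT', 'A', 1);
""")
scaffold = con.execute("""
    SELECT d.date_sold, st.state, p.product, s.units
    FROM dim_date d
    CROSS JOIN dim_state st
    CROSS JOIN dim_product p
    LEFT JOIN sales s
      ON s.date_sold = d.date_sold
     AND s.state = st.state
     AND s.product = p.product
""").fetchall()
# 2 dates x 2 states x 3 products = 12 scaffold rows; only 2 carry real units.
```

A query like this, pasted into a Custom SQL connection, gives Tableau the fully padded domain while the source system only stores the small dimension tables and the real facts.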
Jonathan Drummey's blog has a couple of posts on this too, if you are looking for more info.
That sounds like a good methodology. Thank you.