3 Replies Latest reply on Dec 5, 2018 8:30 AM by Ciara Brennan

    How to normalize ratio values for dynamic range

    Marcin Tymczak

      Hi guys,


      I've run into a problem I cannot resolve, and it concerns an important requirement in the reports I develop for my client.


      I've got a calculated measure called [Error Ratio] which is an aggregate:


      SUM([Error Count]) / SUM([Number of Records])


      This measure needs to be shown on a bubble chart as a color mark. The size of each bubble is the error count, and each bubble represents an operating system version. The dashboard itself is very interactive, with multiple quick and action filters.


      The distribution of error ratio values varies a lot depending on which filters you pick on the dashboard. There are many situations where the bubble chart fails to highlight the OS version that really needs attention, because:

      - there are some OS versions with an extremely high error ratio but a small number of errors, which are not that important

      - if I filter the data to a period when the overall error ratio and error count were high, the bubble chart turns all red (because the values for each OS version are also high) and it's really hard to distinguish them.


      I decided to normalize the values using the standard min-max calc: (x - min(x)) / (max(x) - min(x)), where x is the error ratio.

      The problem I face is that the error ratio is an aggregate. Another problem is dashboard interactivity - I cannot really use a FIXED calc here, because the dimension of the bubble chart stays the same (OS Version) while the data behind it is being filtered on multiple levels.
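For what it's worth, one approach worth trying (a sketch, not tested against a real workbook) is to do the min-max normalization with table calculations such as WINDOW_MIN and WINDOW_MAX. These accept aggregate expressions, and they are computed on the filtered result set, so the normalization rescales automatically as quick and action filters change:

```
// Normalized Error Ratio (table calculation)
// WINDOW_MIN / WINDOW_MAX run over the marks in the view after filtering,
// so the [0, 1] rescaling adapts to whatever the dashboard currently shows.
(
  SUM([Error Count]) / SUM([Number of Records])
  - WINDOW_MIN(SUM([Error Count]) / SUM([Number of Records]))
)
/
(
  WINDOW_MAX(SUM([Error Count]) / SUM([Number of Records]))
  - WINDOW_MIN(SUM([Error Count]) / SUM([Number of Records]))
)
```

You would set "Compute Using" to OS Version so the window spans all bubbles in the view. One caveat: when all visible OS versions have the same error ratio, the denominator is zero and the calc returns null, so you may want to wrap it in an IF or ZN guard.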


      I would post a workbook, which would probably be more helpful, but I'm bound by an NDA, and the reports are fed from a database server - anonymizing them is not an option.


      I would really appreciate your help!