I believe I am running into performance issues by using IIF to filter out zero values in my fact table. The statement I am using is: IIF(Measures.[Test Scores] > 0, Measures.[Test Scores], NULL). Unfortunately, this calculated measure is referenced quite a bit, and I believe this is causing major performance issues once users start to drill down into the lower levels of the hierarchy.
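For reference, here is roughly what that calculated member would look like in the cube's MDX script; the member name [Test Scores NonZero] is just a placeholder, not necessarily what the cube actually uses:

    CREATE MEMBER CURRENTCUBE.[Measures].[Test Scores NonZero] AS
        // Return NULL instead of zero so empty cells can be eliminated
        IIF([Measures].[Test Scores] > 0, [Measures].[Test Scores], NULL),
        VISIBLE = 1;

Because IIF is evaluated cell by cell, every cell at every drilldown level pays the cost of the test, which is likely why the slowdown grows worse as users go deeper into the hierarchy.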
I originally tried placing a filter on a named set I was using, but it was not filtering out the measures with a zero value. The statement I was using was: FILTER([Course Score Tests], Measures.[Test Scores] > 0), where [Course Score Tests] is a named set that I am referencing. Unfortunately, this was not removing the zero test scores when I referenced the newly filtered named set in one of my calculated measures to try to bypass the IIF statement. I went back to using the IIF statement in the calculated measure, and that works, but I am seeing major performance issues: I am currently unable to drill down to the lowest level of the hierarchy, and memory on the server is taking major hits.
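One plausible explanation for the filter not working: in SSAS 2005, a named set created with CREATE SET is static, meaning it is evaluated once when the cube script runs, in the default context, rather than per query. A setup along these lines (the set and member names below are made up for illustration) would therefore only remove members whose score is zero at the top-level context, not in the context of whatever the user is currently browsing:

    CREATE SET CURRENTCUBE.[Course Score Tests NonZero] AS
        // Evaluated once at cube load, in the default context
        FILTER([Course Score Tests], [Measures].[Test Scores] > 0);

    CREATE MEMBER CURRENTCUBE.[Measures].[NonZero Score Total] AS
        // Sums over the statically filtered set
        SUM([Course Score Tests NonZero], [Measures].[Test Scores]);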
At this point I am at a loss and unsure how to rectify the situation. Any advice would be greatly appreciated. I am using SSAS 2005 w/ SP1.
Is [Test Scores] a calculated measure or a real measure? If it is a calculated measure, what is the expression for it? If it is a real measure, can you define it as nullable? Then you won't need a calculated measure to do the conversion.

Thanks for your reply. [Test Scores] is a real measure, not a calculated measure. I have actual zero values in the fact table that I want to filter out for a particular calculated measure, but not necessarily permanently. Are you referring to changing the NullProcessing on the measure from Automatic to ZeroOrBlank?
In that case, you may consider creating a calculated column in the DSV that converts zeros to NULLs, and creating two different measures - one with zeros and another one with NULLs. It would add overhead during processing, of course, and the cube will be somewhat bigger, but at least it should solve the performance problem.
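For what it's worth, the named calculation in the DSV would just be a SQL expression over the fact table; something like the following, assuming the underlying column is called TestScore (adjust to your actual column name):

    -- NULLIF returns NULL when the two arguments are equal,
    -- so zero scores become NULLs in the new column
    NULLIF(TestScore, 0)

or, equivalently:

    CASE WHEN TestScore = 0 THEN NULL ELSE TestScore END

You would then bind the NULL-producing measure to this new column and leave the original measure bound to TestScore.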