It has been 2 years since I first wrote an article about SUM() vs SUMX(). I decided to write a new article to refresh this topic and add in the things I have learnt over the last 2 years.
SUM() vs SUMX()
First let me start with an overview of both of these functions.
- SUM() is an aggregator function. It adds up all the values in a column you specify in the current filter context. SUM() has no knowledge of the existence of rows – all it can do is add everything in the single column it is presented with.
- SUMX() is an iterator function. It works through a table, row by row to complete the evaluation in the current filter context. SUMX() has awareness of rows in a table, and hence can reference the intersection of each row with any columns in the table.
Both functions can end up giving you the same result (sometimes, but not always), but they arrive at the answer in very different ways. Before I move on, it is worth making sure you have a clear understanding of filter context.
Filter context simply means “the filters that are influencing the data model for this evaluation”. This is important because in DAX, the exact same formula can (and normally will) return a different result with a different filter context.
Filter context can come from:
- The visuals in your workbook or report. A Pivot Table if you are using Power Pivot for Excel, or any visual on the report canvas if you are using Power BI. This is called the Initial Filter Context.
- The use of a CALCULATE() function. CALCULATE() is the only function that can change the Initial Filter Context. Your DAX formula may or may not contain a CALCULATE() function, but if it does, that CALCULATE() can add to, remove from, or modify the initial filter context.
Note: Filter context can also come from an implicit CALCULATE() function. Every measure has a hidden CALCULATE() function wrapped around it that you can’t see. In certain circumstances (such as inside a row context), this hidden CALCULATE() function can give you a different result than you may otherwise expect. This is a complex topic and outside the scope of this article.
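To make this concrete, here is a small sketch of how CALCULATE() can change the initial filter context. The measure names are illustrative, and I am assuming the same Sales and Products tables used in the examples later in this article.

Total Sales = SUM(Sales[ExtendedAmount])

Bike Sales = CALCULATE([Total Sales], Products[Category] = "Bikes")

All Category Sales = CALCULATE([Total Sales], ALL(Products[Category]))

[Bike Sales] adds the filter Products[Category] = "Bikes" to whatever initial filter context comes from the visual, while [All Category Sales] uses ALL() to remove any filter on Products[Category] from the initial filter context.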
Filter context is important because it will affect the results you get from your formulas. Let’s briefly look at how initial filter context works in Excel and Power BI.
Power Pivot for Excel
In the following Pivot Table example, the highlighted cell has an initial filter context of Products[Category] = “Bikes” coming from the rows of the pivot table (shown as 1). Initial filter context can also come from Filters, Columns and Slicers in a pivot table.
Power BI Desktop
Power BI Desktop is similar to Excel, but the initial filter context can come from almost anywhere in the report. The highlighted cell below is filtered by Products[Category]=”Bikes” (shown as 1) just like in Excel above, but there is also a cross filter coming from Territory[Country]=”Australia” (shown as 2). Both of these filters are part of the initial filter context. Filters can also come from Columns, Slicers, and the filter section on the right hand side of the Power BI report window.
OK, now it is time to look more in depth at SUM and SUMX, one at a time.
The SUM() Function
Syntax: = SUM(<Column Name>)
Example: Total Sales = SUM(Sales[ExtendedAmount])
The SUM() function operates over a single column of data to aggregate all the data in that single column in the current filter context. The filter context is applied first, and the SUM() works over the column after the filter has been applied. Hopefully you can see now why the review of filter context is so important.
The SUMX() Function
Syntax: = SUMX(<Table>, <expression> )
Example: Total Sales SUMX = SUMX(Sales,Sales[Qty] * Sales[Price Per Unit])
SUMX() iterates through the table specified in the first parameter, one row at a time, and evaluates the expression specified in the second parameter (eg Quantity x Price Per Unit in the example above) for each row in the current filter context. Once it has done this for every row in the specified table, it adds up the results of all of the row by row calculations, and it is this total that is returned as the result.
The only reason SUMX() can complete calculations this way is because of the existence of a row context. Row context can be a confusing term to beginners, but if you think about it, the logic is obvious. If I said “multiply column A by column B”, how would you do that? The only way is to multiply “one row at a time”. That is all row context means.
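One way to visualise row context (as a sketch, assuming the Sales table from the example above and an illustrative column name) is to compare SUMX() with a calculated column:

Sales[Line Total] = Sales[Qty] * Sales[Price Per Unit]

Total Sales via Column = SUM(Sales[Line Total])

The calculated column is evaluated one row at a time in a row context and the results are stored in the model; SUM() then adds up the stored values. SUMX() performs the same row by row evaluation on the fly at query time, without storing an intermediate column.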
Which One Should I Use?
Which you use really depends on your personal preference and the structure of your data. Let’s look at a couple of examples.
- Quantity and Price Per Unit
- Extended Amount
- Totals Don’t Add Up
1. Quantity and Price Per Unit
If your Sales table contains a column for Quantity and another column for Price Per Unit (as shown above), then you must multiply Quantity by Price Per Unit in order to get Total Sales. It is no good adding up the total quantity with SUM(Quantity) and multiplying it by the average price with AVERAGE(Price Per Unit), as this will give the wrong answer.
If your data is structured in this way (like the image above), then you simply must use SUMX() – this is what the iterator functions were designed to do. Here is what the formula would look like.
Total Sales 1 =SUMX(Sales,Sales[Qty] * Sales[Price Per Unit])
You can always spot an iterator function: it takes a table as its first input parameter. This is the table that the function iterates over.
2. Extended Amount
If your data contains a single column with the Extended Total Sales for that line item (ie it doesn’t have quantity and price per unit), then you can use SUM() to add up the values.
Total Sales 2 =SUM(Sales[Total Sales])
There is no need for an iterator in this example because it is just a simple aggregation over a single column. Note however that you could still use SUMX() (as shown below) and it will give you the same answer.
Total Sales 2 alternate = SUMX(Sales, Sales[Total Sales])
Despite what your intuition may tell you, this alternate formula using SUMX() is identical in performance and efficiency to the SUM() version. More on that below.
3. Totals Don’t Add Up
There is another, less obvious use case where you simply must use an iterator like SUMX() to solve the problem. I have created a small table of sample data to explain it.
The table above shows 4 customers, the average amount of money each spends per shopping visit, and the number of times they have been shopping. If I load this data into Power BI and then try to use aggregating functions to find the average spend across all customers as well as the total amount spent, I get the wrong answers (as shown below).
Here are the measures from above.
Total Number of Visits = SUM(VisitData[Number of Visits])
Avg Spent per visit Wrong = AVERAGE(VisitData[Spend per Visit])
Total Spent Wrong = [Avg Spent per visit Wrong] * [Total Number of Visits]
The first measure [Total Number of Visits] is correct because the data is additive, but the other 2 measures give the wrong result. This is a classic situation where multiplying an average by a total at the grand total level does not give the correct answer. Given the sample data I started with, the only way to calculate the correct answer is to complete a row by row evaluation for each customer in the table, as shown below.
In this second table above I have written a SUMX() to create the Total Spent SUMX (row by row) first. Only then do I calculate the average spend per visit as the final formula.
Total Number of Visits = SUM(VisitData[Number of Visits])
Total Spent SUMX = SUMX(VisitData,VisitData[Spend per Visit] * VisitData[Number of Visits])
Avg Spent per visit Correct = DIVIDE([Total Spent SUMX] , [Total Number of Visits])
In this second case, SUMX is working through the table of data and is correctly calculating the result, even for the Total row at the bottom of the table.
Preferring the Storage Engine
The last thing I want to talk about is the performance implications of using SUM vs SUMX. Given that SUMX is an iterator, you may think that SUMX is inherently inefficient. Generally speaking this is not true as the software has been optimised to handle the scenario efficiently. Having said that, bad DAX can definitely cause SUMX to be inefficient. Let me explain.
Power Pivot has 2 calculation engines, the Storage Engine (SE) and the Formula Engine (FE). The SE is faster, multi threaded and cached. The FE is slower, single threaded and not cached. This is a complex topic in its own right and I will only scratch the surface in this article, however the implication is that you should write your formulas to leverage the SE where possible. Of course this can be hard if you don’t know exactly how to do this, but there are a few simple tips that will help you.
- SUM() always uses the SE for its calculations, so there is nothing to worry about there.
- For most simple calculations (like Sales[Qty] multiplied by Sales[Price Per Unit]), SUMX() will also use the SE, so all good there.
- In some circumstances SUMX() may use the FE to do some or all of the calculation, particularly if you have a complex comparison statement in your formula. If SUMX needs to use the FE, then performance can be slow – sometimes very slow.
Regarding the third point, the best advice I can give you is to avoid writing complex conditional statements such as IF statements within a SUMX function. Consider the following 2 formulas:
Total Sales of Items more than $100 Bad = SUMX(Sales, if(Sales[ExtendedAmount] > 100, Sales[ExtendedAmount]) )
Total Sales of Items more than $100 Good = CALCULATE( SUMX(Sales,Sales[ExtendedAmount]), Sales[ExtendedAmount] > 100 )
The first formula (Bad) has an IF statement within a SUMX. This IF statement forces the Storage Engine to pass the evaluation task over to the Formula Engine for a comparison check to see if each individual row is greater than 100 before deciding to include it or not in the calculation. As a result the formula engine must complete the task one row at a time making the evaluation slow and inefficient.
The second formula (Good) first modifies the initial filter context coming from the visual by adding a new filter on Sales[ExtendedAmount] > 100. This new filter is applied very efficiently by the Storage Engine. After CALCULATE modifies the filter context, SUMX can then do its job of adding up the remaining rows “in the current filter context” using the storage engine, not the formula engine. As a result this second formula is very efficient. In some simple testing I completed, the first (bad) formula took 5x longer to complete than the second (good) formula. In other scenarios it could be hundreds or even thousands of times slower, so this clearly could be a problem.
If you are in doubt as to which engine is actually being used to execute your formulas, the only sure way to check and confirm is to use a profiling tool such as DAX Studio or SQL Profiler to check what is happening under the hood. Once again this is a complex topic beyond the scope of this article. I may come back and do an article on this topic another day, but until then here is an introduction to DAX Studio and how it can be used for this purpose.
Compression Impacts on Performance
The second area that can impact performance is overall data model compression. The more unique values that exist in a column in the data model, the less compressed the data will be. The less compressed the data, the more memory that is required and potentially the slower the calculations will be. Let’s take another look at the tables from earlier in this article.
Example Table 1
The column of data Sales[Total Sales] has all unique values. This column would not compress well.
Example Table 2
In this table there are duplicate values in the Qty column and also in the Price Per Unit column. The fewer unique values in a column as a percentage of row count, the better the compression.
Now of course these 2 sample tables are very small, but imagine the impact of this concept on very large tables (eg tables with millions of rows of data). For very large tables, the number of unique values in example table 1 can be significantly larger than the number of unique values in the columns in example table 2. Loading data as outlined in example 2 could therefore have positive impacts on total table size and hence on the performance of your data model. Making this change may of course mean that you have to swap your measures from SUM(Sales[Total Sales]) to
SUMX(Sales,Sales[Qty] * Sales[Price Per Unit])
however this usage of SUMX instead of SUM is perfectly fine and highly performant. It is impossible to say what impact one vs the other will have overall; it depends entirely on your data. But one thing I would advise is that you should not load all 3 of the columns as shown in this simple example, i.e. don't load Qty, Price Per Unit, AND Total Sales. You only need 2 of these columns loaded because you can always derive the value in the third column from the other 2. So if you are going to use Qty and Price Per Unit, then don't load Total Sales. If you are going to use SUM(Sales[Total Sales]), then don't load both Qty and Price Per Unit. I recommend you load the 2 columns you will use the most and derive the third one as needed, on demand.
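As a sketch of that last recommendation (the measure names here are illustrative), if you load only Qty and Price Per Unit, anything you would otherwise have read from a Total Sales column can be derived in a measure instead:

Total Sales = SUMX(Sales, Sales[Qty] * Sales[Price Per Unit])

Avg Price Per Unit = DIVIDE([Total Sales], SUM(Sales[Qty]))

Note that [Avg Price Per Unit] is correctly weighted by quantity, which a simple AVERAGE() over a price column would not be.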
As always, there are exceptions to the rules. Everyone’s data is different, so test out the techniques on your own data and see what gives you the best results. If your data models are small and fast then it probably doesn’t matter. If your data models start to get large and slow, then it is time to investigate the best options to try to maximise performance.