Self Referencing Tables in Power Query - Excelerator BI


I have had this idea in my head for over a year, and today was the day that I tested a few scenarios until I got a working solution. Let me start with a problem description and then my solution.

Add Comments to a Bank Statement

The problem I was trying to solve was this: I download a digital copy of my bank statement, load it into Excel using Power Query, and then want to add some commentary to some of the transactions.  If you can just add the comments to the source file then everything is easy.  But it is often not practical to add the comments directly into the source file, for a number of reasons.  It could be that you get a new copy of the source data each time you refresh (e.g. you get a CSV file that replaces the old file, and the file just gets larger each month), or it could be that you are using a “combine multiple files” technique and you only want to deal with a summarised, cleansed version of the final data instead of dealing with the source files directly.

Once you have loaded the table in Power Query, it is possible to add a new column to the resulting table (shown below).  The problem with this approach is that the comments are not logically linked to the rows of data in the table.  Take the example below.  The data is loaded from table 1 using Power Query into table 2 on the right.  I then manually added a new comment column and added some comments (as shown below).


The problem that can happen is demonstrated below.  If the sort order of the source table changes and then you refresh the query, the comments no longer align with the original data.  This happens despite the original table having a unique ID column.


The problem is that the new column I manually added to the Power Query table on the right is not logically joined to the actual table; the comments sit in a column next to the table rather than being part of the rows in the main table.

Enter Self Referencing Tables

I read this blog post from Imke quite some time ago, and that was what gave me the idea on how I could solve this problem. The idea is to load table 2 above a second time (after adding the comments), and then join it back to itself, logically linking the manually added comments to the rows in the main table.

Note: Prior to completing the self referencing steps below, I used Power Query to create the second table from the source table. This was my starting point.

Then I loaded the resulting table 2 a second time as follows:

  1. Click anywhere in the second table
  2. Go to the Power Query menu
  3. Create a new query using “from table”.


I renamed this new query “Comments” and selected “Close and Load To” so that it only created a connection but didn’t load the new table to the worksheet.
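For reference, the M behind a connection-only staging query like this is just a single step that reads the output table back in. This is a sketch only – the worksheet table name (“Data”) is an assumption for illustration; use whatever name your output table actually has:

```powerquery
// Sketch of the connection-only "Comments" staging query.
// The worksheet table name ("Data") is assumed for illustration.
let
    Source = Excel.CurrentWorkbook(){[Name="Data"]}[Content]
in
    Source
```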


The next thing to do is to merge the original query with the new query.  To do this,

  1. Go back and edit the original query for table 2
  2. Add a new step to the query using “Merge Query”
  3. Merge the second “Comments” query with the original query by joining on the ID column as shown below.
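In M, the merge step generated by those clicks looks something like the sketch below. The step and column names here are assumptions based on this walkthrough; your auto-generated names may differ:

```powerquery
// Sketch: left outer join of the main query back to the "Comments"
// staging query on the shared ID column.
#"Merged Queries" = Table.NestedJoin(
    #"Changed Type",      // previous step of the main query
    {"ID"},               // key column in the main query
    Comments,             // the connection-only staging query
    {"ID"},               // key column in the Comments query
    "NewColumn",          // name of the new nested-table column
    JoinKind.LeftOuter    // keep every row of the main query
)
```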


This will give you a new column in your query, and the column contains a table (as shown below).  You can then expand this new column to extract the comments.


I extracted only the comments column, and deselected the last option as shown below. Note: it is important to deselect “Use original column name as prefix” so that the new column has the same name as the original source column.
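The expand step that results from those choices should look roughly like this in M (step and column names are assumptions from this walkthrough):

```powerquery
// Sketch: pull only the Comment column out of the nested table,
// keeping its original name (no prefix) so it matches the manual column.
#"Expanded NewColumn" = Table.ExpandTableColumn(
    #"Merged Queries", "NewColumn", {"Comment"}, {"Comment"})
```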

Click OK, then close and load.


When you look at the table now, it all looks the same as before (see below), but there is one important difference.  The last column in the Power Query table is now part of the table, not a manually added column.  You now have a self referencing table.  Edit 2017: if you get a new column called Comment2, just delete it – everything else will work. At some stage there was a change to the way Power Query works, and this duplicate column with “2” appended now appears.  But you can (and should) just delete it – it will then all work.


To prove this, I repeated the same sort test as before.  First I sorted the source table, then refreshed the Power Query table – you can see the results below.  Note how this time the comments stick with their rows in the table – sweet!


Incrementally Add Comments

Now that the queries are all set up, you can incrementally add comments to the Power Query table any time you like.  And you can also add new data to the source data – it will all work as you would expect as shown below.


Real Life Data

The above demo is of course test data.  A more realistic real life scenario would be to download a CSV file from your bank and use one of the many “combine multiple files” techniques to import the data.  It is likely that your source data is messy and contains lots of columns you don’t need. In this real life scenario, it is not easy to add comments to the source data unless you open each file and add the comments there – not ideal.  Another problem could be that your data source is a CSV file maintained with a “delete and replace with the new, larger file containing the latest records” method.  In both of these scenarios, you will need to keep your manually added comments somewhere other than the source table.  The solution provided here is a simple and intuitive way for the user to manage this (albeit with a little bit of setup first).

The example shown in this post will work just fine as long as two conditions are met.

  1. You have a unique ID for each record
  2. You don’t accidentally load duplicate data – so don’t do that!

But What if I Don’t Have an ID Column?

The next logical issue you may have is a scenario where you don’t have an ID column.  I tested quite a few scenarios, including adding an ID column inside the first Power Query step (add index column).  I haven’t been able to find a solution that works 100% of the time, but there is one that can work for most people.  If your data is such that each row in the table is unique (e.g. date, maybe a time column if you are lucky enough to have one, transaction amount, remaining balance, etc.), you could create a unique ID by concatenating these columns into a single unique column.  This should solve the issue for most people in non-commercial scenarios (those in commercial scenarios are more likely to have a reference ID anyway).  You could even create an audit query that checks for duplicates and warns you if this occurs, but that sounds like another blog post.
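As a sketch of the concatenation idea, you could add a key column like the one below. The column names (Date, Amount, Balance) are assumptions – substitute the columns that together make your rows unique:

```powerquery
// Sketch: build a surrogate ID by concatenating columns that together
// uniquely identify a row. A separator stops "12|3" colliding with "1|23".
#"Added ID" = Table.AddColumn(#"Changed Type", "ID", each
    Date.ToText([Date], "yyyy-MM-dd") & "|" &
    Number.ToText([Amount]) & "|" &
    Number.ToText([Balance]), type text)
```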

I hope you find this article useful, and/or that it inspires you to come up with your own innovative ways to use Power Query.


If you want a comprehensive lesson on how to use Power Query, check out my training course here.

29 thoughts on “Self Referencing Tables in Power Query”

  1. I desperately need this to work for me – but I don’t get the same results??
    I’ve created a simple table in Test.xls, then loaded it in Test 2.xls using Power Query.
    Added the additional column, loaded from table, merged with itself, etc.
    But see the behaviour in this video I posted on YouTube.

  2. Thank you for sharing. I think I encountered the same issue as others here: if I cancel the new column, the columns become independent again and a new sort of the data leads to a mess. So I kept (and hid) the added column and it seems to work 😉

  3. But did you really have this idea rolling around in your head for a year prior, Matt? You totally stole the technique AND the name from me. Sorry, but this has been bugging me for years lol

  4. Hi, Matt! It’s a really powerful trick, thanks a lot! I have another issue like this that I can’t handle. I need to change an already loaded value (let’s say a date in your table 1), and that new value should stay fixed. Maybe you could give me an idea of how to solve it.

      1. Hi, Matt, the scenario is simple. I have raw data from a DB, and I know that a particular field in a particular row is incorrect. I don’t have the ability to correct the DB, but the PBI model should be correct. I need a way to make changes on the fly, whether it’s a fact table or a dimension table. For now I use an algorithm with row substitution from a table, but it’s a little clumsy.

          1. Many thanks for the reply, Matt! I’ve worked out such an approach, but it would be great (in my dreams) to make it as easy as your self-referencing solution (just make changes in the output table) 🙂

  5. Matt,
    In my sales data among Customer ShipTo names there is some redundancy, such as “ABC Cleaners” and “ABC Cleaners Inc”. I don’t have access to the primary source to remove duplicates there. Each month I get an updated file. I want to keep the original Customer ShipTo names, so I use your excellent method above to build a crosswalk/lookup table then build a relationship between [CustShipToName] (original w duplicates) in the original table to [CustShipToMergedName] in my lookup table. Works fine in Excel PowerQuery as you show here.

    Can this be done in Power Bi desktop? Imke posted that Power BI makes self-referencing tables easy, but what about your method of adding an editable column to a self-referenced table: can this be done in Power BI Desktop? I’m relatively new to Power BI, and haven’t found a way to easily edit individual cell values (without writing a new function step in PowerQuery for each edit) in the manner needed to update values in my crosswalk table. If I want to run my report again, this time merging sales of XYZ Co and XYZ Assoc because new information tells me they are same company, how best do I do this in Power Bi using your method.

    My workaround is to use Excel to create the lookup table, write the results to a separate CSV file, then add it to Power BI. Can you think of a better method?


    1. Good question Dan. There is nowhere in Power BI that has a “cell” reference where you can take an extract from a query and then manually add data into a new column. So you can do it the way you are doing it (which is not bad). Another approach I can think of (not quite as good, but not bad) is:
      1. Write an “enter data” query to create your substitute table and load it to Power BI
      2. Write an audit query that returns the new values that you need to match that currently don’t have a substitute. Return those values to an audit table in Power BI.
      3. When you refresh, check this audit table. If it contains any values, copy them to the clipboard
      4. Go back into the enter data query, paste the new records to the bottom of the table and manually add the substitutes
      5. Refresh and make sure the audit table is empty. You could even put a warning flag on your report that tells you when there are records in the audit table.

      Could that work?
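      The audit query in step 2 could be sketched with an anti-join along these lines. The query and column names (SalesData, Substitutes, CustShipToName) are assumptions taken from this thread:

```powerquery
// Sketch: names in the sales data that have no entry yet in the
// substitute table; a left anti join keeps only the unmatched rows.
let
    Unmatched = Table.NestedJoin(SalesData, {"CustShipToName"},
        Substitutes, {"CustShipToName"}, "Match", JoinKind.LeftAnti),
    Names = Table.Distinct(
        Table.SelectColumns(Unmatched, {"CustShipToName"}))
in
    Names
```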

  6. Matt, my name is Samir and I live in Azerbaijan. You can’t believe it – this is what I was looking for for many, many months. I also added comments to my Power Query results, and after sorting, the data moved. I could not understand what was going on. Now I understand why this occurs. You are fantastic!!!! Thank you very, very much!!

  7. I am trying to use this approach to update a table with daily Excel files containing closing stock price info, but something is wrong and at some point it doesn’t work. Let’s assume I start with a daily stock data file stored in a workbook, read from it and import the data into Power Query. I then clean my data and load it to a table. Then I create a new query from this table and add a new column using Date.ToText(#date(2017, 8, 11)), where essentially I tell Power Query that the specific file it will read from and add to the db will be for the 11th of Aug, then another one for the 14th, etc.
    Next step – per your post – I create a connection and then try to merge the original table with the new one and expand the date field only, but I get “null” for dates. Is this because during the merge I do not have unique IDs? Should I create an extra column with, let’s say, the stock ticker and the custom date, e.g. MSFT11/8/2017?

    thank you in advance for your help.

    best regards,


    1. Zaf, my technique has unique IDs, and the new data is manually added in Excel, not Power Query. It sounds like you don’t have IDs and are trying to add data automatically. If you can automatically identify the columns in Power Query before loading to Excel, and you then manually add data (data, not formulas) in Excel, then I see no reason why it won’t work. If that is not what you are doing, then you will simply have to test it and see what you can work out.

  8. Thank you for the quite useful post! I am trying to use its logic to get stock market daily data from the web and from another source (SAP BI). I am not sure, though, what would be the best way to flag the date, as in some cases I might pull the data on a later date (when downloading from SAP).
    1. Let’s assume I want to update a daily price table and, due to vacation, I delayed the update for a whole week. Once I start downloading the daily data, I want my query to read each day and add it to the db.
    2. In the case of the web update, I link the original source to the website and need to re-visit this daily so that it adds the new data and creates some history. What is the optimal way to do this? Is there a way to schedule the data import on a daily basis even if I am not in front of the PC?

    thank you in advance for your support!

  9. Hi Matt.
    Wrote a post on Microsoft Communities. A lady replied.
    Her solution was to simply delete the additional columns that appear, denoted with the ascending number sequence.
    The query worked after that. Tried again from scratch and the numbers still show up again. At least deleting the columns fixes the issue.
    Guessing it is a bug or addon issue.
    Thanks again for a very useful post. Have implemented it at work on a multi-file and multi sheet query. It is working like a charm.

      1. I have Excel 2016 and last time I tried it worked fine. If you can share more information about the problem I would be happy to look

  10. Hi. Great and helpful post.
    Have it working, with one minor issue (tried 15 times mirroring your data and following exact steps).

    When completing the final step, after clicking load and close (after the merge), an additional Comments column shows up called “Comments2”. Everything works fine (sort and filter, open and close and the additional data remains within the relative record), but can’t get rid of (or hide) the Comments2 column.

    Also tried the same above while adding in 15 additional columns. The same thing happens where the column name is replicated and a 2, 3, 4…n is added on at the end of the table.

    Any suggestions? Thanks in advance.

          1. Hi Matt,
            Thanks for going through the extra work to make the video while testing the code. Much appreciated.
            Our steps are exactly the same. Couple of notes:
            In the video, at 1:54, you can see the table expand, then contract (image link below). It seems my version of Excel is not doing the contracting part (or removing the “Column2” column)

            Curious what your code looks like on the Comments query. Noticed after refreshing that the “Column2” column appeared.
            Image screen shots show the changes. Odd the code doesn’t change.

            Pre-refresh Code:
            Post-refresh Code:

            Excel: 2016 Version 1609 (Build 7369.2095)

            Looks like a tech support question for Microsoft. If I learn anything, will update the post. Thanks again for sharing this very useful query tool!!

            1. My code looks just like yours. Very strange. I even added a second column like you did and it still worked my end.
              let
                  Source = Excel.CurrentWorkbook(){[Name="Data4"]}[Content],
                  #"Changed Type" = Table.TransformColumnTypes(Source, {{"ID", Int64.Type}, {"Date", type datetime}, {"Amount", type number}}),
                  #"Merged Queries" = Table.NestedJoin(#"Changed Type", {"ID"}, Comments, {"ID"}, "NewColumn", JoinKind.LeftOuter),
                  #"Expanded NewColumn" = Table.ExpandTableColumn(#"Merged Queries", "NewColumn", {"Comment"}, {"Comment"})
              in
                  #"Expanded NewColumn"

              Let me know if you hear something from MS

  11. Very nice blogpost!
    To prevent a mess caused by accidental multiple loads, you can add a Remove Duplicates step on the index – just to be on the safe side 🙂
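    In M, that safeguard is a one-liner (the step names here are assumptions):

```powerquery
// Sketch: drop accidental duplicate loads, keeping one row per ID.
#"Removed Duplicates" = Table.Distinct(#"Changed Type", {"ID"})
```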

