Self Referencing Tables in Power Query

I have had this idea in my head for over a year, and today was the day that I tested a few scenarios until I got a working solution. Let me start with a problem description and then my solution.

Add Comments to a Bank Statement

The problem I was trying to solve: I download a digital copy of my bank statement, load it into Excel using Power Query, and then want to add some commentary to some of the transactions.  If you just add the comments to the source file then everything is easy, but it is often not practical to add the comments directly into the source file, for a number of reasons.  It could be that you get a new copy of the source data each time you refresh (e.g. a CSV file that replaces the old file and simply gets larger each month), or that you are using a “combine multiple files” technique and only want to deal with a summarised, cleansed version of the final data instead of dealing with the source files directly.

Once you have loaded the table with Power Query, it is possible to manually add a new column to the resulting table (shown below).  The problem with this approach is that the comments are not logically linked to the rows of data in the table.  Take the example below: the data is loaded from table 1 using Power Query into table 2 on the right.  I then manually added a new comment column and typed in some comments.

[image: manually added comments column]

The problem that can happen is demonstrated below.  If the sort order of the source table changes and you then refresh the query, the comments no longer align with the original data – and this is despite having a unique ID column in the original table.

[image: comments misaligned after sorting the source table]

The problem is that the new column I manually added to the Power Query table on the right is not logically joined to the actual table; the comments sit in a column next to the table rather than being part of its rows.

Enter Self Referencing Tables

I read this blog post from Imke quite some time ago, and that was what gave me the idea for how I could solve this problem. The idea is to load table 2 above a second time (after adding the comments) and then join it back to itself, logically linking the manually added comments to the rows in the main table.

Note: Prior to completing the self referencing steps below, I used Power Query to create the second table from the source table. This was my starting point.

Then I loaded the resulting table 2 a second time as follows:

  1. Select a cell in the second table
  2. Go to the Power Query menu
  3. Create a new query using “from table”.
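Behind the scenes, “from table” generates a short M query along these lines. This is only a sketch: the table name “Table2” and the column types are assumptions for illustration.

```
// Sketch of the query generated by "from table".
// "Table2" and the column names/types are assumed for illustration.
let
    Source = Excel.CurrentWorkbook(){[Name = "Table2"]}[Content],
    Typed = Table.TransformColumnTypes(Source, {{"ID", Int64.Type}, {"Comments", type text}})
in
    Typed
```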

[image: creating a new query using “from table”]

I renamed this new query “Comments” and selected “Close and Load To” so that it only created a connection but didn’t load the new table to the worksheet.

[image: Close and Load To – connection only]

The next thing to do is to merge the original query with the new query.  To do this:

  1. Go back and edit the original query for table 2
  2. Add a new step to the query using “Merge Queries”
  3. Merge the second “Comments” query with the original query by joining on the ID column as shown below.
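In the underlying M code, the merge step looks roughly like this. The step name #"Previous Step" is a placeholder for whatever step precedes the merge in your query; everything here is a hedged sketch of what the Merge Queries UI generates.

```
// Sketch of the Merge Queries step; step names are assumed.
// Left-join the table back to the connection-only "Comments" query on ID.
Merged = Table.NestedJoin(
    #"Previous Step",   // last step of the original query (placeholder name)
    {"ID"},             // key column in the original query
    Comments,           // the connection-only query created above
    {"ID"},             // key column in the Comments query
    "Comments",         // name of the new nested-table column
    JoinKind.LeftOuter
)
```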

[image: Merge dialog joining on the ID column]

This will give you a new column in your query, and the column contains a table (as shown below).  You can then expand this new column to extract the comments.

[image: merged query showing the new nested table column]

I extracted only the comments column, and deselected the last option as shown below. Note: it is important that you deselect “Use original column name as prefix” so that the new column has the same name as the original source column.
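In M, the expand step looks something like this (step names are assumed). Passing the original column name as the new name is exactly what deselecting the prefix option achieves.

```
// Sketch of the expand step; "Merged" is an assumed step name.
// Expand only the Comments column, keeping its original name (no prefix).
Expanded = Table.ExpandTableColumn(
    Merged,          // the merge step created earlier
    "Comments",      // nested-table column created by the merge
    {"Comments"},    // column(s) to extract from the nested table
    {"Comments"}     // new name(s) – same as the source, so no prefix
)
```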

Click OK, then close and load.

[image: expand column options with prefix deselected]

When you look at the table now, it all looks the same as before (see below), but there is one important difference: the last column in the Power Query table is now part of the table, not a manually added-on column.  You now have a self referencing table.  EDIT 2017: if you get a new column called Comments2, just delete it and everything else will work. There was a change to the way Power Query works at some stage, and this duplicate column with “2” appended now appears.  You can (and should) simply delete it – it will then all work.
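If you prefer to handle that stray column in M rather than deleting it through the UI, a removal step along these lines (step name assumed) will delete the column when it appears and do nothing when it doesn’t:

```
// Remove the duplicate column if it exists; ignore it if it doesn't.
// "Expanded" is an assumed name for the preceding step.
Cleaned = Table.RemoveColumns(Expanded, {"Comments2"}, MissingField.Ignore)
```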

[image: final table with the merged comments column]

To prove this, I repeated the same sort test as before.  First I sorted the source table, then refreshed the Power Query table – you can see the results below.  Note how this time the comments stick with their rows in the table – sweet!

[image: comments stay aligned after sorting and refreshing]

Incrementally Add Comments

Now that the queries are all set up, you can incrementally add comments to the Power Query table any time you like.  You can also add new data to the source data – it will all work as you would expect, as shown below.

[image: new source rows and comments added incrementally]

Real Life Data

The above demo of course uses test data.  A more realistic real-life scenario would be to download a CSV file from your bank and use one of the many “combine multiple files” techniques to import the data.  It is likely that your source data is messy and contains lots of columns you don’t need. In this scenario, it is not easy to add comments to the source data unless you open each file and add the comments there – not ideal.  Another problem could be that your data source is a CSV file that gets deleted and replaced with a new, larger file containing the latest records.  In both of these scenarios, you will need to keep your manually added comments somewhere other than the source table.  The solution provided here is a simple and intuitive way for the user to manage this (albeit with a little bit of setup first).

The example shown in this post will work just fine as long as two conditions are maintained:

  1. You have a unique ID for each record
  2. You don’t accidentally load duplicate data – so don’t do that!
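As a safety net against the second condition, you could add a Remove Duplicates step on the ID column early in the query. A minimal sketch (the step name “Source” is assumed):

```
// Keep only the first row for each ID, guarding against accidental double loads.
Deduped = Table.Distinct(Source, {"ID"})
```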

But What if I Don’t Have an ID Column?

The next logical issue you may have is a scenario where you don’t have an ID column.  I tested quite a few scenarios, including adding an ID column inside the first Power Query step (Add Index Column).  I haven’t been able to find a solution that works 100% of the time, but there is one that can work for most people.  If your data is such that each row in the table is unique (e.g. date, maybe a time column if you are lucky enough to have one, transaction amount, remaining balance, etc.), you can create a unique ID by concatenating these columns into a single column.  This should solve the issue for most people in non-commercial scenarios (those in commercial scenarios are more likely to have a reference ID anyway).  You could even create an audit query that checks for duplicates and warns you if they occur, but that sounds like another blog post.
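One way to build such a concatenated key is with a custom column. This is a sketch only: the column names (Date, Amount, Balance) are assumptions about a typical bank statement, and “Source” is a placeholder step name.

```
// Build a composite ID from columns that together identify a row.
// Column names are assumed; adjust them to your own data.
AddedID = Table.AddColumn(
    Source,
    "ID",
    each Date.ToText([Date], "yyyy-MM-dd") & "|"
        & Number.ToText([Amount]) & "|"
        & Number.ToText([Balance]),
    type text
)
```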

I hope you find this article useful, and/or that it inspires you to come up with your own innovative ways to use Power Query.

If you want a comprehensive lesson on how to use Power Query, check out my training course here: http://xbi.com.au/pqt


Comments

  1. Very nice blog post!
    In order to prevent messing things up through accidental multiple loads, you can add a Remove Duplicates step on the index – just to be on the safe side 🙂

  2. Hi. Great and helpful post.
    Have it working, with one minor issue (tried 15 times mirroring your data and following exact steps).

    When completing the final step, after clicking load and close (after the merge), an additional Comments column shows up called “Comments2”. Everything works fine (sort and filter, open and close and the additional data remains within the relative record), but can’t get rid of (or hide) the Comments2 column.

    Also tried the same above while adding in 15 additional columns. The same thing happens where the column name is replicated and a 2, 3, 4…n is added on at the end of the table.

    Any suggestions? Thanks in advance.

  3. Hi Matt.
    Wrote a post on Microsoft Communities. A lady replied.
    Her solution was to simply delete the additional columns that appear, denoted with the ascending number sequence.
    The query worked after that. Tried again from scratch and the numbers still show up again. At least deleting the columns fixes the issue.
    Guessing it is a bug or addon issue.
    Thanks again for a very useful post. Have implemented it at work on a multi-file and multi sheet query. It is working like a charm.
    Cheers

  4. Thank you for the quite useful post! I am trying to use its logic and get stock market daily data from the web and from another source (sap bi). I am not sure though what would be the best way to flag the date as in some cases I might pull the data on a later date (when downloading from sap).
    1. let’s assume i want to update a daily price table and due to vacation i delayed the update for a whole week. once i start downloading the daily data, i want my query to read each day and add it to the db
    2. in the case of the web update, i link the original source to the website and need to re-visit this daily so that it adds the new data and creates some history. what is the optimal way to do this? is there a way to schedule the data import on a daily basis even if i am not in front of the pc?

    thank you in advance for your support!

    • in point 1, you say “add it to the db”. Where is your db? Excel is not a db and you can’t use this technique to use it as a db (as far as I can tell anyway)

  5. I am trying to use this approach to update a table with daily excel files with closing stock price info but something is wrong and at some point it doesn’t work. let’s assume i start with daily stock data file stored in a workbook and read from it and import the data in power query. i then clean my data and load it to a table. then i create a new query from this table and add a new column using Date.ToText(#date(2017, 8, 11)) where essentially i say to power query that the specific file it will read from and add to the db will be for the 11th of Aug, then another one for the 14th etc…
    next step – per your post – I create a connection and then try to merge the original table with the new one and expand the date field only but i get “null” for dates. is this because during the merge i do not have unique ids? should i create an extra column with let’s say the stock ticker and the custom date e.g. MSFT11/8/2017 ??

    thank you in advance for your help.

    best regards,

    Zaf

    • Zaf, my technique has unique IDs and also the new data is manually added in Excel, not Power Query. It sounds like you don’t have IDs and are trying to add data automatically. If you can automatically identify the columns in Power Query before loading to Excel, and then manually add data (data, not formulas) in Excel, then I see no reason why it won’t work. If that is not what you are doing, then you will simply have to test it and see what you can work out.
