Navigating 600+ M functions

Whenever I’m faced with a data mashup problem in Power BI, I try to check if it can be resolved with a standard M function exposed by the intrinsic variable #shared. Navigating this data structure when you don’t know what you’re looking for is a tedious task. Up until recently my function discovery workflow involved turning the output of #shared into a table and then searching for keywords in the function names, hoping that the developers had named the functions accordingly. However, after discovering where the documentation for the functions is stored, I’ve come up with more flexible ways of navigating the 600+ definitions.
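
For reference, that old workflow is only a couple of lines; here is a minimal sketch of it (the keyword Split is just an example):

let
    Source = Record.ToTable(#shared),
    // keep only the entries that are function values
    Functions = Table.SelectRows(Source, each Type.Is(Value.Type([Value]), type function)),
    // hope the developers named things consistently and search the names
    Filtered = Table.SelectRows(Functions, each Text.Contains([Name], "Split"))
in
    Filtered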

Function documentation

A while ago I stumbled across a wonderful GitHub repository — Power BI Desktop Query Extensions written by Taylor Clark. This is one of those hidden treasures filled with golden nuggets, and I recommend it to anyone interested in taking their Power Query skills to the next level. Although the repository is actually a single file with 545 LOC, in those lines you’ll see some of the best examples of text, list and table manipulation. My favorite finds are those on testing and documenting M functions.

Apparently the documentation that you see while browsing #shared is stored as a metadata record on the type of the functions.

ListPositions

In order to view it you need to first check the type of the function and then retrieve the metadata. For example:

= Value.Metadata(Value.Type(List.Positions))

Metadata

Something that looks like a record in the query view can easily be transformed into a table, and once it is a table there are plenty of ways to render the stored information.
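
For example, flattening the metadata record from above is a one-liner:

= Record.ToTable(Value.Metadata(Value.Type(List.Positions)))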

Function signature

Before proceeding any further there is still one piece of the puzzle missing: the signature of the functions. I couldn’t find it either in the documentation metadata or via other standard functions, so I had to roll out a custom query that generates a text like:

"function (list as list) as list"

It uses a combination of Type.FunctionParameters, Type.FunctionReturn and a looooong list of type-to-text conversions.
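
The full query isn’t reproduced here, but a stripped-down sketch of the idea, covering only a handful of primitive types, looks roughly like this:

Signature = (f as function) as text =>
    let
        // crude type-to-text conversion; the real query handles many more types
        typeToText = (t as type) as text =>
            if Type.Is(t, type list) then "list"
            else if Type.Is(t, type table) then "table"
            else if Type.Is(t, type text) then "text"
            else if Type.Is(t, type number) then "number"
            else if Type.Is(t, type record) then "record"
            else if Type.Is(t, type function) then "function"
            else "any",
        parameters = Type.FunctionParameters(Value.Type(f)),
        parts = List.Transform(
            Record.FieldNames(parameters),
            each _ & " as " & typeToText(Record.Field(parameters, _))),
        returnType = typeToText(Type.FunctionReturn(Value.Type(f)))
    in
        "function (" & Text.Combine(parts, ", ") & ") as " & returnType

Calling Signature(List.Positions) returns the text shown above.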

Stitching everything together

I started from the #shared record, transformed it into a table, filtered it down to function definitions and then added columns one by one for Category, Description, function Examples and Signature. After some string manipulation to parse the Module names I ended up with the query below.

I was planning to represent this data in a tile chart, similar to a periodic table. However, after experiencing performance problems while rendering 600 tiles with Infographic Designer 1.5.2, I gave up on the whole idea and opted to visualize all of this information in a treemap.

I’ve added a dummy column ([Uno]) with the value 1 and used it as the Value field in the treemap chart. Then I’ve added the [Module] column to the Group field and [FunctionName] to the Details section. All of the remaining columns ([Description], [Signature] and [Examples]) I’ve added to the Tooltips section of the chart. To control the treemap, I’ve used a matrix which acts as a filter on the [Category] column.

treemap

Click to view the embedded version of the report ➡

Exporting Power Query tables to SQL Server

Power Query is a great tool for data mashup, however when it comes to really showing its powers as a desktop ETL tool, you realise that no matter how powerful it is at the E and the T, the L is falling behind, as you can only load the data into the embedded tabular data model that comes with Power BI.

Ignoring R, local cubes and HTTP POST requests, I wanted to showcase a few other methods for exporting data from Power Query and loading it to SQL Server.

The logic I follow is very simple — because there is no obvious limitation to the type of queries you can run against OleDb or ADO.NET data sources, a SELECT query will be executed with the same success as an INSERT query.

As a prerequisite, on the SQL Server side I’ve created a table which I populate initially with single records, then with 1M records.

OleDb

Following the logic described above, inserting a single entry into the Demo table via OleDb looks fairly trivial.
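
The original snippet isn’t embedded here; as a minimal sketch (the server, database, table and values are placeholders for my demo setup):

let
    connectionString = "Provider=SQLOLEDB.1;Data Source=localhost;Initial Catalog=DemoDB;Integrated Security=SSPI",
    // a single-row INSERT runs just like any SELECT would
    insert = OleDb.Query(connectionString, "INSERT INTO dbo.Demo ([Key],[Value]) VALUES (1, 'A8F0B2')")
in
    insert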

However, if you try to follow this approach you’ll soon notice that in addition to fiddling around with Text.Format to get your INSERT statement right, you will also have to approve each parameter variation from your SQL statement, which makes this approach unfeasible unless you construct your INSERT statement to include all of the millions of records in a single string.

ADO.NET

On the other hand an ADO.NET connection wrapped in a Value.NativeQuery function provides a more familiar approach to performing data INSERT. This method has the convenience of passing query parameters as a record or list inside the SQL statement. As a result you need to approve the firewall rule for the SQL query only once.
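
A sketch of the call, again with placeholder names; the record in the third argument supplies the @key and @value parameters of the statement:

= Value.NativeQuery(
    Sql.Database("localhost", "DemoDB"),
    "INSERT INTO dbo.Demo ([Key],[Value]) VALUES (@key, @value)",
    [key = 1, value = "A8F0B2"])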

Changing the scale

The methods I’ve described so far work reasonably well for times when you need to insert a few hundred records in a SQL table. However when the problem explodes to millions of rows, you need to do some prep work before sending over all of that data. I’ll detail 3 techniques for exporting datasets with millions of records: Table Valued Parameters, XML and JSON.

As an example I use a 1M-record table, dummy, which I generate from a list with an incremental sequence Key and a GUID-generated Value field.
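
A minimal sketch of generating such a table (Text.NewGuid supplies the Value field):

dummy = Table.FromRecords(
    List.Transform(
        {1..1000000},
        each [Key = _, Value = Text.NewGuid()]))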

TVP

When doing bulk inserts in ADO.NET, the most recommended approach is to use a Table Valued Parameter to send data to a Stored Procedure which performs the INSERT. As a prerequisite, on the SQL Server side I had to add a table type and a stored procedure that accepts this table type as a READONLY parameter.

Although sending a TVP into an SP is a straightforward task in .NET, it requires casting the data reader (table) to a SqlDbType.Structured data type. 

I wanted to rely on Power Query’s internal data type conversion and sent a table as a parameter to my newly created SP. Needless to say, I was greeted with the error below.


After giving up on the idea of casting a Power Query table into a SQL Structured data type, I’ve decided to replicate the internal call of an exec SP with @TVP. Thus I’ve built a rather lengthy 1002-row SQL statement that declares an internal table, inserts all the records from the dummy table and calls the SP with the internal table as a parameter.
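
The statement itself isn’t reproduced here; a rough sketch of how it can be assembled in M, assuming the dummy table from above and hypothetical dbo.KeyValueType / dbo.InsertDemo objects on the SQL side, would be:

let
    inserts = Table.TransformRows(
        dummy,
        each "INSERT INTO @tvp ([Key],[Value]) VALUES (" & Number.ToText([Key]) & ", '" & [Value] & "');"),
    sql = Text.Combine(
        {"DECLARE @tvp dbo.KeyValueType;"} & inserts & {"EXEC dbo.InsertDemo @tvp;"},
        "#(lf)"),
    result = Value.NativeQuery(Sql.Database("localhost", "DemoDB"), sql)
in
    result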

XML

Back in the days when XML was still cool, sending some tabular data to SQL Server was a matter of serializing a recordset into an XML document and sending it to a stored procedure that parsed it back into a recordset. Nothing has changed since then on the SQL Server side.

Since Power Query lacks any functions for constructing XML documents, I found it very difficult to come up with an approach for building binary XML documents that performs on par with JSON serialization. I’ve tried various combinations of looping through the records of a table via List.Generate and List.Accumulate; nevertheless, Table.TransformRows combined with avoiding any calls to Text.Format seemed to deliver the best performance.

On calling Value.NativeQuery, Power Query converts the binary string into the SQL varbinary(max) data type, so the final call only requires mapping the placeholder to the binary XML document.
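
A condensed sketch of both steps, assuming the dummy table from above and a hypothetical dbo.InsertDemoXml procedure that shreds a rows/r document:

let
    rows = Table.TransformRows(
        dummy,
        each "<r k=""" & Number.ToText([Key]) & """ v=""" & [Value] & """/>"),
    binaryXml = Text.ToBinary("<rows>" & Text.Combine(rows) & "</rows>"),
    result = Value.NativeQuery(
        Sql.Database("localhost", "DemoDB"),
        "EXEC dbo.InsertDemoXml @xml",
        [xml = binaryXml])
in
    result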

JSON 

Since the introduction of JSON support in SQL Server 2016, sending and receiving data to/from SQL Server got a lot easier. Power BI is no exception: sending data to a SQL Server table requires adding an SP with a JSON parameter and, on the Power Query side, serializing the dataset as a text-based JSON object with Json.FromValue.
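
A sketch, assuming the dummy table from above and a hypothetical dbo.InsertDemoJson procedure with a single nvarchar(max) parameter:

= Value.NativeQuery(
    Sql.Database("localhost", "DemoDB"),
    "EXEC dbo.InsertDemoJson @json",
    [json = Text.FromBinary(Json.FromValue(dummy))])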

Because of native JSON support on both the SQL and Power BI sides, this was by far the easiest and fastest way of sending data across.

4 ways to get USERNAME in Power Query

Regardless of what your requirements are, be that providing some user context while slicing data or a filtering mechanism for data querying, having a function like USERNAME() in the Power Query toolbox can greatly simplify your data mashup scripts. While trying to build a more robust approach for self-referencing queries in M, I’ve managed to collect 4 methods for getting the current username in Power BI. These methods rely on a variety of dependencies (R, NTFS, LocalDB and PowerShell) and come with their own advantages and disadvantages. Based on your setup you might find some of the methods more appropriate than others, however they are not a universal replacement for a natively supported function like USERNAME().

1. R.Execute()

Once again the Swiss Army knife of Power Query, R with its hook function R.Execute, never fails at extending Power BI’s capabilities. Here is an example of how I call the system command whoami via the R function system2 and use its output as a data source in Power Query.
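
The embedded gist isn’t reproduced here; a minimal sketch of the pattern is:

let
    // run whoami and wrap its console output in a data frame
    Source = R.Execute("output <- data.frame(user = system2(""whoami"", stdout = TRUE))"),
    output = Source[output]
in
    output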

Pros: This approach can be adapted to get the STDOUT of other system commands as well.

Cons: It depends on calling R, which is currently supported only by Power BI Desktop.

2. NTFS

The output table of Folder.Contents in Power Query contains the field Access Date, which records the timestamp of when a file/folder was moved, opened, or accessed by any other means. Assuming that the NTFS Access Date records for the C:/Users sub-folders are up to date, the folder with the latest Access Date should belong to the current user.
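
A sketch of the idea; Date accessed is the column under which Folder.Contents surfaces the Access Date:

let
    users = Folder.Contents("C:\Users"),
    // the most recently accessed profile folder is assumed to belong to the current user
    sorted = Table.Sort(users, {{"Date accessed", Order.Descending}}),
    currentUser = sorted{0}[Name]
in
    currentUser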

A note of caution here: NTFS might take up to an hour to update the file times, which makes this method unreliable in certain scenarios. Nevertheless, I’ve recommended this shortcut in the past, and it is still my preferred way of getting to the msmdsrv.port.txt file.

Pros: Simple

Cons: Unreliable, as NTFS can take up to 1 hour to update the Access Date records of a folder. Antivirus and other background processes can interfere with the correctness of the Access Date ⇒ Current User mapping. Also, the output of this method doesn’t contain the domain name.

3. LocalDB or any other SQL source

If you happen to have a SQL Server instance lying around or by some curious accident you’ve installed SQL Server Express with support for LocalDB, you can make a trivial SQL query to get SYSTEM_USER. You don’t even need an assigned database for that.
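
A sketch against the default instance name of a recent LocalDB version (pointing at master just to have a database context; adjust the instance name to your setup, as noted below):

= Sql.Database("(localdb)\MSSQLLocalDB", "master",
    [Query = "SELECT SYSTEM_USER AS CurrentUser"])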

There is always the option of installing only the LocalDB feature of SQL Server Express.

install

Make sure to select the version that is relevant to your setup, as the connection strings for LocalDB have changed over the years.

username

Pros: You might already have LocalDB installed.

Cons: Requires access to a SQL Server instance or installation of LocalDB.

4. HTTP API for Executing PowerShell Scripts

This method relies on a lightweight service which listens on port 8888 and executes PowerShell commands sent over HTTP. The PowerShell team has released this script/service as an alternative to the SOAP-heavy Windows Remote Management; for more details please read the associated blog post: Simple HTTP api for Executing PowerShell Scripts. The service itself is written in PowerShell and the 300 lines of code are very easy to follow and adapt to the relevant scenario.

Once I have downloaded, installed and imported the module, I start an instance by using the commandlet Start-HTTPListener.

Start-httpListener

To get the current user in Power Query I make an HTTP call to localhost:8888 with the system command whoami.
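
A sketch of that call; I’m assuming here that the listener accepts the command as a query-string parameter, so check the module’s own documentation for the exact contract it expects:

= Text.FromBinary(
    Web.Contents("http://localhost:8888/", [Query = [command = "whoami"]]))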

Back in the PowerShell console, the -verbose flag provides more details on what is happening behind the scenes:

whoami

Pros: This method can be expanded to sourcing the output from other system commands as well as PowerShell commandlets.

Cons: Introduces an additional dependency which might interfere with users’ execution policies. The PowerShell commandlet has to be running in the background.

IMAO

We shouldn’t be asking the Power BI team to implement only small utilities like UserName() or CookBreakfast(). Instead we should see the bigger problem and ask for a bigger hammer. I would want Power BI to expose a function for parsing console output; this would help with accessing a lot of obsolete data sources as well as standard system commands. Or, if we would like to keep to the system-specific domain, how about having a connector for WMI, the same as SSIS does?

Extracting Power Queries

TL;DR Another place to find your Power BI queries is in the data sources of the embedded tabular model. They are stored in a zip file which is encoded as a base64 string in the Mashup field of the connection string.

While we are waiting for the Microsoft team to release new features that would allow exporting Power Query queries outside the Power BI ecosystem, we might as well explore all of the other possibilities available to us. So far I’ve counted 3 ways of getting hold of our precious M queries:

  1. Sending a frown email
  2. Parsing the *.pbix files
  3. Using the clipboard (doh!)

As I’ve been poking around this black box that is Power BI, I have discovered another place where M queries are stored — the data source connection string of the embedded tabular model. The steps that I took in order to get there are:

  1. Find the port number on which the $Embedded$ Tabular model is running
  2. Connect to that Tabular instance and get the Connection Strings of the Tabular Schema Data Sources
  3. Parse the Connection Strings for Mashup field
  4. Convert the base64 Mashup field into a binary file
  5. Unzip the binary file and display Formulas/Section1.m file

Here is me trying to do everything in one go via PowerShell.

PowerThiefEZGIF

When it comes to scripting, the power of the Shell is far greater than that of the Query, but in this case all of the prerequisites are there to replicate the above steps in pure M.

 

Finding $Embedded$ Tabular port number

I’ve tried to solve this problem before via R scripts, however as Mim pointed out to me, there is a simpler Power Query way of doing this. Because Power BI stores the auto-generated port number for the tabular model in a text file (msmdsrv.port.txt) under AnalysisServicesWorkspaces, the task is reduced to reading the contents of that file.
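
A sketch of that read, assuming the default (non-Store) Power BI Desktop workspace folder; the port file is typically UTF-16 encoded:

let
    // adjust <user> to your own profile folder
    workspaces = "C:\Users\<user>\AppData\Local\Microsoft\Power BI Desktop\AnalysisServicesWorkspaces",
    files = Folder.Files(workspaces),
    portFile = Table.SelectRows(files, each [Name] = "msmdsrv.port.txt"){0},
    port = Text.FromBinary(portFile[Content], TextEncoding.Utf16)
in
    port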

By the way, my approach uses short-circuit logic, assuming that the NTFS Access Date records for C:/Users are up to date; apparently they might take up to an hour to update. So if you find this script misbehaving, have a look at Imke’s blog post for the original script.

Get Tabular Schema Data Sources

You can read about the latest undocumented Tabular DMVs on Meagan’s blog, or fire up SQL Server Profiler, check the boxes for End Command events, script out the tabular model from SSMS, and you’ll discover a plethora of DMVs being displayed in the profiler’s trace. The DMV hiding all of the connection strings is TMSCHEMA_DATA_SOURCES.

Extract Mashup field into binary

I’ve encountered some difficulties parsing the connection string with the Text.Split function, so I’ve used a Splitter function instead.
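
A sketch of the parsing; Splitter.SplitTextByEachDelimiter splits only at the first = sign, which matters because the base64 payload itself ends in = padding (depending on how the value is quoted you may also need to trim surrounding quotes):

extractMashup = (connectionString as text) as binary =>
    let
        pairs = Splitter.SplitTextByDelimiter(";")(connectionString),
        keyValues = List.Transform(pairs, each Splitter.SplitTextByEachDelimiter({"="})(_)),
        mashup = List.First(List.Select(keyValues, each Text.Trim(_{0}) = "Mashup")){1}
    in
        Binary.FromText(mashup, BinaryEncoding.Base64)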

Unzip Formulas/Section1.m from Mashup binary

I’ve seen at least 2 functions for unzipping files in Power Query, written by KenR and Mark White. I’ve also described before how to call the R unz function from an R.Execute wrapper. This is the result of combining the two.

Wrapping all of the functions above, adjusting them to support multiple instances of Power BI and applying some minification techniques, we get the final M script.

Some of the use cases that are still left for me to explore with this approach:

  • pushing M scripts to GitHub, thus replicating the query catalog functionality
  • exporting the Power Queries to *.odc files and consequently importing M scripts from Power BI desktop into Excel

PowerShell version

For the curious minds, here is the PowerShell code I used in Power Query reconnaissance.

Please note that this approach can be used for extracting Power Queries only for already loaded data and only in Power BI Desktop.

R, Power Query and msmdsrv.exe port number

The quest to fetch the details of the ever evasive $Embedded$ connection to the local tabular data model has always interested the Power BI community. There are numerous tips on how to obtain the details involving either shell or PowerShell scripts or even V..B..A…

I want to share with you an approach that is based on using R.Execute() in Power Query, and has the advantage of not introducing a dependency outside the Power BI environment. This is a re-iteration of a pattern that I’ve already mentioned in one of my previous posts, which explained how to use R to get STDOUT as a data source in Power Query.

In the R script I use, the function system2() executes the system commands tasklist and netstat. Because I need to output the results of these commands into data frames, I also use the function read.fwf() in order to parse the fixed-width output of STDOUT.

Already in the Power Query world, I transform the two data frames msmdsrv and ports via the function formatTable, which trims the table contents and promotes the headers. Then I match/join the two tables on [PID] and parse the contents of the [Local Address] column to get the desired port number. Here is the whole script:
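
The full gist isn’t reproduced here; as a rough sketch of just that M-side step, assuming msmdsrv and ports are the two tables returned by R.Execute and already cleaned up by formatTable:

let
    pid = msmdsrv{0}[PID],
    matched = Table.SelectRows(ports, each [PID] = pid),
    // take the text after the last colon so that IPv6-style addresses also work
    port = Text.AfterDelimiter(matched{0}[#"Local Address"], ":", {0, RelativePosition.FromEnd})
in
    port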

If you prefer keeping all the filtering logic inside the R script, here is my take on how to do it:

As a final note I wanted to add that there is actually a way of connecting to the embedded tabular data model which doesn’t require any dependencies. Try the command:

= AnalysisServices.Databases("*")

Unfortunately it uses the same connection context as the live connection so you’ll get an exception like:

Microsoft.Mashup.Evaluator.Interface.ErrorException: Connections to the server context that is currently active are not supported.

ErrorLocal

Which is not very practical.

R.Execute() the Swiss Army knife of Power Query

I rarely use R in Power BI, but when I do, I make sure to keep the data mashup out of it. I don’t have anything against R; it’s arguably the best statistical tool out there. However, when it comes to using an expressive language for doing data wrangling, Power Query (M) is my language of choice.

Although R has been introduced to the Power BI ecosystem as a complementary tool, its capabilities exceed by far those of Microsoft’s analytics suite. That is why, in areas where Power BI is falling behind, I can always rely on R to bridge the gap.

Below I describe 5 patterns of how I use R.Execute() in Power Query. I have divided these into 2 classes – intended and unintended use cases – just to set the expectation for how much support these patterns will be getting from the Power BI team.

Note: When using R.Execute() the only rule I follow is that if I need some data out of an R routine, I must put the execution results into a dataframe. Other than that all R rules apply.

The intended use of R in Power Query

Executing an algorithm

This is probably one of the most legit use cases for R in Power BI, because there is no language/framework/library that can compare to the vast variety of algorithms implemented in R. If you need to run some exotic algorithm on your data, most likely there is an optimised version written in R; you just have to find the right package (and the right repository: CRAN or MRAN, to name a few).

Here is a standard R call executing KMeans clustering on the iris dataset.
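
The embedded gist isn’t shown here; a minimal sketch of such a call is:

// iris ships with R; the cluster assignment is appended as an extra column
= R.Execute("df <- data.frame(iris, cluster = kmeans(iris[, 1:4], centers = 3)$cluster)")[df]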

Calling an R function on Power Query input

Regex is the perfect example here. While we wait for its implementation to arrive in Power Query, why not use R’s regex-related functions regexpr, regmatches, etc.?

Here is an example of parsing @mentions and #tags from a Twitter feed.
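
The original gist isn’t shown here; as a hedged sketch, assuming the feed arrives as a table with a [text] column that gets passed into R as tweets:

let
    Tweets = Table.FromColumns({{"Loving #powerquery, thanks @powerbi!"}}, {"text"}),
    parsed = R.Execute(
        "txt <- as.character(tweets$text)#(lf)df <- data.frame(match = unlist(regmatches(txt, gregexpr(""[@#]\\w+"", txt))))",
        [tweets = Tweets])
in
    parsed[df]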

Or another cheeky example of trying to salvage removed passwords from GitHub commits.

Unintended uses of R in Power Query

My favorite ones of the lot. I must admit these are the patterns that I abuse the most.

Performing file based operations

Say you have some CSV data stored in a ZIP file on your file system somewhere. The only way to get hold of that data is either to manually unzip it or to use one of the lovely UnZip functions written in Power Query by either KenR or Mark White. The R way is even simpler: use read.csv(unz(zip, file.csv)) and you’re done!
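
A sketch with hypothetical file names:

// reads data.csv straight out of archive.zip, no manual unzipping required
= R.Execute("df <- read.csv(unz(""C:/Temp/archive.zip"", ""data.csv""))")[df]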

Exporting data from Power Query

You’ve done your data analysis in Power Query and you’re stuck with all this data in the $Embedded$ Tabular Data Model. The only way to get the data out is by exporting it via a table chart into a CSV. Alternatively, you can use R and effortlessly export millions of records via the write.csv function, without worrying about rendering problems that might be caused by the table chart visualisation.
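
A sketch, with a hypothetical output path and a tiny inline table standing in for your real query:

let
    Dataset = Table.FromRecords({[Key = 1, Value = "A"], [Key = 2, Value = "B"]}),
    exported = R.Execute(
        "write.csv(dataset, ""C:/Temp/export.csv"", row.names = FALSE)#(lf)df <- data.frame(rows = nrow(dataset))",
        [dataset = Dataset])
in
    exported[df]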

Using STDOUT as a data source in Power Query

This pattern was inspired by one of the questions from the Power BI community. Again referring back to R, you can call system commands via the system2() function and use their console output as a data source in Power Query.

Some Power Query internals

Whenever I’m given a black box, I marvel at its elegance. Then after a while frustration settles in, as the INs and OUTs don’t add up and even going through the official documentation doesn’t help. This is when I reach the point where I bring in a hammer and start banging at the black box until something gives in and secrets start pouring out.

The same happened with Power BI and its elegant data mashup engine. I love the fact that it is a “pure, higher-order, dynamically typed, partially lazy functional language”, but the documentation doesn’t tell you much about how things work under the hood. So I started digging. And when I peeked at the Microsoft.MashupEngine.dll internals, I was surprised to find half a dozen modules filled with Power Query (M) code. Since they are written by Microsoft, I assume that these modules can be used as a reference for best practices in Power Query code standards.

Some of the practices shared across all of the modules are:

Each module is wrapped in a section which

is an organizational concept that allows related expressions to be named and grouped within a document

Also, the functions that are exposed to users are decorated with the shared statement –

which omits the requirement to use a section-access-expression when referring to shared members outside of the containing section.

I’ve also seen occasional use of section-access-expression – LibraryModule!UICulture.GetString and TableModule!Table.AddColumns. My interpretation of these practices is: if you create a library in M, make sure to create a new named section and mark all of the public members with the shared decorator.
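
As an illustration (all names invented for the example), a small library module following these conventions would look like:

section MyLibrary;

// public: visible without a section-access-expression
shared MyLibrary.ToProperCase = (input as text) as text => Text.Proper(input);

// private: reachable only through the section-access-expression MyLibrary!InternalHelper
InternalHelper = (x as number) as number => x + 1;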

Moreover, the use of recursion is acceptable as long as it reduces the complexity of the code. I’ve also seen a novel use of the let expression, where the else clause of an if statement required some precomputed values (check Table.SplitColumn).

Here is the summary of Power Query modules that I’ve managed to salvage.

Module.Date.m

Date/time arithmetic which doesn’t cover all of the business use cases, but presents a very good source of inspiration for your own Date-related functions. For even more Date-related functions, make sure to check the pquery repository.

Module.List.m

This module is a great introduction to functional programming. It contains 10 List-related functions that are built upon other List-related functions. My favorite one is:

shared List.RemoveLastN = (list as list, optional countOrCondition) as list =>
    List.Reverse(
        List.Skip(
            List.Reverse(list), countOrCondition));

Module.SharePoint15.m

Contains the code for SharePoint15.Tables, SharePoint15.Contents and SharePoint15.Files. Anyone who has had to deal with fetching data from SharePoint into Power BI will recognize some of the details they had to think about when using the SharePoint connector. Moreover, there are some great examples of how to use OData.Feed and Web.Contents.

Module.SqlExpression.m

Contains details of the data type mapping between SQL Server and Power BI that is used in SqlExpression.SchemaFrom. An interesting observation in this module is that although you can write a switch statement using some list constructs, Microsoft kept this function very expressive, which means that you can see a dozen else ifs.

Module.Table.m

Probably one of the most useful modules of the lot, as everyone uses Tables to output the results of their data wrangling. This module includes the details of 78 Table-related functions. There are even some details about query folding. As expected, most of the Table functions are based on Record and List primitives.

Module.Text.m

The last of the lot. Although this module contains the details of a single public function, Text.Format, it also contains the details of a function which can be used for text parsing. Have a look at GetPlaceholders.

GetPlaceholders =(
        offset as number,
        openCode as text,
        closeCode as text,
        numberReference as logical
) as list => ...

I’ve added all of these modules to GitHub.


https://github.com/Hugoberry/PowerQueryModules

[Matrix] x [Matrix]

Although matrices are not first-class citizens in Power Query, quite often a business problem comes along which might be greatly simplified by thinking of it in matrix terms.

You can of course imitate matrices in Power BI by means of tables, two-dimensional lists or records, but this approach doesn’t come with any matrix-related functions. So, as always, you have to roll up your sleeves and roll out your own set of needed operations.

I would like to share with you an approach to doing matrix multiplication which can be translated to other algebra operations.

We’ll start by reviewing how matrix multiplication works from the diagram below.

Matrix multiplication diagram

If we break down the problem, we can think of the two matrices A and B, in M (Power Query) terms, as two tables where the rows of matrix A and the columns of matrix B have matching sizes. Then the matrix multiplication operation can be reduced to a problem of iterating through all row and column combinations and computing the dot product of equally sized vectors.

The dot product in M can be performed by using List.Accumulate, which aggregates the product of a List.Zip, which in turn pulls the respective elements of the two vectors into tuples. Here is the code:

dotProduct = (v1, v2) =>
    List.Accumulate(
        List.Zip({v1, v2}), 0, (agg, _) => agg + (_{1} * _{0}))

Next, we break down matrix A into rows – Table.ToRows(A) – and matrix B into columns – Table.ToColumns(B) – and iterate through all the combinations of row vectors and column vectors in order to accumulate the results into a nested list of dot products.

Check the code below for use of nested List.Accumulate.


List.Accumulate(Table.ToRows(A),
    {},
    (_, ai) => _ & { List.Accumulate(Table.ToColumns(B),
        {}, (_, bj) => _ & { dotProduct(ai, bj) })
    })

Putting the two operations together results in the following final function:
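
The published gist isn’t reproduced here; reconstructed from the two fragments above, it would look roughly like this:

MatrixMultiply = (A as table, B as table) as list =>
    let
        dotProduct = (v1, v2) =>
            List.Accumulate(List.Zip({v1, v2}), 0, (agg, _) => agg + (_{0} * _{1})),
        product = List.Accumulate(
            Table.ToRows(A),
            {},
            (state, ai) => state & {
                List.Accumulate(
                    Table.ToColumns(B),
                    {},
                    (inner, bj) => inner & { dotProduct(ai, bj) }) })
    in
        product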

How much data can fit in Power BI Desktop

I’ve heard this question being asked multiple times: in forums, in blog posts and by every single client that doesn’t trust the performance of the Power BI Desktop app. I’ve set myself the challenge of finding this out, while exploring the versatility of the Power Query M language.

One of the most powerful data structures that M has in its arsenal is the List. I’ll start by creating a simple list of numbers.


= {1..1000}

Let’s Apply this query and check the result in the Data Model.

table_list

This list looks like a table and acts like a table, so it must be a table. While I’m here I’ll change the Data Type of the Query1 column from Text to Whole Number. This way I’ll reduce the memory footprint for the next time I try to load as much data as possible into this single-column table.

Next step – construct the biggest list possible.


= {1..3000000000}

32bitlimit

And I’ve hit my first limit. Another try with 2³¹-1.


= {1..2147483647}

Success! But not for long, as once I apply the changes and wait for the Query Changes to be committed to the Data Model, at around 2,000,707,684 loaded records I was faced with another error.

2bn

I’ve tried to repeat the process with smaller lists and got my first success at 1,999,999,981 records.

big

Which produced a 29GB file.

filesize

That’s a lot of records, and just to add them to the Data Model I needed 55GB of memory.

resources

Not too bad for a desktop application.

What if I wanted to load lots of data that is not that unique? Say 2³²-2 ones?


List.Generate(() => [counter = -2147483647, uno = 1],
    each [counter] < 2147483647,
    each [counter = [counter] + 1, uno = 1],
    each [uno]
)


This query completed in no time, and with a very small memory footprint, as the entire msmdsrv.exe service stayed within the 1GB limit. The output saved on disk occupied 191KB. Go compression!

How about the width of the data? I used a different set of queries to find the limit.

I’ve hit gold with:


=#table(null,{{1..15986}})

Which resulted in a 167MB file.

And the imprecise errors that kept popping up were: