Power BI Best Practices, Guidelines, and How to Avoid Common Mistakes – A Beginner’s Guide

In our Power BI training in Sydney, Melbourne, and other Australian cities, we provide in-depth training on Power BI, covering basic to advanced content. Our blogs provide insights into the different components of the Power BI ecosystem, which will also help you a lot. In this blog we elaborate on some of the most common mistakes people make on their journey towards mastering Power BI. It also lists some best practices that learners can follow to become more proficient in working with the different components of the Power BI ecosystem. This blog combines BI concepts and Power BI tools.

Long-term perspective vs. ad hoc analysis

We have seen people working with Power BI without formal training and without a holistic view of the Power BI ecosystem. You can certainly go for self-learning, but there must be some sort of practical training on the complete ecosystem. Whenever you are preparing dashboards and doing data analysis, you should ask the following questions:
  • Who are the end users?
  • What do they already know, and what do they want to know?
  • What sort of decisions need to be made?
  • How frequently does the data change?
  • How often will the dashboard be used?
  • How will future changes be incorporated?
  • Do you have all the necessary information to do the analysis?
  • Have you shared the dashboard layout with the stakeholders?
If you are considering these key elements, you are taking a long-term perspective; otherwise you are just doing ad hoc analysis and will sooner or later run into trouble.

How to learn Power BI?

Power BI is a complete suite of tools, and to become proficient you need to learn all of them. The best approach is guided learning, where you learn, practice, and grasp every component of the Power BI ecosystem separately.

Getting data

Pulling in redundant data

One of the most common mistakes when getting data, particularly for data (fact) tables, is pulling in redundant data. If you are working with a small data model, you may not face any issues, because Power BI is quite good at managing, compressing, and storing data. But if you are working with data tables of millions of rows, redundant data will slow down the model and create problems when you need to add calculated columns, add more tabs to your file, or expand your analysis.

Best practices

You only need the foreign key columns in your data tables. After creating relationships with the respective dimension tables, you can pull the desired descriptive fields into your DAX measures, calculated columns, and dashboards. As a best practice, never import descriptive fields into data tables; they are unnecessary and redundant. Here is an example of redundant data: all the columns highlighted in blue are unnecessary and should be removed at the data source (not in Power Query, which will also create issues if you are working with tables of millions of rows).
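The star-schema idea above can be sketched in plain SQL. This is a minimal illustration, not the article's actual model: the table and column names (FactSales, DimProduct, ProductKey, and so on) are hypothetical, and Python's built-in sqlite3 module is used only to make the example runnable.

```python
import sqlite3

# Hypothetical star schema: the fact table keeps only keys and measures;
# descriptive fields live in the dimension table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE DimProduct (
    ProductKey INTEGER PRIMARY KEY,
    ProductName TEXT,
    Category TEXT
);
CREATE TABLE FactSales (
    SalesKey INTEGER PRIMARY KEY,
    ProductKey INTEGER REFERENCES DimProduct(ProductKey),
    Quantity INTEGER,
    Amount REAL
    -- No ProductName or Category here: repeating them on every
    -- sales row is exactly the redundancy described above.
);
INSERT INTO DimProduct VALUES (1, 'Bike', 'Outdoor'), (2, 'Helmet', 'Safety');
INSERT INTO FactSales VALUES (10, 1, 2, 500.0), (11, 2, 1, 45.0);
""")

# Descriptive fields are pulled in through the relationship (a join),
# not stored redundantly in the fact table.
rows = con.execute("""
    SELECT p.ProductName, SUM(f.Amount) AS Total
    FROM FactSales AS f
    JOIN DimProduct AS p ON p.ProductKey = f.ProductKey
    GROUP BY p.ProductName
    ORDER BY p.ProductName
""").fetchall()
print(rows)  # [('Bike', 500.0), ('Helmet', 45.0)]
```

In Power BI, the join above corresponds to the relationship between the fact and dimension tables: the model stores the key once per row, and descriptive fields are resolved through the relationship when needed.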

Extra applied steps

The rule of thumb in data transformations is:

“Never apply a step in Power Query, if you can do the same in your source data or in your SQL query.”

If you are familiar with SQL, you can easily understand what we are trying to explain. If not, just remember that any transformation, such as filtering a Sales data table down to the countries Australia, Canada, and China, should be done in the source data, not in Power Query.
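The country filter just described can be pushed into the source query itself, so that unwanted rows never reach the model. Here is a minimal sketch of that idea; the Sales table and its columns are hypothetical, and Python's built-in sqlite3 module stands in for the real database only so the example runs.

```python
import sqlite3

# Hypothetical Sales table at the source.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sales (Country TEXT, Amount REAL)")
con.executemany("INSERT INTO Sales VALUES (?, ?)", [
    ("Australia", 100.0), ("Canada", 80.0),
    ("China", 60.0), ("Germany", 40.0),
])

# Instead of importing the whole table and then filtering in Power Query,
# the WHERE clause in the source query does the filtering, so only the
# needed rows are ever transferred into the data model.
filtered = con.execute("""
    SELECT Country, Amount
    FROM Sales
    WHERE Country IN ('Australia', 'Canada', 'China')
    ORDER BY Country
""").fetchall()
print(filtered)  # [('Australia', 100.0), ('Canada', 80.0), ('China', 60.0)]
```

The same principle applies whatever the source is: each filter or transformation done at the source is one less applied step in Power Query.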

As we mentioned before, if you are working with small data, applying twenty, thirty, or forty steps may not bother you; but when your model contains data tables with millions of rows, this will certainly cause headaches.

Here is a very bad example of extra applied steps in Power Query. You should avoid such a long list of transformations, particularly when working with large tables.


The focus of this blog has been to outline some of the common mistakes people make while working in Power BI, and the scenarios described are based on our real experience. Following the best practices outlined here helps you develop clean, efficient data models that can support large data sets and provide fast, credible reporting.
