Spotlighting Artificial Intelligence in the Energy Sector
Artificial Intelligence (AI) is everywhere. It recognizes our faces to unlock our smartphones, it helps us find our favorite products in online shops, and it is even in our washing machines! It is no surprise, then, that AI has also found its way into the energy sector – it can forecast electricity demand, consumption, and cost, detect problems in energy systems, and much more.
AI seems like a magical creature, but in reality, it is a product of human intelligence and lots and lots of data. AI experts combine mathematical, statistical, programming, analytical, and many other skills to create AI models that can perform intelligent tasks. This is a complex process, and two key ingredients are necessary for an AI model to be able to forecast phenomena of interest.
The model needs to:
1. Gain experience – through data and interactions, and
2. Know how to “think” and make decisions – which is defined by an algorithm.
At Comtrade Digital Services, we developed a solution for forecasting electricity demand. We worked on getting the data, developing an AI model that uses this data, and finally testing the whole solution.
Data – Obtain It, Prepare It
AI systems gain knowledge through data, so the data should be as good as possible in terms of volume, diversity, and quality. Simply put, the better the data, the better the predictive capability of the model. However, before we can feed the model with data, we have to take care of various aspects:
Identify data sources: First, one needs to identify the data sources that contain data relevant to the problem we are trying to solve. Besides identifying obviously important sources, such as historical values of the variable we want to predict, in this phase we also form hypotheses and make assumptions about additional data sources that could contain valuable information. The practical usefulness of that additional data is evaluated later, when we perform further analysis and observe our AI model in action.
Obtain & process the data: Next, we need to obtain access to the data sources, collect the data, and then parse, validate, transform, and store it. Since, in our case, the data came from various sources, it was also necessary to normalize it by transforming it to a common format in terms of time zones, data frequency, etc. Moreover, since real data is rarely ideal, we also take care of missing and anomalous data points. Data processing involves many steps, and one needs to be very careful when carrying them out: even the smallest mistake can shift or change the data so that it stops representing a realistic scenario, and consequently reduce the forecasting power of a model trained on such data.
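To give a flavor of this kind of normalization, here is a minimal sketch in Python with pandas – not our production pipeline; the function name, time zone, and sample values are all hypothetical. It converts a timestamp-indexed series to UTC, resamples it to a common hourly frequency, and fills only short gaps:

```python
import pandas as pd

def normalize_series(series: pd.Series, tz: str, freq: str = "1h") -> pd.Series:
    """Bring a timestamp-indexed series to UTC and a common frequency."""
    s = series.copy()
    s.index = s.index.tz_localize(tz)   # attach the source time zone
    s = s.tz_convert("UTC")             # normalize to UTC
    s = s.resample(freq).mean()         # common hourly frequency
    s = s.interpolate(limit=2)          # fill short gaps only, flag long ones
    return s

# Example: 15-minute readings from a Central European source, one missing value
raw = pd.Series(
    [100.0, 102.0, None, 101.0, 98.0, 97.0, 99.0, 100.0],
    index=pd.date_range("2023-01-02 00:00", periods=8, freq="15min"),
)
clean = normalize_series(raw, tz="Europe/Ljubljana")
```

Keeping such steps in small, testable functions makes it much easier to spot the subtle mistakes mentioned above before they reach the model.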
Understand your data and the domain: While working with data, it is necessary to explore and understand its specifics and to know how each transformation affects it. It is also essential to learn about the particular domain we are working in and to apply this valuable domain knowledge as much as possible. For instance, weather influences electricity demand, and this should be taken into account, for example, when selecting data sources. Only if we truly understand our data, the problem we are trying to solve, and the specifics of the domain can we apply the right techniques and prepare the data for the next step – the algorithm.
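As a small illustration of bringing such domain knowledge into the data, one could join a weather series onto the demand series and add simple calendar features. This is a hypothetical sketch with toy values, not our actual feature set:

```python
import pandas as pd

# Toy hourly demand and temperature series on a shared UTC index
idx = pd.date_range("2023-01-02", periods=4, freq="1h", tz="UTC")
demand = pd.DataFrame({"demand_mw": [500.0, 520.0, 540.0, 530.0]}, index=idx)
weather = pd.DataFrame({"temp_c": [-2.0, -1.5, -1.0, -0.5]}, index=idx)

# Join weather onto demand so the model can learn weather effects
features = demand.join(weather, how="left")

# Calendar features often help electricity-demand models
features["hour"] = features.index.hour
features["is_weekend"] = features.index.dayofweek >= 5
```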
Algorithm – the Brains of the Model
Besides the data, the algorithm plays a crucial role in creating a predictive solution. The algorithm defines how the model processes the data and makes predictions. In our experience, implementing an algorithm roughly involves the following aspects:
- Select a machine learning (ML) algorithm: First, one needs to select a suitable ML algorithm. Many different algorithms exist, but it takes considerable knowledge and experience to choose the best one (or several) for the problem at hand. The decision is based on the type of problem we are trying to solve, the characteristics of the data, the available resources, and our previous experience. In our project, we forecasted electricity demand on a market, and as the base ML algorithm we chose Random Forest due to its various advantages.
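To give a flavor of what such a model looks like in code, here is a toy sketch using scikit-learn and synthetic data – not our actual solution. A Random Forest is trained on lagged demand and an hour-of-day feature:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic hourly demand: a daily cycle around 500 MW plus noise
rng = np.random.default_rng(0)
hours = np.arange(24 * 30)  # 30 days of hourly points
demand = 500 + 50 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

# Features: demand 24 hours ago and the hour of day; target: current demand
X = np.column_stack([demand[:-24], hours[24:] % 24])
y = demand[24:]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
pred = model.predict(X[-24:])  # forecast the last day
```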
- Implement the custom code: Although a base implementation of Random Forest exists in ML libraries, it was not enough to simply call a library function, pass the data as a parameter, and observe the final predictions. It was necessary to implement additional, carefully designed custom functions for the specific problem at hand, which combine multiple models for predicting demand at different horizons, process and aggregate their outputs, predict peaks, etc.
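One way to sketch the per-horizon idea mentioned above is to train a separate model per forecast horizon and collect their outputs. This is purely illustrative – the function names, feature, and data are hypothetical, and our actual custom code is far richer:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_per_horizon(demand: np.ndarray, horizons=(1, 6, 24)) -> dict:
    """Train one Random Forest per forecast horizon (toy single feature)."""
    models = {}
    for h in horizons:
        X = demand[:-h].reshape(-1, 1)  # current demand as the only feature
        y = demand[h:]                  # demand h hours ahead as the target
        m = RandomForestRegressor(n_estimators=50, random_state=0)
        m.fit(X, y)
        models[h] = m
    return models

def forecast_all(models: dict, latest_value: float) -> dict:
    """Aggregate the per-horizon predictions into one dictionary."""
    x = np.array([[latest_value]])
    return {h: float(m.predict(x)[0]) for h, m in models.items()}

# Synthetic daily-cycle demand for demonstration
rng = np.random.default_rng(1)
demand = 500 + 50 * np.sin(2 * np.pi * np.arange(500) / 24) + rng.normal(0, 5, 500)
models = train_per_horizon(demand)
preds = forecast_all(models, demand[-1])
```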
- Again, understand your work: In this phase too, domain knowledge and problem understanding are essential to creating a solution that predicts the phenomena of interest as well as possible.
Don’t Forget to Test
Even if we carefully implement the code, patiently work with the data, incorporate expert knowledge into AI models, and truly understand every step on our way, one additional key ingredient is necessary – testing. We wrote unit tests, component tests, integration tests, and estimated the predictive power of the developed models through carefully selected and designed AI validation procedures. Quality is important to us and our goal is to deliver high-quality and robust AI solutions.
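For the model-validation part, a time-aware scheme illustrates the general idea: unlike random k-fold splitting, scikit-learn's TimeSeriesSplit always trains on the past and evaluates on the future, which is essential for forecasting. This is a sketch on synthetic data, not our actual validation procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

# Synthetic hourly demand with a daily cycle, plus lagged/calendar features
rng = np.random.default_rng(2)
hours = np.arange(24 * 60)
demand = 500 + 50 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)
X = np.column_stack([demand[:-24], hours[24:] % 24])
y = demand[24:]

# Each fold trains on an expanding window of past data, tests on what follows
errors = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=4).split(X):
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    errors.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))

mean_mae = float(np.mean(errors))  # average out-of-sample error in MW
```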
On the Whole…
Our journey hand in hand with artificial intelligence in the energy sector has been fantastic and inspiring – from collecting and processing the data to designing and integrating our predictive solution. When I think about it, I remember all the hard work, great ideas, and discussions that ended with a whiteboard full of text and diagrams, but also all the good times spent with the team and the happy faces when, after all that effort, our goals were reached.