1. BACKGROUND AND PROGRESS REQUIRED BEYOND THE STATE OF THE ART
The European Transmission Network (ETN) has evolved significantly in the way network operators work and cooperate with each other. Until the early 2000s, most electric utilities were vertically integrated from generation down to retail distribution. Each utility was responsible for balancing the power delivered to its clients by means of its own generation fleet. The interconnection of national networks essentially served to share generation reserve in case a generating unit tripped somewhere in the ETN. Since the unbundling of vertical utilities, several dramatic changes have occurred:
- The liberalization of the electricity market and the implementation of the Internal Electricity Market (IEM) have reshaped capacity allocation and congestion management (CACM) methods; these methods now exploit the total available cross-border transfer capacity and call for reinforcement of the transmission network. Up to 2000, cross-border flows were much lower and more stable than today: in case of major disturbances, the interconnections were intentionally tripped to prevent a possible blackout from spreading. The N-1 criterion for network security was managed at the national level. Nowadays, new tools are needed for improved TSO coordination, in organizations like PSC or CORESO.
- The integration of a large amount of intermittent renewables, either dispersed in the distribution system or located far away from the load centres, is a challenging issue for grid reliability and the provision of ancillary services. It calls for network reinforcement, protection plans, network congestion management, cross-border balancing, load scheduling, much more sophisticated grid operation and operational planning at the transmission level, and smarter distribution grids. Overall, the flexibility of the European network needs to be increased using new tools for improved daily operations, including close-to-real-time operations.
- New information and communication technologies, asset monitoring technologies and power electronics are now available to help the ETN become smarter. This requires new numerical models describing how generating units, wind farms and FACTS devices interface with the electric system, which in turn allows more data to be exchanged between interconnected control zones.
- For political and security-of-supply reasons, further extensions of the ETN are planned (Turkey) or contemplated (Russia, Mediterranean ring), which would make it the most extensive interconnected system in the world.
In order to address these major changes, new simulation techniques are needed in the domains of power system state estimation and transmission network time simulation. Yet very few innovations emerged in Europe over the decade 2000-2010, mainly for three reasons:
- the dramatic increase in computational power has given existing software tools a "free of charge" performance boost, so ambitious R&D on critical algorithmic issues has slowed down;
- European Transmission System Operators have devoted much of their R&D effort, with scarcer resources, to market design and market operations (for instance, tools for market coupling), owing to the unbundling of large utilities and their new role as regulated operators;
- the very same unbundling has pushed regulators to benchmark TSOs' activities and, in line with practice in the former bundled organizations, to restrain R&D investments significantly.
Breakthroughs are therefore expected in power system simulation along four directions to serve the ETN: state estimation, optimization algorithms, time domain simulation and power system component modelling.
Even though state estimation algorithms able to run on large-scale power systems are available, new developments are needed to cope with the multiple-TSO structure of the ETN. Ideally, each TSO should be able to build a real-time snapshot of the whole ETN, synchronized with its own local state estimation. A two-level state estimation taking into account the constraints of real-time data exchanges seems appropriate to meet this challenge. New developments in the handling of erroneous or missing data, especially data involving grid topology, are then expected; these would be particularly welcome for very large systems. Further developments are also expected regarding the incorporation of PMUs and WAMS, which will make it possible to perform state estimation at a higher rate and with greater accuracy.
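State estimation at each level typically rests on a weighted least-squares (WLS) fit of redundant measurements. As a minimal illustration (not the project's actual algorithm), the one-step linearized version can be sketched as follows; the matrices and measured values are invented stand-ins for a real network's measurement model:

```python
import numpy as np

# Minimal weighted-least-squares (WLS) state estimation sketch on a
# linearized (DC) measurement model: z = H x + e.  All numbers below are
# illustrative, not taken from any real network.
H = np.array([[1.0,  0.0],    # angle measurement at bus 1
              [0.0,  1.0],    # angle measurement at bus 2
              [1.0, -1.0]])   # flow measurement on the line between them
W = np.diag([1.0, 1.0, 2.0])       # weights = inverse measurement variances
z = np.array([0.10, 0.03, 0.08])   # measured values (per unit)

# Normal equations of the WLS problem: (H^T W H) x = H^T W z
G = H.T @ W @ H                    # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)

residuals = z - H @ x_hat          # basis for bad-data detection tests
```

The residual vector is what bad-data detection (e.g. a chi-square test on the weighted residuals) would operate on when handling erroneous measurements.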
Load Flow (LF) programs to compute the operating conditions of large power systems like the ETN have been in use for a long time, involving Newton-Raphson methods, possibly decoupled, and sparse linear algebra that best exploits the topological structure of power networks. One deficiency of such standard LF programs when computing a (near-) future operating point is the difficulty of incorporating the control philosophy of a specific power system (e.g. setting of injections, transformer taps, security rules, etc.). An Optimal Power Flow (OPF) is therefore needed, which would also determine the preventive or corrective actions that best meet some objective (or combination of objectives), under the equality constraints stemming from the network equations and the inequality constraints stemming from component operating limits. Available OPFs in 2010 were capable of optimizing large-scale power systems within acceptable computational budgets, using the most efficient algorithms derived from the "Interior Point" method. However, there are still limiting factors in the application of OPF, namely:
- the management of discrete variables (change of topology, transformer taps, switching of compensation devices, starting up of units, etc.);
- the implementation of post-contingency security constraints, which implies solving large systems of equations (security-constrained OPF).
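The Newton-Raphson iteration underlying standard LF programs can be illustrated on a toy two-bus system (a slack bus and one PQ bus connected by a lossless line); all numerical values below are invented for the sketch:

```python
import math

# Toy Newton-Raphson load flow: slack bus (V=1, angle=0) feeding a PQ bus
# through a lossless line of susceptance b = 1/x.  Unknowns: angle theta
# and voltage magnitude v at the PQ bus.  Values are illustrative only.
b = 10.0                        # line susceptance (per unit)
P_spec, Q_spec = -0.5, -0.2     # specified load injections at the PQ bus

theta, v = 0.0, 1.0             # flat start
for _ in range(20):
    # injected powers at the PQ bus as functions of the state (theta, v)
    P = b * v * math.sin(theta)
    Q = b * (v * v - v * math.cos(theta))
    dP, dQ = P_spec - P, Q_spec - Q            # mismatches
    if max(abs(dP), abs(dQ)) < 1e-10:
        break
    # Jacobian of [P, Q] with respect to [theta, v]
    J = [[b * v * math.cos(theta), b * math.sin(theta)],
         [b * v * math.sin(theta), b * (2 * v - math.cos(theta))]]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    # solve J * [dtheta, dv] = [dP, dQ] (Cramer's rule on the 2x2 system)
    theta += (dP * J[1][1] - dQ * J[0][1]) / det
    v += (dQ * J[0][0] - dP * J[1][0]) / det
```

Real LF programs solve the same kind of mismatch equations for thousands of buses, replacing the 2x2 solve with sparse LU factorization; an OPF wraps such equations in an optimization layer instead of a pure root-finding loop.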
New knowledge is needed to design the optimization methods best suited to very large systems like the ETN, together with the corresponding operating procedures and tools, and to address the following issues:
The electromechanical dynamics of a power system are very complex. Several physical mechanisms can propagate an initial incident towards a full collapse of the system (loss of synchronism, voltage or frequency collapse, cascade tripping, etc.). Moreover, the system behaviour mixes slow and fast phenomena which are oscillatory and often poorly damped, and the power system displays very non-linear behaviour and numerous discontinuities. Accurate simulation of such a complex system requires the numerical solution of a large set of differential-algebraic equations (more than 100,000). Implicit simultaneous methods with variable integration step size are the best solution today.
New knowledge is thus needed to address the size of the ETN (about 150,000 variables) and the increasing use of power electronics, with the ultimate objective of running a full-size ETN model in real time for coordinated dispatcher training.
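As a toy illustration of such implicit methods (with a fixed step size rather than the variable one used in production tools), the trapezoidal rule can be applied to the classical swing equation of a single machine against an infinite bus; all parameters are invented:

```python
import math

# Implicit (trapezoidal) integration of the classical swing equation:
#   d(delta)/dt = omega
#   M d(omega)/dt = Pm - Pmax*sin(delta) - D*omega
# Parameters are made up for the sketch; production simulators apply the
# same implicit scheme to the full network DAE system, with Newton (here
# a simple fixed-point iteration) solving the implicit equations per step.
M, D, Pmax = 0.1, 0.05, 1.5
Pm = 0.8

def f(delta, omega):
    return omega, (Pm - Pmax * math.sin(delta) - D * omega) / M

delta, omega = math.asin(Pm / Pmax), 0.0   # start at the equilibrium
Pm = 1.0            # disturbance: step change in mechanical power
h = 0.01            # fixed step here; real tools adapt h to the dynamics
for _ in range(500):                # simulate 5 s
    fd0, fo0 = f(delta, omega)
    d_new, o_new = delta, omega
    for _ in range(20):             # iterate on the implicit equations
        fd1, fo1 = f(d_new, o_new)
        d_new = delta + 0.5 * h * (fd0 + fd1)
        o_new = omega + 0.5 * h * (fo0 + fo1)
    delta, omega = d_new, o_new
```

The trapezoidal rule is A-stable, which is why it (and related implicit schemes) tolerates the mixture of fast and slow, poorly damped phenomena described above without the step-size collapse explicit methods would suffer.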
- Parallel computation: parallelizing the electromechanical model of the power system has so far been studied with poor success, for well-identified reasons. Nevertheless, the advent of multiprocessor computers with high-speed internal communication gives hope for successful fine-grain parallelization. Some options also remain unexplored, such as coarse-grain or "strategic" parallelization, component decoupling, and Jacobian matrix update strategies.
- Distributed calculation: preliminary investigations have shown some successful attempts to apply a waveform relaxation approach to medium-size power system dynamic simulations. This approach also has interesting characteristics that minimize data sharing amongst TSOs.
- Multirate algorithms: while the best available technology uses a variable integration step size, multirate algorithms could exploit the fact that, in a large power system, fast phenomena are often local: a longer step size could be used to simulate the parts of the system far from the perturbation. No major successful experiment in the power system domain has been identified, but interesting results exist in the domain of electronics simulation.
It is expected that the above three research tracks, or a combination of them, will help reach speed-up ratios of at least 10, bringing the ultimate target of running a full dynamic simulation of the ETN in real time within reach.
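The multirate idea in particular can be sketched on a contrived two-time-scale system: a "fast" local state is advanced with small micro steps while a "slow" remote state takes one macro step, held constant in between (explicit Euler is used only to keep the sketch short; real simulators would use implicit schemes):

```python
# Minimal multirate integration sketch on two weakly coupled first-order
# systems with very different time constants.  All values are invented.
H = 0.01          # macro step for the slow ("remote") subsystem
h = H / 10        # micro step for the fast ("local") subsystem
x_fast, x_slow = 1.0, 1.0

def f_fast(xf, xs):       # fast dynamics, time constant ~ 1 ms
    return -1000.0 * xf + 10.0 * xs

def f_slow(xs, xf):       # slow dynamics, time constant ~ 1 s
    return -1.0 * xs + 0.1 * xf

for _ in range(100):                  # 100 macro steps = 1 s simulated
    x_slow_frozen = x_slow            # hold the slow state over the macro step
    for _ in range(10):               # micro steps for the fast subsystem only
        x_fast += h * f_fast(x_fast, x_slow_frozen)
    x_slow += H * f_slow(x_slow, x_fast)
```

The saving comes from evaluating the slow (typically much larger) part of the model ten times less often; in a real grid model, "slow" would be the regions electrically far from the disturbance.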
Moreover, the need to run numerous simulations for security analysis in control centres raises equally challenging issues, for which some modelling simplifications are necessary: these lead to a smaller set of equations solved with larger time steps, and a smaller database of models to maintain.
Quasi Steady-State (QSS) simulation allows this reduction in complexity, at the expense of some reasonable approximations. This approach has been successfully implemented by a few authors and is used by a few TSO companies, mainly for voltage security and cascade line tripping assessment. New knowledge is needed to replace the classical static (LF-based) security calculations with QSS simulations, including:
- broadening the scope of phenomena that can be handled, and hence the field of application of QSS simulation (e.g. frequency dynamics);
- deriving the QSS models automatically from the full dynamic model;
- coupling full time-scale and QSS simulations;
- further speeding up QSS simulations by automatically simplifying remote areas (in a way totally transparent to the user).
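The core QSS approximation, replacing fast dynamics by their equilibrium equations so that only the slow states need to be integrated, can be sketched on a contrived two-time-scale toy system (all values invented):

```python
# Quasi steady-state (QSS) sketch: the fast state's derivative is set to
# zero, turning its differential equation into an algebraic one.
#   full model:  x_fast' = -1000*x_fast + 10*x_slow    (fast)
#                x_slow' = -x_slow + 0.1*x_fast        (slow)
#   QSS model:   0 = -1000*x_fast + 10*x_slow  =>  x_fast = 0.01*x_slow
H = 0.01
x_slow = 1.0
for _ in range(100):             # 1 s of simulated time, explicit Euler
    x_fast = 0.01 * x_slow       # algebraic (equilibrium) equation
    x_slow += H * (-x_slow + 0.1 * x_fast)
```

Because the fast equation is solved algebraically, the integrator can take steps sized for the slow dynamics only, which is where the large speed-ups of QSS simulation come from; the price is that genuinely fast transients (e.g. electromechanical oscillations) are invisible to the model.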
Power component modelling
The static models of the so-called "passive" components of the power system are well understood by the power system community and progressively accepted in the form of the emerging, American-originated Common Information Model (CIM) format. For dynamic models, nothing comparable exists. Even in production-grade simulation software, some model designs are fuzzy or their implementations contain bugs. The reason lies in the complexity of the domain, the specificity of power plants, and the cost of the model validation and parameter identification steps. On top of these permanent concerns, questions remain about how models for new power system technologies (electronic interfaces, FACTS, digital controllers and protections, active distribution networks) should be introduced in the full time-scale model of the power system. New knowledge is needed to develop a general methodology for modelling active system components, including protection and control, to build a quality system for validating models and data, and to specify or develop methods and tools for identifying model parameters, which in turn would improve the quality of simulation results.
Modelling flexibility and user friendliness
From the user viewpoint, two critical issues must be addressed:
It mainly depends upon the company's tradition and size. Some companies have developed their own suite of software applications and use it as an internal carrier of in-house expertise. Others use standard software tools and subcontract the customization and training services. In both cases the situation is frustrating because of the difficulty of retaining experts or gaining expertise. A better separation between modelling and algorithms must be achieved by using simulation meta-languages and powerful world-class solvers. These meta-languages are designed to be used by in-house experts, to make their modelling work easier. For dynamic simulation, user-defined modelling of controllers already exists; this concept ought to be extended to protections and automata.
- Are the software interfaces adapted to the practical use of the application?
In-house application development suffers from the huge cost of developing up-to-date Man-Machine Interfaces (MMI), and manufacturers make very little effort to improve the MMI of their on-line applications. Activities must be devoted to the design of an advanced MMI, which will be essential to exploit in real time the information made available by the ETN state estimator in national dispatching centres; this is a specific and completely new problem.
Power system reliability management means taking a sequence of decisions under uncertainty, through reliability assessment followed by reliability control. It aims at meeting a reliability criterion while minimising the socio-economic costs of doing so. The N-1 criterion is a principle historically (and still currently) followed by TSOs, according to which the system should be able to withstand at all times the loss of any one of its main elements (lines, transformers, generators, etc.) without significant degradation of service quality. Probabilistic reliability criteria, which could supplement, enhance or replace the purely preventive N-1 criterion, are to be studied to cope with the increasing uncertainty caused by intermittent generation.
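In practice the deterministic N-1 criterion amounts to a screening loop: simulate the loss of each main element in turn and check the post-contingency flows against operating limits. A minimal sketch, using a DC power flow on an invented three-bus triangle network:

```python
# Toy N-1 screening with a DC power flow on a 3-bus triangle network.
# Bus 0 is the slack; all line data and injections are illustrative only.
lines = {(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0}   # susceptance b = 1/x
limit = 0.9              # per-unit flow limit, same for every line here
P = {1: -0.6, 2: -0.6}   # loads at buses 1 and 2 (slack balances the rest)

def dc_flows(active_lines):
    # Build the reduced B matrix for buses 1 and 2 (slack angle fixed at 0).
    B = [[0.0, 0.0], [0.0, 0.0]]
    for (i, j), b in active_lines.items():
        for k in (i, j):
            if k > 0:
                B[k - 1][k - 1] += b
        if i > 0 and j > 0:
            B[i - 1][j - 1] -= b
            B[j - 1][i - 1] -= b
    # Solve B * theta = P for the two non-slack angles (Cramer's rule).
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    t1 = (P[1] * B[1][1] - P[2] * B[0][1]) / det
    t2 = (P[2] * B[0][0] - P[1] * B[1][0]) / det
    theta = {0: 0.0, 1: t1, 2: t2}
    return {l: b * (theta[l[0]] - theta[l[1]]) for l, b in active_lines.items()}

violations = []
for outage in lines:                      # one N-1 case per line
    remaining = {l: b for l, b in lines.items() if l != outage}
    for l, f in dc_flows(remaining).items():
        if abs(f) > limit:
            violations.append((outage, l))
```

A probabilistic criterion would weight each contingency by its probability and each violation by its consequence, rather than requiring every single N-1 case to be violation-free.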
2. OUTCOMES PROVIDED BY THE PROJECTS THAT ADDRESS THE CHALLENGES OF THE CLUSTER
The PEGASE project removes several algorithmic barriers related to the monitoring, simulation and optimization of very large power systems, thereby paving the way for improved network operation. Its four-year R&D activities, ending in June 2012, have produced new powerful algorithms and full-scale prototypes able to model the whole ETN for state estimation, dynamic security analysis, operation optimization, real-time simulation, and operator training.
The aim of this 4-year FP7 R&D project is to design and evaluate new power system reliability criteria to be used within the key activities of TSOs at different time scales: system development, asset management and power system operation. If successful, these criteria could be progressively implemented at the pan-European level, optimally balancing reliability and costs. Indeed, the increasing uncertainty caused by (among others) the massive renewable energy integration calls for the use of probabilistic reliability criteria to supplement and enhance the pure preventive N-1 criterion.
C. The iTesla project
The main purpose of iTesla (Innovative Tools for Electrical System Security within Large Areas) is to develop a new generation of security assessment tools able to support the operation of the pan-European grid in the coming years. The main goals of the project are, firstly, to develop such a toolbox and, secondly, to validate its different functionalities with datasets of increasing complexity and size.