The Northeast Blackout of 2003


◊ This is part of the ‘history’ series of articles ◊


The blackout in Ontario and the northeastern U.S., which occurred on Thursday, August 14, 2003 at 16:10 EDT, was the largest ever in North America. It affected a similar area to the 1965 blackout; however, due to population growth over the intervening 38 years, far more customers were impacted. The event was the end result of ineffective forestry maintenance compounded by computer application failures and human factors in Ohio.

Toronto skyline during the August 14, 2003 blackout (Credit: Camerafiend at English Wikipedia, CC BY-SA 3.0)

Control centre operators at Ontario’s IMO (now the IESO), Hydro One, major generating stations and large LDCs found themselves in the middle of something never experienced before. Control centre lights would flicker as standby generators kicked in to supply critical communication, control and data acquisition equipment. Thousands of alarms would stream into operating centres and phones would jam with critical communication between various control authorities across eastern Canada and the northeastern United States. The clock started ticking as critical infrastructure began to run on battery backup. This was just the start…

What follows are two summaries of what happened on that day based on the official published documents. The first is a high-level account and the second is a more detailed sequence-of-events report. I have included industry background information in the detailed report where some terms need context.

All times are in Eastern Daylight Time (EDT) 24 hour format.

No other power interruption event has had such a comprehensive public investigation.

Report references

  • A basic account of the blackout is posted on Wikipedia.
  • A technical analysis of the event, dated July 13, 2004, is available from NERC.
  • The IMO – Ontario August 14 Blackout Significant Restoration Milestones report of August 2003 is here.
  • A detailed account of the sequence of events, issued by the U.S.-Canada Power System Outage Task Force on September 12, 2003, is available from the Ontario archives here.
  • The Interim U.S.-Canada Power System Outage Task Force Report of November 2003 is here.
  • The IMO – Ontario (now the IESO) Restoration Evaluation Report of February 2004 is available from the Ontario Legislative Archives here.
  • The U.S.-Canada Power System Outage Task Force Final Report of April 2004 is available here.
  • The Northeast Power Coordinating Council Report of August 2004 is here.
  • The U.S.-Canada Power System Outage Task Force Final Report on the Implementation of the Task Force Recommendations, September 2006, is here.

An average August day that went bad

On August 14, 2003, a series of disturbances, computer failures and operator errors on a portion of the Ohio grid compounded to trigger a blackout in Ontario just after 16:10 EDT. The event resulted in the interruption of 61,800 MW of load and impacted an estimated 50 million people – over 9 million in Ontario. Power outages ranged from a few hours to over a week. Some parts of Ontario had rotating blackouts for up to two weeks.

Blackout Area – August 14, 2003, Blackout Final NERC Report

Following the incident, a joint Canada-U.S. task force was formed to perform a comprehensive investigation of the blackout, supported by a technical analysis from the North American Electric Reliability Council (NERC). The task force was chaired by U.S. Secretary of Energy Spencer Abraham and Canada’s Minister of Natural Resources, Herb Dhaliwal. Their final 238-page blackout report was issued in April 2004 and can be found here. The report identified the many causes of the blackout and made 46 recommendations to mitigate the impacts of future cascading blackouts.

Ontario was not to blame for the blackout, nor did it contribute to its extent.

The causes of the blackout are numerous and complex; however, according to the task force report, they trace back to FirstEnergy of Ohio, a privately owned utility. NERC concluded that several entities violated its policies and planning standards, contributing to the events that started the cascading blackout. The 2003 blackout was a slow-motion catastrophe that took about four hours to bring down a massive part of the northeastern grid.

The short story

For those who want a high-level understanding of what happened on August 14, 2003, here is a summary based on my interpretation of the event as experienced and reported. A more detailed account is provided later in the article; however, it may be too much information for many readers.

The blackout began with incidents in northern Ohio associated with computer application failures at MISO and FirstEnergy.

MISO is the Independent System Operator (ISO) and FirstEnergy is a utility with transmission and generation assets within the ISO’s jurisdiction. At around noon on August 14th, MISO had a computer application failure that left them unable to perform their normal system reliability assessment. Later in the afternoon, FirstEnergy also suffered application failures that left its operators without new alarms. The failures at both organizations were not recognized or dealt with in time to avoid the blackout on August 14.

In the early afternoon, equipment failures began in northern Ohio, starting with a generating station and a 345 kV line. The failures caused a redistribution of power flow, overloading parallel transmission circuits. As the remaining overhead lines heated up from the increased loading, they began to sag, reducing clearance to trees on the right-of-way. FirstEnergy was unaware of the situation because their alarm system wasn’t working, and MISO was unable to assess the increasing operational risk or take appropriate action to secure the system due to its own application failure.
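
To make the redistribution mechanism concrete, here is a minimal sketch using the DC power-flow approximation, in which parallel circuits share a transfer in inverse proportion to their reactance. The line names, reactances and flows below are made up for illustration; they are not values from the investigation reports.

```python
# Illustration only: how flow redistributes across parallel paths when one trips.
# All values are hypothetical; this is the DC approximation in which parallel
# circuits split a transfer in inverse proportion to their reactance.

def split_flow(total_mw, reactances):
    """Split a transfer across parallel circuits, inversely proportional to reactance."""
    admittances = [1.0 / x for x in reactances]
    total_y = sum(admittances)
    return [total_mw * y / total_y for y in admittances]

transfer = 1200.0                             # MW moving between two areas (hypothetical)
lines = {"A": 30.0, "B": 30.0, "C": 60.0}     # line reactances in ohms (hypothetical)

before = split_flow(transfer, list(lines.values()))
print(dict(zip(lines, [round(f) for f in before])))      # {'A': 480, 'B': 480, 'C': 240}

# Line A trips: the same transfer must now flow over B and C only.
after = split_flow(transfer, [lines["B"], lines["C"]])
print(dict(zip(["B", "C"], [round(f) for f in after])))  # {'B': 800, 'C': 400}
# B jumps from 480 MW to 800 MW; if that exceeds its rating it heats up,
# sags toward the trees below, and the cycle repeats.
```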

Eventually, an overloaded 345 kV line sagged enough to cause a flashover to the trees in the transmission line right-of-way. The short circuit from tree contact triggered protection devices, removing the line from service and further redistributing power flow. FirstEnergy remained unaware of the line fault because of their failed alarm system.

Additional 345 kV lines in FirstEnergy’s control area began tripping from flashovers as loads redistributed to the remaining 345 kV circuits and onto the 138 kV lines. The 138 kV system overloaded and collapsed due to the additional flow. The events altered the power flows throughout the northeast as the supply to Michigan from Ohio began to shut down.

Cascade Events – from U.S.-Canada Power System Outage Task Force: Yellow arrows represent the overall pattern of electricity flows. Black lines represent approximate points of separation between areas within the Eastern Interconnect. Gray shading represents areas affected by the blackout.

Power flow toward Michigan began to surge from Pennsylvania through New York, across the ties into Ontario and on into Michigan. As protection devices sensed these conditions, additional lines and generators tripped across the northeastern states. Power swings caused voltage and frequency problems that isolated multiple jurisdictions through protection trips, beginning at 16:06 EDT. In under seven minutes, generator and line trips left 50 million customers across the northeastern U.S. and Ontario in the dark.

The Reliability Coordinators (RCs) of the Northeast Power Coordinating Council (NPCC) – which includes Ontario – had no advance warning of problems evolving in the Eastern Interconnection beyond their boundaries. It wasn’t until 16:09 EDT, when abnormal power flows and frequency were detected, that the NPCC region RCs became aware of a problem, and by then the cascading events were already unfolding automatically.

Ontario lost some nuclear units for days due to poisoning out but was able to restore the Basic Minimum Power System by 5:20 EDT, Friday, August 15th – about 13 hours after the blackout began. It took several days to restore all customers.

The longer story

Ohio – the lead-up to the blackout, 12:15 EDT to 16:06 EDT, August 14, 2003

August 14 began as a typical summer day in Ohio. The grid was operating within normal limits. That began to change in the early afternoon, beginning with a software failure at 12:15 EDT and ending with a 345 kV line trip at 16:06 EDT, after which the last lights went out in the northern portion of the state. The following information is a summary from the U.S.-Canada Power System Outage Task Force Report issued in April 2004. The information presents software and grid events separately; however, they were occurring in the same window of time.

Ohio Software Events

There was no evidence of any cyber security breach or malicious activity that contributed to the 2003 blackout.

The following summary is for significant events at the Midwest Independent System Operator (MISO) and FirstEnergy.

MISO is an Independent System Operator (ISO) and Regional Transmission Organization (RTO) operating in Ohio. It is authorized by the Federal Energy Regulatory Commission (FERC) to implement applicable parts of the U.S. Energy Policy Act. They don’t own transmission assets or directly operate them. MISO is also the NERC reliability coordinator for FirstEnergy. ISOs and RTOs manage the reliability of the bulk electric system in real time and day-ahead for entities within their footprint.

Five RTOs/ISOs are within the area impacted by the blackout; however, it is the MISO software incidents that are most relevant in the hours leading up to the blackout. MISO is now known as the Midcontinent Independent System Operator.

FirstEnergy is an owner/operator of seven electric utilities. They own and operate generation and transmission assets. Utilities operate in a control area defined by their service territory. FirstEnergy operates a control area in northern Ohio overseen by MISO. Control area operators have primary responsibility for reliability according to NERC policy. They are subject to NERC and regional council standards for reliability.

The relationship between MISO and FirstEnergy is similar to the relationship between the Independent Electricity Market Operator (now the IESO) and Hydro One in Ontario.

Midwest Independent System Operator (MISO)

12:15 EDT

The first of several software failures occurred at MISO, which oversees the Ohio region. The failure compromised the state estimator, leaving the single-contingency reliability assessment unavailable.

——§——

A State Estimator is a sophisticated software application which takes measurements of power system quantities (telemetry) as input and provides an estimate of the power system state (calculated volts, amps, watts and VARs). It is used to confirm that the power system is operating in a secure state by simulating the system both at the present time and one step ahead for a particular network topology and loading condition. With the use of a state estimator and its associated contingency analysis software, system operators can review critical contingencies to determine whether each possible future state is within reliability limits.

————

The System Operator was left without situational awareness normally provided by their computer applications.
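
For readers who want a feel for how a state estimator works, here is a toy sketch of the weighted-least-squares idea, reduced to a linearized (DC) three-bus example. Everything in it is hypothetical; real estimators are full AC models with thousands of buses and dedicated bad-data detection.

```python
# Illustration only: a toy DC (linearized) state estimator for a 3-bus system.
# All numbers are hypothetical. Real estimators are AC, iterative and far larger;
# this just shows the weighted-least-squares idea described above.
import numpy as np

x = 0.1                         # per-unit reactance of each of the three lines
# Unknown state: voltage angles at buses 2 and 3 (bus 1 is the reference, angle 0).
# Measurements: MW flows on lines 1-2, 1-3 and 2-3 (per unit), with meter noise.
z = np.array([0.52, 0.98, 0.51])

# Measurement model z = H @ theta: each line flow is (angle_i - angle_j) / x.
H = np.array([[-1/x,    0],     # P12 = (0 - th2)/x
              [   0, -1/x],     # P13 = (0 - th3)/x
              [ 1/x, -1/x]])    # P23 = (th2 - th3)/x

W = np.diag([1.0, 1.0, 1.0])    # weights = inverse measurement error variances

# Weighted least squares estimate and its residuals.
theta = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimated angles (rad):", np.round(theta, 4))
print("measurement residuals :", np.round(z - H @ theta, 4))
# Stale breaker statuses or bad telemetry show up as large residuals or a
# solution mismatch, which is what kept MISO's estimator from resolving.
```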

12:40 EDT

The MISO Energy Management System engineer became aware that the state estimator was not resolving properly and disabled its 5-minute recurring automatic operation. MISO manually corrected the system status for an out-of-service line and the state estimator was able to resolve successfully.

13:30 EDT

The MISO engineer went for lunch without resetting the state estimator to automatic 5-minute operation.

14:44 EDT

MISO realized the state estimator wasn’t running and reactivated it. The state estimator could not resolve and showed a mismatch.

15:39 EDT

MISO corrected the data discrepancy in the state estimator and a valid solution was obtained. Unfortunately, the system collapse was already underway.

FirstEnergy

14:14 EDT

FirstEnergy’s control room lost alarm functionality due to a failure of its “Alarm and Event Processing Routine” (AEPR), a key software program, followed by the loss of some remote EMS consoles. The primary server that hosted the alarm function was lost, followed by the backup server.

14:45 EDT

All of the Energy Management System (EMS) main and backup server functions were lost, and no new alarms could be presented to operators. During this time, FirstEnergy’s SCADA system continued to operate, displaying real-time information and sharing data with other operating authorities in the region.

——§——

An Energy Management System (EMS) is a computer application used by electric utility dispatchers to monitor the real time performance of various elements of an electric system’s generation and transmission facilities.

SCADA is an acronym for Supervisory Control and Data Acquisition system. It is a remote control and telemetry system used to monitor and control the power system. It provides the data for the EMS.

————-

15:42 EDT

A FirstEnergy dispatcher realized that the AEPR was not working and informed technical support staff. By that time, the lights at the FirstEnergy control center had already flickered as they lost grid power and switched to emergency power supply.

Software system failures left operators without the necessary information and tools to help avoid the system collapse in Ohio.
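
The report does not describe the internals of FirstEnergy’s EMS, but the broader lesson is that the absence of alarms must itself raise an alarm. A minimal sketch of that idea, with hypothetical names and a hypothetical timeout, might look like this:

```python
# Illustration only: a heartbeat check that would flag a silently failed alarm
# processor. Hypothetical names and timeout; not FirstEnergy's actual EMS design.
import time

HEARTBEAT_TIMEOUT_S = 60        # how long silence is tolerated before escalating

class AlarmProcessorWatchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called by the alarm processor each time it completes a processing cycle."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Called periodically by an independent supervisor task."""
        silent_for = time.monotonic() - self.last_heartbeat
        if silent_for > HEARTBEAT_TIMEOUT_S:
            # The absence of alarms must itself raise an alarm.
            print(f"WATCHDOG: alarm processor silent for {silent_for:.0f} s")
            return False
        return True

watchdog = AlarmProcessorWatchdog()
# ... the alarm processor would call watchdog.heartbeat() on every cycle ...
time.sleep(1)
print(watchdog.check())   # True while heartbeats keep arriving; False once they stop
```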

Ohio Grid Events

The following events are a subset of the actual events which took place in the afternoon of August 14, 2003. A full accounting of every event is available in the Task Force Report of April 2004.

The event descriptions may include line and generator names for reference. The most important takeaways are:

  • August 14, 2003 was a normal summer day without any high risk conditions on the Ohio Grid.
  • Nothing should have tripped off that afternoon.
  • The first generator trip at 13:31 EDT did not put the grid at risk.
  • There wouldn’t have been any line trips if the trees on the rights-of-way had been trimmed to provide adequate clearance.
  • The first Dayton Power & Light 345 kV line trip at 14:02 EDT did not leave the grid in any immediate risk. There was time to act to prevent the blackout.
  • With every line trip, power redistributed to other lines, increasing loading on the remaining circuits.
  • Overloading eventually leads to line failures. By 16:06 EDT there had been so many line trips that all of the remaining circuits in northern Ohio were overloaded and the cascading outages began. That was the turning point of the event.
  • What happened after 16:06 EDT was an automatic cascade of failures triggered by protection schemes without sufficient time for operator intervention.

13:31 EDT

The grid events began with the trip of a generating unit, Eastlake #5 on the south shore of Lake Erie, while it was carrying 612 MW and 400 Mvar.

14:02 EDT

Dayton Power & Light’s Stuart-Atlanta 345 kV line trips due to tree contact. MISO is unable to determine what impact this event has on the system because their state estimator is down.

15:05:41 EDT

FirstEnergy’s Harding-Chamberlin 345 kV line trips due to tree contact while loaded at only 44% of its rating. FirstEnergy’s alarm handling system is down…

15:32:03 EDT

FirstEnergy’s Hanna-Juniper 345 kV line trips due to tree contact while loaded at 88% of its rating.

15:39 EDT

FirstEnergy’s 138 kV lines begin to fail. Over the next 3 minutes, 16 FirstEnergy 138 kV lines tripped. This was the beginning of the collapse of the 138 kV system.

15:41:35 EDT

FirstEnergy’s Star-South Canton 345 kV line trips due to tree contact while loaded at 93% of its rating. This, along with the Hanna-Juniper trip, pushed flow onto the 138 kV system and overloaded five of its lines.

15:56 EDT

PJM (the Pennsylvania-New Jersey-Maryland Interconnection, a neighbouring RTO) called FirstEnergy to report that Star-South Canton had tripped and that PJM thought FirstEnergy’s Sammis-Star line was loaded over its actual emergency limit. FirstEnergy could not confirm this overload.

16:05 EDT

A total of four 345 kV lines had failed and most of the 138 kV system had collapsed. The 345 kV line trips were short circuits from tree contact as conductors sagged due to heating under increased line load.

16:05:55 EDT

FirstEnergy’s Dale-West Canton 138 kV line trips. It was the most heavily overloaded line on FirstEnergy’s system, at 180% of its normal rating.

16:05:57 EDT

FirstEnergy’s Sammis-Star 345 kV line tripped out at above 120% of its rating. Unlike the previous four 345 kV lines, which tripped on short circuits to ground due to tree contacts, Sammis-Star tripped because its protective relays saw low apparent impedance (depressed voltage divided by abnormally high line current); in other words, the relay reacted as if the high flow were due to a short circuit.

This was the turning point – the cascading outages were now unavoidable.
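
The relay behaviour is easier to see with numbers. A distance (impedance) relay compares the apparent impedance it measures, roughly voltage squared divided by apparent power, against a reach setting; heavy flow at depressed voltage can fall inside that reach even with no fault present. The settings and operating points below are hypothetical, not the actual Sammis-Star values.

```python
# Illustration only: why heavy flow at depressed voltage can look like a fault to a
# distance relay. All values are hypothetical, not the actual Sammis-Star settings.
import math

ZONE_REACH_OHMS = 100.0          # hypothetical relay reach setting

def apparent_impedance_ohms(v_kv_ll, p_mw, q_mvar):
    """Apparent impedance magnitude seen by the relay: |Z| = |V|^2 / |S|."""
    return v_kv_ll ** 2 / math.hypot(p_mw, q_mvar)

# (kV, MW, Mvar): a normal operating point, then cascade conditions with
# depressed voltage and heavy overload.
for v_kv, p_mw, q_mvar in [(345, 800, 200), (310, 1400, 500)]:
    z = apparent_impedance_ohms(v_kv, p_mw, q_mvar)
    verdict = "inside reach -> relay trips" if z < ZONE_REACH_OHMS else "outside reach -> no trip"
    print(f"{v_kv} kV, {p_mw} MW, {q_mvar} Mvar: |Z| = {z:5.1f} ohms, {verdict}")
```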

Insufficient forestry management led to transmission lines contacting trees, multiple line trips and a cascading instability.

——§——

Conductors expand as they heat up. Heating may be due to ambient conditions, conducting current or both. The expansion of overhead line conductors increases sag and decreases ground clearance. Utilities must ensure that their high voltage overhead lines will have sufficient clearance from trees when lines are loaded on the hottest summer days to prevent short circuit flashovers.

The flashovers that occurred on August 14 were the result of overgrown trees. The line conductors did sag under load; however, the amount of sag was not excessive.

————
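
As a rough numerical illustration of the aside above: a warmer conductor is a longer conductor, and a longer conductor hangs lower in its span. The sketch below uses the simple parabolic sag approximation and ignores tension and elastic effects; the span, sag and temperature rise are made-up values, not data from the report.

```python
# Illustration only: how conductor heating increases sag (parabolic approximation,
# ignoring tension and elastic effects). All values are made up, not from the report.
import math

ALPHA = 19e-6                    # approx. thermal expansion of ACSR conductor, per deg C

def sag_m(span_m, conductor_len_m):
    """Parabolic sag of a conductor slightly longer than its span."""
    return math.sqrt(3 * span_m * (conductor_len_m - span_m) / 8)

def conductor_length_m(span_m, sag):
    """Inverse relation: conductor length for a given span and sag."""
    return span_m + 8 * sag ** 2 / (3 * span_m)

span = 300.0                                   # metres between towers (hypothetical)
length_cool = conductor_length_m(span, 8.0)    # 8 m of sag on a mild day
length_hot = length_cool * (1 + ALPHA * 40.0)  # conductor heats 40 deg C under heavy load

print(round(sag_m(span, length_cool), 2))      # 8.0 m of sag when cool
print(round(sag_m(span, length_hot), 2))       # ~9.5 m when hot: about 1.5 m less tree clearance
```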

Cascading events 16:06 EDT to 16:13 EDT – the spread beyond Ohio

The cascading outage spread through Ohio, Ontario, Michigan and New York until approximately 16:13 EDT. A devastating number of major events occurred in under seven minutes.

16:08:59 EDT and 16:09:08 EDT

Two additional 345 kV lines tripped, cutting off power flow to northern Ohio and eastern Michigan from southern and western Ohio.

16:09:23 EDT

The Kinder Morgan generating station in central Michigan shut down, dropping 200 MW of supply. Power swings began through New York and Ontario to Michigan to compensate for the line and generation loss.

16:10 EDT

A 345 kV line in northern Ohio along Lake Erie tripped.

16:10:04 EDT

Twenty generators began tripping off along Lake Erie, dropping 2,174 MW of supply over a forty second period. The loss of this generation increased the power flows into northern Ohio and eastern Michigan.

16:10:37 EDT

The 345 kV east-west Michigan paths disconnected leaving Michigan supplied by a single path in the north, the Ontario ties and northern Ohio.

16:10:38 EDT

Three events occur simultaneously:

    • A 345 kV line along Lake Erie trips, separating northern Ohio from Pennsylvania.
    • Michigan lost 1,265 MW of generation from the Midland Cogeneration Venture plant.
    • The transmission system in Michigan separates northwest of Detroit.

At this point, eastern Michigan and northern Ohio did not have sufficient generation to support the load. The only interconnections left to support the load were the Ontario – Michigan ties. With all of the transmission line trips along Lake Erie, the remaining supply path formed a giant counterclockwise power flow from Pennsylvania to New York, through Ontario into Michigan.

16:10:40 EDT

The power flow surge tripped four transmission lines in four seconds, disconnecting Pennsylvania from New York. Within the next two seconds, two more Ohio 345 kV lines and two generating stations went off-line. Within five seconds, another Michigan generator rated 820 MW tripped.

16:10:43 EDT

The Ontario – Michigan tie near Windsor was lost due to a line disconnection in Michigan.

16:10:45 EDT

The northern Ontario system broke apart when the 230 kV line between Wawa and Marathon tripped. A 500 kV tie between New York and New Jersey tripped, leaving the northeast portion of the grid isolated.

16:10:46 EDT

Over the next 9 seconds, New York splits east-to-west. New England (except southwestern Connecticut) and the Maritimes separate from New York and remain intact.

16:10:50 EDT

Ontario detected declining frequency and initiated automatic load shedding; nine 230 kV lines tripped, partially separating Ontario from New York. Radial feeds to New York at Niagara Falls and St. Lawrence (Saunders) continued to supply islanded loads.

16:10:56 EDT

Three transmission lines at Niagara Falls, Ontario, automatically reconnect to New York, and frequency drops low enough to shed 4,500 MW of Ontario load.

16:11:10 EDT

The three transmission lines at Niagara Falls disconnect again, leaving Ontario and New York separated.

16:11:22 EDT

A 345 kV line trips, limiting the supply to southwestern Connecticut from New York. Within the next 22 seconds, two 345 kV lines in southeastern New York trip, followed by an underwater 138 kV cable trip. Southwestern Connecticut is blacked out.

16:11:57 EDT

The remaining lines connecting Ontario to eastern Michigan trip, isolating the two areas.

16:11:58 EDT

The remaining Ontario system continues to decline in frequency, resulting in widespread line trips. Most of Ontario is now in a blackout.

Human factors

The human factors which contributed to the blackout include communication inadequacies, poor decision-making and lack of training. There are too many to list here; however, they are all identified in the U.S.-Canada Power System Outage Task Force Report: August 14th Blackout: Causes and Recommendations.

Ontario impact – on the short end of the stick

Ontario was particularly hard-hit by the disturbance as frequency and voltage fluctuations caused by power swings triggered protection trips of lines and generators. When the dust settled, nearly all of the load in Ontario east of Wawa was interrupted and 92 generating stations were tripped off-line. An additional 5 Hydro Quebec plants were tripped because they were operating isolated onto the Ontario system. There was no advance warning that anything abnormal was occurring outside Ontario.

The collapse sequence of Ontario’s system

16:06 EDT

Conditions were normal and within limits, with resources scheduled for the peak load forecast plus the necessary reserve capacity. Ontario demand was 24,050 MW, with imports of 2,300 MW from the adjacent jurisdictions of New York, Michigan, Quebec, Manitoba and Minnesota.

16:09 EDT

Power swings began which changed flow by 700 MW from New York through Ontario to Michigan. The power swing was unusual but not considered abnormal as long as the rest of the system conditions were normal. Approximately 90 seconds later, there were several more power swings ranging from 2,000 MW to 4,000 MW over a 12-second period on the same path from New York through Ontario to Michigan. These swings caused voltage and frequency variations that triggered power system protections, and the Ontario system began to shut down.

16:10:43 EDT

The Ontario – Michigan tie near Windsor was isolated by a line disconnection in Michigan.

16:10:45 EDT

Northwestern Ontario separated when the Wawa to Marathon 230 kV lines tripped, leaving it energized from the Manitoba and Minnesota ties.

16:10:50 EDT

The Ontario system separated from western New York at Niagara Falls and St. Lawrence ties.

16:10:56 EDT

Three transmission tie lines from western New York to Ontario at Niagara Falls reclose and Ontario loses 4,500 MW of load.

16:11:10 EDT

Three transmission tie lines from western New York to Ontario at Niagara Falls trip again and lock out.

16:11:57 EDT

The remaining tie between Ontario and eastern Michigan at Sarnia – Port Huron separated.

16:11:58 EDT

The remaining Ontario system continues to decline in frequency, resulting in widespread line trips. Most of Ontario is now in a blackout.

End of sequence of major events in the collapse of Ontario’s grid.

Load shedding dropped demand from 24,050 MW to 2,500 MW. Almost 90% of Ontario’s load was dropped in the first minute of the outage. The Ontario grid broke apart, leaving only pockets in Niagara, the shores of Lake Huron, Cornwall, small areas north of Timmins and near Deep River with power.
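
The automatic load shedding referred to above is a staged underfrequency scheme: relays distributed across the province drop predefined blocks of load as frequency falls through successive thresholds. The sketch below shows the general behaviour; the stage thresholds and block sizes are hypothetical, not Ontario’s actual settings.

```python
# Illustration only: staged underfrequency load shedding. Stage thresholds and
# block sizes here are hypothetical, not Ontario's actual settings.
STAGES = [          # (frequency threshold in Hz, fraction of remaining load to shed)
    (59.3, 0.10),
    (58.9, 0.10),
    (58.5, 0.10),
]

def shed_load(frequency_hz, connected_load_mw, tripped_stages):
    """Trip any stage whose threshold the falling frequency has crossed."""
    shed_mw = 0.0
    for i, (threshold, fraction) in enumerate(STAGES):
        if frequency_hz <= threshold and i not in tripped_stages:
            tripped_stages.add(i)
            shed_mw += fraction * connected_load_mw
    return shed_mw

tripped = set()
load = 24050.0                                 # MW connected before the disturbance
for f in (59.8, 59.2, 58.8, 58.3):             # hypothetical falling frequency trajectory
    dropped = shed_load(f, load, tripped)
    load -= dropped
    print(f"{f:.1f} Hz: shed {dropped:6.0f} MW, {load:7.0f} MW still connected")
```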

The nuclear fleet

At the time of the blackout, Ontario had 11 nuclear units on-line at three stations. Four Bruce B, four Darlington, and three Pickering B units were operating at capacity until the system disturbance caused the reactors to trip. As a consequence, seven units shut down and were unable to reconnect to the grid. Three units at Bruce B and one at Darlington were able to reconnect at 60% power – a feature only available at Bruce B and Darlington. The units remaining off-line poisoned out, meaning a reconnection would require days in accordance with nuclear operating protocols. Three units at Darlington were placed in a zero-power hot state, which requires about 2 days to restart. Three units at Pickering B and one unit at Bruce B were placed in a guaranteed shutdown state, which requires 6 days from the initial grid disconnection to reconnect. Units do not restart all at once but rather in sequence as determined by approved operating procedures.

Nuclear sites are a restoration priority as their plant systems operate on standby generators and battery backup until grid power is restored. Grid power was restored to the nuclear sites within a few hours of the start of the blackout. Within 5 hours the three Bruce units and one Darlington unit were back on-line. The last units to connect back on the grid were at Pickering on August 29th, more than two weeks after the blackout hit.

Restoration

Restoration of the Ontario grid was a massive undertaking with many constraints imposed on system operators. Switching operations and generator restarts had to follow a strict sequence and protocol in accordance with procedures which existed, but had never before been invoked on this scale.

Parts of Ontario’s restoration plan had been practiced through similar simulation exercises in 2001 and 2002.

Ontario had lost 92 generators on a system with hundreds of transmission-connected stations, hundreds of transmission lines and thousands of feeders. This is complicated. More than 2,000 circuit breakers needed to be opened immediately following the loss of grid potential. Restoration required balancing load with generation and synchronizing islanded portions of the grid while maintaining voltage and frequency within limits. In the first of many steps, generators were restarted and connected to transmission lines to begin the process of load restoration. A public appeal was issued for customers to minimize electricity use as they were restored, to avoid any further outages.
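
One of the constraints mentioned above, synchronizing islanded portions of the grid, is enforced by synchro-check logic at tie breakers: a breaker is only permitted to close when voltage magnitude, frequency and phase angle on its two sides are closely matched. The tolerances in this sketch are hypothetical, not the actual settings used in Ontario’s restoration.

```python
# Illustration only: the synchro-check condition that must be satisfied before a
# breaker closes to tie two energized islands together. Tolerances are hypothetical.
def ok_to_close(v1_pu, v2_pu, f1_hz, f2_hz, angle_diff_deg,
                max_dv_pu=0.05, max_df_hz=0.1, max_angle_deg=20.0):
    """Permit closing only if voltage, frequency and phase angle are closely matched."""
    return (abs(v1_pu - v2_pu) <= max_dv_pu
            and abs(f1_hz - f2_hz) <= max_df_hz
            and abs(angle_diff_deg) <= max_angle_deg)

# Island still recovering: frequency and angle too far apart, so the breaker stays open.
print(ok_to_close(0.97, 1.02, 59.6, 60.0, 35.0))   # False

# After generation and load are balanced on both sides, closing is permitted.
print(ok_to_close(1.00, 1.01, 59.95, 60.0, 8.0))   # True
```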

Restoration activities and sequence of event milestones

The following information was provided by the Ontario Independent Electricity Market Operator in their Significant Restoration Milestones report issued August 29, 2003 and supported by their Restoration Evaluation Report issued February 20, 2004.

Ontario had been left with several small pockets (islands) of generation intact at Saunders, Beck, Smoky Falls and Des Joachims. Northwestern Ontario (west of Wawa) had remained connected to Manitoba and Minnesota. The restoration would begin by establishing paths from each islanded area simultaneously, including restarting generation as it became available.

The paths included:

  1. from Niagara (Beck) to Bruce, Nanticoke and Burlington, then Burlington to Pickering.
  2. from the Cornwall area (St. Lawrence) to Lennox, Darlington and Pickering.
  3. from Chats Falls (Ottawa) to Pickering and the east/west parallel path.
  4. from northeastern Ontario, north of Timmins, to southern Ontario.

Restoration Paths – IMO August 2003 Blackout – Restoration Evaluation Report February 20, 2004

The energizing of lines out of Niagara began at 16:42 EDT with the objective of bringing back the Bruce nuclear site. Three Bruce units were back on line by 21:13 EDT. Grid potential was also restored to the Nanticoke generating station during the process of powering Bruce. Restoration was then implemented toward Toronto, the Pickering nuclear station, and the Lambton and TransAlta generating stations in Sarnia. Transmission bounded by London in the west and Toronto in the east was loaded to support the output of the Bruce nuclear units.

Restoration out of the Cornwall area began at 17:15 EDT by energizing the first circuit westward toward Lennox, Darlington and Pickering. Restoration from Cornwall to Ottawa was started at 18:40 EDT to restore critical telecommunication facilities. Generating units from Quebec were synchronized to the system at 20:17 EDT. The available Darlington unit was back on-line at 21:18 EDT. A link between the Cornwall area and Toronto was completed at 22:37 EDT, forming a loop around Lake Ontario.

Restoration from Chats Falls began with the restart of the generating station units from the Quebec grid at 17:15 EDT. The circuits toward Pickering were energized at 20:21 EDT. Pickering was connected to the Niagara path at 21:15 EDT. Chats Falls was connected to the remainder of the system early on August 15th.

Restoration from the Northeast began with the synchronization of some small generators south to Timmins by 19:41 EDT. After restoring Timmins area load, transmission was energized to Sudbury followed by east/west connections toward Ottawa and Wawa. The connection from northeast to southern Ontario was completed at 3:43 EDT, August 15th. The connection between the northwest and the rest of the province was completed at Wawa at 5:20 EDT, August 15th.

Ontario’s Basic Minimum Power System was established at 5:20 EDT, Friday, August 15th.

 

——§——

Grid and telecommunication infrastructure run on battery power during power outages. The industry standard for battery backup is 6 to 8 hours, after which restoration becomes much more problematic. Portable emergency generators need to be deployed unless transmission lines can be re-energized to power battery chargers within the time limits. Personnel need to be organized for deployment to critical sites for local equipment assessment and manual operations.

The longer an outage lasts, the more complicated restoration becomes. Outages lasting for more than a few hours can impact public telephones, digital communication, municipal water supply, fuel supply, transportation, refrigeration of perishables, food distribution, medical services and public safety.

————-

Restoration completed

The restoration of power in Ontario was deemed complete and the public appeal for load reduction was lifted on Friday, August 22nd at 20:00 EDT, eight days after the blackout began. The IESO Restoration Evaluation Report claimed most power was restored in Ontario within 30 hours; however, my personal recollection is that it took longer. A media report claimed it took 4 days for some customers to be restored.

 

——§——

Voluntary participation in reliability organizations in Ontario was established in 1968 as a result of the 1965 blackout. Based on recommendations from the 2003 blackout Canada-US task force, Canadian and US agencies worked together to create mandatory reliability standards. In 2007, Ontario made reliability standards mandatory and enforceable through changes to the Electricity Act. The changes included provisions for appropriate penalties for non-compliance.

The North American Electric Reliability Corporation (NERC) and the Northeast Power Coordinating Council (NPCC) are now recognized as Standards Authorities in Ontario and most of Canada.

————

The blackout follow-up for Canada

The blackout prompted a review of Canada’s emergency preparedness. The Minister of National Defence (MND) directed the Department of National Defence (DND) to review the first responses of the department and the Office of Critical Infrastructure Protection and Emergency Preparedness (OCIPEP). The review, documented in a final report issued September 17, 2003, resulted in improvements to emergency response processes.

Ontario underwent its own reliability assessment following the blackout and issued a Restoration Evaluation Report on February 20, 2004. Individual entities conducted internal reviews and implemented changes where required to minimize impact from future blackouts.

In September 2006, the joint Canada-US task force issued its final report on the implementation of the 2004 recommendations. Only two task force recommendations impacted Ontario, both related to nuclear: one concerned operator training associated with the use of adjuster rods, and the other the provision of backup generators at the Canadian Nuclear Safety Commission. Both were successfully implemented.

FirstEnergy – items of interest

Ohio-based FirstEnergy was identified as having caused the blackout. They were cited for several NERC violations but never penalized, because reliability standards were not mandatory and financial penalties were not enforceable.

FirstEnergy has a fascinating history which is available on Wikipedia.

  • In November 2016, FirstEnergy made the decision to exit the competitive power business, and become a fully regulated company.
  • On March 31, 2018, FirstEnergy Solutions filed for bankruptcy. In 2020, it emerged from bankruptcy, becoming Energy Harbor Corp.
  • In March 2018, FirstEnergy announced the planned closure of the Perry and Davis–Besse nuclear plants and the coal-fired Sammis plant in Ohio. In July 2019, a subsidy from the State of Ohio was approved to support the Perry and Davis–Besse nuclear plants, and FirstEnergy rescinded the closure notice for the three plants.
  • In July 2020, FirstEnergy was accused of bribery for funneling $60 million to state politicians. On July 22, 2021, FirstEnergy was fined $230 million for its actions.

The takeaway…

The blackout of 2003 was the result of a series of preventable, avoidable events. It is unfortunate that such a long list of circumstances could happen in a short period of time and impact millions of people across two provinces and eight states, with billions of dollars in economic costs. The lessons learned from the 1965 blackout and the voluntary reliability standards that were developed did not prevent the 2003 blackout… but they would have if they had been followed.

One of the biggest lessons learned from 2003 is that voluntary reliability practices don’t work. Goodwill and mutual interest are not sufficient motivation for entities to follow them. They must be mandatory. Sufficient penalties must be in place such that it makes business sense for entities to maintain their compliance.

The blackout showed how important forestry management, communications, personnel training, computer applications, IT support and proper maintenance are to reliability.

The benefits of integrated power systems can only be realized if the entities that are part of it adhere to the highest standards of reliability practices. Without each entity acting in the interconnected area’s best interests, integration is a liability, not a benefit.

Fortunately, mandatory reliability standards along with financial penalties for non-compliance are now in place across most of North America. It is much less likely that the blackout of 2003 will repeat. We rely on an audit process performed by NERC delegates to verify reliability compliance of applicable entities.

It took 38 years for a repeat of the 1965 blackout. The clock is counting down to the next one.

Derek

