Network controls for electrical systems

03/11/2013


Examples of network failures 

Northeastern United States, 1965: On Nov. 9, power was disrupted to parts of Ontario, Connecticut, Massachusetts, New Hampshire, Rhode Island, Vermont, New Jersey, and New York. More than 30 million people were without electricity for up to 12 hours, all due to an error in setting up a protective relay in the network. An interconnecting transmission line between the Niagara Falls (N.Y.) hydro generating station and Ontario tripped offline even though it was operating well below its rated capacity. The load served by this line transferred to another transmission line, causing it to overload and trip offline. This created a major power surge from the excess generation at the Sir Adam Beck generating station, which reversed the direction of power flow on other transmission lines crossing into New York. While the Robert Moses plant (the Niagara Falls generating station) isolated itself from the surges, the network disruptions cascaded throughout New York and the other Northeast states, with overloaded transmission lines and generating stations alternately tripping offline to protect themselves from damage due to the power surges. During this event, the network frequency dropped to 56 Hz and, 1 minute before the blackout, down to 51 Hz.
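
As a rough illustration of how a single mis-set protective relay can start such a cascade, the short Python sketch below trips a line that is still operating below its thermal rating because the relay's pickup setting was mistakenly placed below that rating. All numbers are hypothetical, chosen only to show the logic rather than reproduce the 1965 values.

from dataclasses import dataclass

@dataclass
class OvercurrentRelay:
    pickup_amps: float  # current at which the relay opens the breaker

    def should_trip(self, measured_amps: float) -> bool:
        return measured_amps >= self.pickup_amps

line_rating_amps = 1800.0                     # thermal rating of the line (hypothetical)
relay = OvercurrentRelay(pickup_amps=1200.0)  # pickup mis-set well below the rating
load_amps = 1400.0                            # normal loading, below the line's rating

if relay.should_trip(load_amps):
    print(f"Line trips at {load_amps:.0f} A even though its rating is {line_rating_amps:.0f} A")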

New York City, 1977: On July 13 and 14, major portions of New York City were without power during an outage initiated by a lightning strike on a substation in Westchester County. The network failed because a faulty breaker did not reclose after the strike. A subsequent strike caused two other breakers to trip, and only one of them reclosed. Due to control network problems, two major transmission lines were overloaded and local peaking generators failed to start, exacerbating the system instability. Within 1 hour of the initial lightning strike, all of the local generating capacity had failed and all inter-tie connections to areas outside the local service area had been severed. Less than 2 hours after this failure, power restoration procedures had begun, but procedural problems within the network slowed the restoration efforts for more than 24 hours.

Northeastern United States Blackout, 2003: Some 55 million people experienced a major power outage lasting up to 16 hours, all due to a failure in the network control and data collection system. The sequence of events:

  • One of the largest power outages in U.S. history began with a software bug in an energy management system that caused incorrect telemetry data to be reported to a power flow monitoring system. At 12:15 p.m., a system operator discovered and corrected the data problem but failed to restart the power flow monitoring system.
  • Following this minor monitoring failure, a generating station was taken offline by its operating personnel due to extensive maintenance problems.
  • A 345 kV transmission line in northeast Ohio made contact with a tree and tripped out of service.
  • The alarm system at the local utility control room failed and was not repaired.
  • One transmission line after another tripped offline as each was sequentially overloaded by the failing network. More than 20 major transmission lines rated from 138 to 345 kV were taken offline due to undervoltage and overcurrent. At this point (4:05:57 p.m.), shedding 1.5 GW of load in the Cleveland region could have averted the subsequent failures (see the load-shedding sketch after this list). However, the network operational data was inaccurate, unavailable, or not acted upon by the network control personnel in time.
  • Between 4:06 and 4:10 p.m., multiple transmission lines experienced undervoltage and overcurrent conditions and were tripped offline. Seconds later, multiple power generating stations along the East Coast were overloaded and tripped offline to protect themselves. At this point, the blackout was in full swing.
  • At 4:10:37, the Michigan power grids isolated themselves.
  • At 4:10:38, the Cleveland power grid isolated itself from Pennsylvania.
  • At 4:10:39, 3.7 GW of power flowed west along the Lake Erie shoreline toward southern Michigan and northern Ohio. This surge, 10 times the power flow of only 30 seconds earlier, caused a major voltage drop across the network.
  • At 4:10:40, the power flow flipped to a 2 GW surge eastward, a net swing of 5.7 GW, and then reversed again to the west, all in the span of 0.5 seconds.
  • Within 3 seconds of this event, many of the international connections failed. This resulted in one of the Ontario power stations tripping offline due to the unstable network conditions.
  • By 4:12:58, multiple areas on both sides of the United States-Canada border had disconnected from the grid and New Jersey had separated itself from the New York and Philadelphia power grids. This caused a cascade of generating station failures as the network conditions continued to deteriorate, both in New Jersey and westward.
  • At 4:13 p.m., 256 power stations were offline, with 85% of them failing after the grid separations occurred. Most of these trips were initiated by automatic protective controls in the power network or at the individual power stations.
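
As referenced in the 4:05:57 p.m. item above, operators can sometimes stop a cascade by shedding load so that the remaining transmission paths stay within their limits. The Python sketch below estimates that amount from a region's load, its local generation, and the import capability left after line trips; the figures are hypothetical and illustrate only the calculation, not the actual 2003 conditions.

def required_load_shed_gw(regional_load_gw, local_generation_gw, import_limit_gw):
    """Load that must be dropped so imports stay within the surviving lines' limit."""
    required_import_gw = regional_load_gw - local_generation_gw
    return max(0.0, required_import_gw - import_limit_gw)

# Hypothetical regional figures, not the actual 2003 Cleveland-area data:
shed_gw = required_load_shed_gw(regional_load_gw=6.0,
                                local_generation_gw=3.0,
                                import_limit_gw=1.5)
print(f"Shed at least {shed_gw:.1f} GW to keep the surviving lines within their ratings")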

Southwestern United States Blackout, 2011: On Sept. 8, a procedural error caused a widespread power outage covering southern California, western Arizona, and northwestern Mexico. While electrical utilities normally use network monitoring and computer modeling to determine when a single-point failure will cause a problem, this analysis was not performed before a capacitor bank switching operation that resulted in a transmission line trip. Because the network control operators did not know what problems this line failure could cause, a cascade lasting 11 minutes tripped off transmission lines, generating stations, and transformers. More than 7 million people ended up without electrical power for up to 14 hours.
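
The single-point-failure study the article refers to is commonly called an N-1 contingency analysis: each element is assumed to trip, and the resulting flows are checked against the ratings of what remains. A real study uses full power-flow models; the Python sketch below substitutes a crude even redistribution of the lost flow, with hypothetical line data, simply to show the shape of the check.

lines = {
    # name: (current_flow_mw, rating_mw) -- hypothetical values
    "Line A": (900.0, 1200.0),
    "Line B": (700.0, 1000.0),
    "Line C": (400.0, 600.0),
}

for outaged, (lost_flow_mw, _) in lines.items():
    survivors = {name: data for name, data in lines.items() if name != outaged}
    share_mw = lost_flow_mw / len(survivors)   # crude even split of the lost flow
    overloaded = [name for name, (flow_mw, rating_mw) in survivors.items()
                  if flow_mw + share_mw > rating_mw]
    if overloaded:
        print(f"Loss of {outaged} overloads {', '.join(overloaded)}")
    else:
        print(f"Loss of {outaged} leaves the network secure")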

India Blackout, 2012: Some 620 million people were without electrical power between July 29 and August 1. The network failure began with a 400 kV transmission line tripping out due to an overload. Rather than load being shed to protect this critical circuit, power stations began tripping offline. This event created a cascade effect, shutting down power to more than 300 million people for about 15 hours. On July 31, the network failed again due to a control relay malfunction near Agra. This very small event caused a complete shutdown of 38% of the generation capacity in India, affecting 22 of India's 28 states. This failure was the largest power outage ever experienced, and it would have been even larger had the five largest consumers of electricity in India not had their own off-grid power generators. Overall, there was 35 GW of private, off-grid generation capacity in place, with plans to implement an additional 33 GW. The areas that were not affected were not connected to the power network and therefore remained in service.

While the aforementioned examples all involve very large power networks, the same principles of operation and failure apply to small networks within a single building.

Definitions 

Spinning reserve: The additional electrical capacity available from generators that are already online and synchronized to the system, without station personnel having to start another generator.
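
A minimal Python sketch of that definition, using hypothetical unit data: spinning reserve is the unused capacity of generators that are already online and synchronized.

online_units = [
    # (rated_output_mw, current_output_mw) for units already online and synchronized
    (500.0, 430.0),
    (300.0, 300.0),   # fully loaded, so it contributes no reserve
    (150.0, 90.0),
]

spinning_reserve_mw = sum(rated - current for rated, current in online_units)
print(f"Spinning reserve: {spinning_reserve_mw:.0f} MW")   # 70 + 0 + 60 = 130 MW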

Strong ties: Ties between sections of a network that tend to remain connected and help keep an adjacent, disturbed network section operational.

Weak ties: Ties between sections of a network that tend to break the connection to a disturbed network section immediately, whenever even the smallest disruption occurs in that section.
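
The Python sketch below contrasts the two tie behaviors using a simple trip threshold: a weak tie opens for almost any disturbance in the neighboring section, while a strong tie rides through and keeps supporting it. The thresholds and severities are purely illustrative assumptions.

def tie_opens(disturbance_severity, trip_threshold):
    """Return True if the tie breaker opens for a disturbance of this severity."""
    return disturbance_severity >= trip_threshold

weak_tie_threshold = 0.05    # opens on even a small disturbance (per-unit, hypothetical)
strong_tie_threshold = 0.60  # stays closed and supports the disturbed section

for severity in (0.10, 0.45, 0.75):
    weak = "opens" if tie_opens(severity, weak_tie_threshold) else "holds"
    strong = "opens" if tie_opens(severity, strong_tie_threshold) else "holds"
    print(f"Disturbance {severity:.2f}: weak tie {weak}, strong tie {strong}")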


Kenneth L. Lovorn is president of Lovorn Engineering Assocs. He is a member of the Consulting-Specifying Engineer editorial advisory board.

