[sdiy] WAY OT: Last public post - US power failure
J. Larry Hendry
jlarryh at iquest.net
Mon Sep 8 04:55:27 CEST 2003
Since there were some very good questions posted publicly, I will address
some of them, then I promise to take everything else private. Last warning
= long winded.
> ----- Original Message -----
> From: Ian Fritz <ijfritz at earthlink.net>
> It sounds like what you are saying is that the outage could have been due
entirely to natural dynamic instabilities inherent to the system. I've been
wondering lately if this would be possible. With such a large coupled
non-linear system it seems to me that it would be quite difficult to ensure
stability under all possible perturbations.
Normally, this system is quite stable. In fact, utilities and security
coordinators are constantly running real-time contingency analysis, looking
for anything that might be troublesome if it were to trip out of service.
Utilities are required not to be "out of contingency compliance." In other
words, we have to already know the anticipated results of any single event
AND be able to tolerate that event without loss of load. The trouble comes
when you start looking at all the possible combinations of two. That is a
difficult analysis of large proportions. That is where real-time analysis
comes into play.
You are usually 100% prepared for the first thing that happens, and
sometimes the second. But, at some point, an event occurs for which you had
not planned beforehand. Then, you must rely on your on-line analysis to
determine steps to take to remove the system from an undesirable condition.
All tools must be in place and functioning when you get to this point or you
are sunk.
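To give a feel for the numbers (mine are made up, purely to show the
scaling), here is a little Python sketch of why checking every single
outage is routine but checking every pair gets out of hand:

def contingency_counts(n_elements):
    n1 = n_elements                           # single-element (N-1) outages
    n2 = n_elements * (n_elements - 1) // 2   # all two-element combinations
    return n1, n2

for n in (100, 1000, 10000):
    n1, n2 = contingency_counts(n)
    print(f"{n} elements: {n1} single outages, {n2} outage pairs")

That is why you can be fully studied for any one event, but have to lean on
real-time tools once a second one lands on top of it.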
> From: Ian Fritz <ijfritz at earthlink.net>
> The news that has been getting press attention (what little of it there
is) appears to be focused on human error -- operators not contacting each
other properly because of malfunctioning computers was the last I read.
I would say that is fairly accurate reporting. We see that some of the
SCADA (supervisory control and data acquisition) equipment may have been off
line at the time of the incident. Operators may have been partially in the
dark (pun intended). Plus, First Energy is a very new member of the
transmission security coordinator, the Midwest System Operator. For years,
transmission security coordination was the responsibility of ECAR in this
area (during the days of the regulated wholesale market). The entire industry
is going through some growing pains as once fully integrated utilities are
broken into competitive generators, transmission companies, and distribution
operating companies. While they may still be under one roof, communications
have changed greatly. For example, I worked 12 years in the operations
centers where I am employed. At that time, we dispatched and marketed
generation and also operated the transmission system from a single location.
Coordination was easy as we had the final word on everything. Today, that
center controls transmission only. Generation sales are handled by another
group, and operation of the generators by yet another. We have a
code of ethics which prevents us from calling the guys that market power.
We cannot disclose any transmission information to them that might give them
a competitive advantage over other generation providers -- FERC rules, blame
the government. That information is put out on the web so everyone has the
same accessibility. We have to ask for generation changes for TLRs
(transmission loading relief) from the Midwest System Operator. So, I
would say that communications & malfunctioning equipment are indeed probable
contributors.
Now as to the human error, I would say yes. Given the correct amount of
information, once loads in the Cleveland area were causing an unreasonable
burden on the remainder of the system, that company has an obligation to get
rid of some load and possibly sever some connections to help localize the
outage. At some point, I believe some of that did occur. However, it
appears that not enough did as quickly as it should have. That could relate
back to bad data / communications. And, it is very difficult for an
operator to intentionally put people in the dark if he believes the problem
is solvable otherwise.
> From: Ian Fritz <ijfritz at earthlink.net>
> And all the fuss about "decaying infrastructure", etc! If what you seem
to be saying is true, then more reliable equipment in and of itself might
not help, unless the basic method of stabilization were also improved.
"Decaying infrastructure" is an some what ambiguous term. What is meant by
decaying? Bad condition? Maybe in some places, but certainly not in most.
What I get from that is "reduction in the amount of reserve margin" in the
system. Load and generation have been added to the grid faster than new
transmission capacity.
So, facilities are loaded more than they used to be. Therefore, weaknesses
are amplified. Lots of new facilities are not being added for a couple of
reasons. Utilities are trying to maximize existing investment so they can
both remain competitive in rates and keep Wall Street happy. And, people in
general have the NIMBY syndrome. No one wants a new transmission line in
their yard. So, the truth is there is some reduction in the reserve margins
in the infrastructure.
> From: Ian Fritz <ijfritz at earthlink.net>
> Could you tell me, how long might a 0.1% frequency deviation last?
Sometimes my VCO frequencies do not reproduce at the same temperature. And
my homemade counter works off the line frequency. (There -- now we're on
topic).
Frequency deviations of 1/100 and 2/100 of a cycle are not deviations; they
are normal operation. 59.98 to 60.02 Hz is commonplace, and frequency in
that range is considered normal. Sustained deviation outside of 59.95 to
60.05 Hz would be something we would be concerned about and trying to
resolve. The 60.2 Hz we saw during the outage is something I have seen only
this once in my
28 years.
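If you want to see how those bands relate to a line-referenced counter like
Ian's, here is a small Python sketch (the helper names are mine, and the
bands are just the limits quoted above written as code):

def classify_frequency(hz):
    if 59.98 <= hz <= 60.02:
        return "normal"
    if 59.95 <= hz <= 60.05:
        return "watch"       # still inside the sustained-deviation limits
    return "concern"         # e.g. the 60.2 Hz seen during the outage

def counter_error_ppm(hz, nominal=60.0):
    # a counter that uses the line as its timebase picks up an error
    # proportional to the frequency deviation
    return (hz - nominal) / nominal * 1e6

for f in (60.01, 60.04, 60.2):
    print(f, classify_frequency(f), round(counter_error_ppm(f), 1), "ppm")

Even that once-in-28-years 60.2 excursion is only about a 0.33% error on a
line-referenced counter.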
> ----- Original Message -----
> From: Magnus Danielson <cfmd at swipnet.se>
> Now, this jiggled my memory from way back in time. I actually have
education to cover that basic part of it (but not the overall system aspect
of it, which I assumed you could give an inside glimpse into - and did).
> I don't recall the symbols correctly, but it goes something like this:
>
> P + jQ = U * I * e^(i*phi)
>
> where phi is the phase between U and I.
> Q is thus the reactive effect (VARS in your lingo) when P is the normal
resistive effect (WATTS).
Yep, P and Q are used in the formulas. And, honestly, most of that math
is beyond me. I am not a BSEE. But, I spoke in terms of two
currents - in-phase and out-of-phase. In reality, there is one current,
which has a phase angle compared to the voltage and is the vector sum of
the two out-of-phase currents. So, if we had 4 MW of true power and 3 MVAR
of lagging reactive power, the total load would be 5 MVA. That is the
simple right-triangle formula, since we are dealing with 90 degrees:
sqrt(4^2 + 3^2) = 5.
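For anyone who wants to check that arithmetic, here it is as a few lines of
Python (purely illustrative):

import math

P = 4.0   # MW, true (in-phase) power
Q = 3.0   # MVAR, lagging reactive power

S = math.hypot(P, Q)                  # sqrt(P**2 + Q**2) = 5.0 MVA
phi = math.degrees(math.atan2(Q, P))  # angle between voltage and current
pf = P / S                            # power factor = cos(phi) = 0.8

print(f"S = {S} MVA, angle = {phi:.1f} deg, pf = {pf:.2f}")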
> ----- Original Message -----
> From: Magnus Danielson <cfmd at swipnet.se>
> I assume that you guys really get happy when all consumers stay all
resistive.
That would be the almost perfect customer. They don't exist. However, the
Japanese plants usually come pretty close. They do this with switched
capacitors at the incoming primary voltage service. The total of capacitors
would be enough to offset their lagging current loads. These capacitors
would be divided into increments that are automatically switched in and out
of service based on real-time monitoring of power factor. Steel plants with
arc furnaces sometimes employ static VAR compensators, which switch these
with high-voltage SCR devices in real time to prevent their arcs (control
faults) from inducing flicker into the system. The perfect customer would
have leading power ON peak and lagging power OFF peak.
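Just to illustrate how such switched banks might be sized and stepped (a
hypothetical sketch, not any real plant's control scheme):

import math

def required_mvar(p_mw, pf_measured, pf_target=1.0):
    # Q = P * tan(phi); the correction needed is the difference
    q_now = p_mw * math.tan(math.acos(pf_measured))
    q_goal = p_mw * math.tan(math.acos(pf_target))
    return q_now - q_goal

def stages_to_switch_in(q_needed_mvar, stage_mvar=0.5):
    return int(q_needed_mvar // stage_mvar)   # whole increments only

# the 4 MW load at 0.8 power factor from the earlier example
q = required_mvar(p_mw=4.0, pf_measured=0.8)
print(f"need {q:.2f} MVAR -> switch in {stages_to_switch_in(q)} x 0.5 MVAR stages")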
> ----- Original Message -----
> From: Magnus Danielson <cfmd at swipnet.se>
> Uhm... now... I deal with transmission lines of a completely different
frequency range, but it still sounds like a great idea to have impedance
matching here. In this case, the transmission line should be designed so
that its inductance and capacitance cancel out neatly, right?
I'd buy that power line!! Actually, they do not exist in the traditional
sense. The load of C is fixed by the construction. The load of L varies.
More current on the line increases the field. That represents an increase
in the lagging current load of the line. So, unloaded lines are highly
capacitive. Heavily loaded lines are very inductive. The point where you
cross over is called the surge impedance point of the line. We have to
compensate for that by switching in additional shunt or series capacitors or
by changing the excitation voltage on the generator field. Both are used
continually.
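A rough sketch of that crossover in Python, with made-up per-kilometer line
constants (not data for any real line):

import math

L_per_km = 0.9e-3   # H/km, assumed series inductance
C_per_km = 12e-9    # F/km, assumed shunt capacitance
V_kv = 345.0        # assumed line-to-line voltage, kV

# surge impedance: line length cancels out of the ratio
Zc = math.sqrt(L_per_km / C_per_km)          # ohms
# surge impedance loading: below this the line is net capacitive,
# above it the line is net inductive
sil_mw = (V_kv * 1e3) ** 2 / Zc / 1e6        # MW

print(f"Zc = {Zc:.0f} ohms, SIL = {sil_mw:.0f} MW")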
> ----- Original Message -----
> From: Magnus Danielson <cfmd at swipnet.se>
> Which is what I am thinking about. From an engineering perspective it
would be good to reduce the amount of VARS a transmission line helps to
"upset", so cutting it off from the system should cause less of a hassle.
Well, some of that is done. For example, the excess capacitance of 765 kV
lines is balanced by shunt inductors installed at one end of the line
(bigger than a house). But, there is no way to really compensate
automatically for transmission lines tripping out of service. Normally, one
line tripping out is not a huge concern.
> ----- Original Message -----
> From: Magnus Danielson <cfmd at swipnet.se>
> 1) more equipment in all kinds of places = expensive and 2) some
consumers are off on too much VARS to begin with and the power lines help
compensate that "for free".
Well, what is done is that compensation is scattered around close to the
loads it is intended to offset. So, outside a plant with a highly lagging
load you will find capacitors installed by the utility. This reduces the
VAR load on the utility's lines. Then, larger capacitor banks are scattered
around in transmission substations, controlled remotely from the control
centers and by automatic voltage sensing. Since voltage drops as the result
of lagging current, lower voltage limits are set to automatically switch in
more capacitors.
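The automatic voltage-sensing part could be sketched like this
(hypothetical thresholds and names, just to show the idea of a deadband
with lower and upper limits):

def banks_in_service(v_pu, banks_in, banks_total, low=0.97, high=1.03):
    if v_pu < low and banks_in < banks_total:
        return banks_in + 1   # voltage sagging: switch in another bank
    if v_pu > high and banks_in > 0:
        return banks_in - 1   # voltage high: switch one back out
    return banks_in           # inside the deadband: hold

banks = 2
for v in (0.96, 0.99, 1.05):
    banks = banks_in_service(v, banks, banks_total=4)
    print(f"V = {v} pu -> {banks} banks in service")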
> ----- Original Message -----
> From: Magnus Danielson <cfmd at swipnet.se>
> If a tree caused it, then it is due to bad maintenance. In Sweden (a
country with lots and lots of wood, like Vermont or something - I'm bad at
North American geographical details like that) all the big power lines are
well trimmed on a regular basis so that no tree should be able to reach the
power lines even by far. We do have power outages due to trees, almost every
winter, but then it is caused by heavy snow and on the minor distribution
lines where maintenance of the path has not been kept and where isolated
lines should have been installed.
I wish that was 100% true. But, we have too many lawyers in this country and
sometimes insufficient right-of-way agreements. Sometimes, we must fight
with a customer over trimming a tree (usually one with a good lawyer).
However, once they do contact the line and take it out, we can trim the tree
level with the ground. :)
> ----- Original Message -----
> From: Magnus Danielson <cfmd at swipnet.se>
> Isn't the problem that the system has been going more and more "on its
knees" in terms of lacking dynamics?
I would phrase it as less reserve margin than we used to have.
> ----- Original Message -----
> From: Magnus Danielson <cfmd at swipnet.se>
> That should be: 28 years of delivering hum to PA and studios!
Guilty as charged. :)
Larry - I'll shut up now.