Up to the mid-1930s, no wide-scale efforts had been made to protect busbars on a unit basis. There was also reluctance to arrange one protective equipment to cause simultaneous tripping of a large number of circuits.
Before the British Grid System was built in the early 1930s, many undertakings ran isolated from adjacent ones, and so the power available for busbar faults was often relatively small, and damage due to these faults was generally not extensive.
By the late 1930s, the British power systems were extensively interconnected, with a consequent increase in fault power.
A number of busbar faults occurred about this time, and because they were cleared from the system relatively slowly by overcurrent and earth-fault relays, considerable damage resulted, especially in indoor stations. These faults led to efforts to produce busbar protection in a form that could be widely applied without itself being a further hazard to the system.
Construction of the British 275 kV supergrid system began in about 1953, by which time standard principles of busbar protection had been adopted for outdoor switchgear at the higher voltages.
At this time the emphasis was placed on the avoidance of unwanted operations in order to give maximum security of supply.
With the introduction of 400 kV substations in the 1960s, the transient stability of generators became the more important consideration. The emphasis therefore shifted towards fast operating times and reliable operation for faults occurring within the protected zone, which in this case comprises the busbars and switchgear.
How is a busbar protection scheme analysed for a 400 kV one-and-a-half breaker arrangement?
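One widely used principle for busbar protection is biased (percentage) current differential: the CT currents of all circuits connected to a bus zone are summed, and the relay operates when the differential (spill) current exceeds a fraction of the total restraint current. In a one-and-a-half breaker diameter, each main bus forms its own zone fed by the bus-side breakers. The sketch below illustrates this trip criterion only; the function name, pickup value, and bias slope are illustrative assumptions, not settings from any particular relay.

```python
# Sketch of a biased (percentage) differential check for one busbar zone.
# Currents are signed positive when flowing INTO the zone, so by
# Kirchhoff's current law they sum to ~0 unless the fault is internal.
# Pickup and slope values below are illustrative, not recommended settings.

def differential_check(ct_currents_ka, pickup_ka=0.2, bias_slope=0.3):
    """Return True if the zone relay would trip.

    ct_currents_ka: per-breaker CT secondary currents referred to
    primary (kA), signed positive into the protected zone.
    """
    i_diff = abs(sum(ct_currents_ka))             # operating (spill) current
    i_bias = sum(abs(i) for i in ct_currents_ka)  # restraint current
    return i_diff > max(pickup_ka, bias_slope * i_bias)

# External (through) fault: currents cancel, relay restrains.
print(differential_check([10.0, -10.0, 0.5, -0.5]))   # False
# Internal busbar fault: all infeeds flow into the zone, relay trips.
print(differential_check([10.0, 8.0, 0.5]))           # True
```

The bias term reflects the security emphasis noted above for the earlier supergrid schemes: heavy through-fault current raises the restraint, so CT errors during external faults do not cause unwanted tripping, while a genuine in-zone fault still produces fast, reliable operation.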