Causality 35: San Bruno

27 July, 2020


A modification made in 1956 to a pipeline built in the 1940s would ultimately fail in 2010, costing 8 people their lives. We look at what went wrong with PG&E's gas pipeline in San Bruno, California.

Chain of events. Cause and effect. We analyze what went right, and what went wrong, as we discover that many outcomes can be predicted, planned for and even prevented. I'm John Chidgey, and this is Causality. Causality is supported by you: our listeners. If you'd like to support the show you can by becoming a patron. Patron subscribers have access to early release, high quality ad-free episodes as well as bonus episodes and to Causality Explored. You can do this via Patreon, just visit to learn how you can help this show to continue to be made. Thank you. "San Bruno" At 6:11pm local daylight savings time on Thursday the 9th of September, 2010 in San Bruno, a small suburb south of San Francisco in California, an explosion occurred in Crestmoor, a residential neighborhood. The explosion registered at 1.1 on the Richter scale and it took place only 3.2km or 2mi from San Francisco International Airport. The Pacific Gas & Electric Company (PG&E) owned and operated the gas pipeline, along with multiple others, as well as electrical distribution throughout approximately two-thirds of northern California: from Bakersfield to near each of the Oregon, Nevada and Arizona state lines. PG&E operated approximately 2,900km or 1,800mi of high pressure gas pipelines at that time. Before we talk about the incident, let's talk a little bit about how gas pipelines work. Gas is moved through pipelines from a source to a destination: the source location the gas is coming from is referred to as "Upstream," and the destination it's going to is "Downstream." Gas is compressed at a compression station and that pressurized gas is then output into a pipeline. A pipeline is best thought of as similar to a balloon, but fit that balloon with a flow control valve on one side to counterbalance the person blowing air into the inlet of the balloon.
Like a balloon, if the outlet side is restricted too much whilst pressurised gas continues to enter, it will continue to expand until eventually it ruptures. Hence controlling the outlet flow is crucial to ensuring safe operation of a pipeline. Put too much gas in and it will over-pressure if you're not letting that same amount of gas out. Where pipelines join or split they use a connecting pipe section referred to as a "Header." To ensure each downstream section gets the correct flow, and that everything is set correctly, sometimes the back-pressure is monitored and alarmed to detect reverse flow, which is usually undesirable. A pipeline operator's role is to ensure that the flow rates and the pressure regulating or pressure control valves are set such that gas is directed in the right quantities to the right destinations, within the operational limits of the pipeline network. High pressure pipelines are normally made out of steel or concrete-lined steel, and whilst they are made of steel they still have a rupturing pressure limit referred to as the MaOP (or Maximum Allowable Operating Pressure). Steel fundamentally has 2 problems: corrosion and fracturing. Steel is made from iron mixed with carbon, which turns iron into a significantly stronger material by introducing barriers to dislocations attempting to traverse through the material's crystalline structure. When dislocations propagate and build over a long period of time they can lead to fracturing, an effect often referred to as fatigue stress. For a steel pipeline, as gas pressure increases and then decreases again, day-in, day-out, for a long period of time, those dislocations will inevitably lead to weaknesses in the steel pipeline, increasing the risk of a rupture.
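To make the balloon analogy concrete, here's a tiny toy model in Python, with entirely made-up numbers and units rather than real gas-flow physics, showing that a pipeline's pressure settles wherever inflow balances outflow, and climbs past its MaOP if the outlet is restricted too far:

```python
# Toy mass balance for a pipeline segment (illustrative numbers only, not
# a real gas-flow simulation): pressure rises whenever inflow exceeds the
# amount the outlet valve lets out.
def simulate_pressure(p0, inflow, outlet_fraction, steps, maop):
    """Each step adds `inflow` units of pressure-equivalent gas and vents
    `outlet_fraction` of the current pressure through the outlet valve.
    Returns (final_pressure, ruptured)."""
    p = p0
    for _ in range(steps):
        p = p + inflow - p * outlet_fraction
        if p > maop:
            return p, True          # exceeded MaOP: treat as a rupture
    return p, False

# Outlet open enough: pressure holds at inflow / outlet_fraction = 300.
print(simulate_pressure(300.0, 30.0, 0.1, 200, 400.0))   # (300.0, False)
# Outlet restricted too far: equilibrium would be 600, so MaOP is exceeded.
print(simulate_pressure(300.0, 30.0, 0.05, 200, 400.0))
```

The point of the sketch is simply that the steady-state pressure is set by the ratio of inflow to outlet opening, which is why restricting the outlet without reducing the inflow is what over-pressures the line.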
Hence a pipeline has a limited lifetime after which it cannot be used safely, and exceeding the MaOP even for a young steel pipeline might not lead to a rupture at that moment, but each pressure cycle over that limit will significantly shorten the usable life of that pipeline. Pipeline pressures are generally quoted as gauge pressure, indicated by a suffix on the unit of measure. This incident was in the United States, so all pressure measurements are in psi (pounds per square inch) and referenced to gauge, hence psi gauge (or psig). PG&E had identified that their existing UPS (Uninterruptable Power Supply) system, which was supplying backup for the 120V AC distribution panels throughout their network, needed to be systematically upgraded for a multitude of reasons, and in order to upgrade it, it would first be necessary to move individual loads off the UPS distribution board to individual standalone UPSs. The project consisted of multiple smaller outages where individual loads were transferred over one by one. Approval for the UPS project work was granted on the 27th of August, 2010. By the 9th of September, some loads had been transferred in the first stage of the project, and several more were planned for that day. The Milpitas Terminal plays a pivotal role in this incident. The impacted section of the Milpitas Terminal took 2 gas lines in, 300A and 131, into a common set of headers numbered 3, 4, 5, 6 and 7. From these headers, lines 300B and 327 feed downstream stations, with several connectors dropping the pressure to header 2, from which come 4 downstream lines: Line 100, Line 109, Line 101 and Line 132... which would be the line of the incident. The San Jose distribution feeder is also taken off from there.
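As a quick aside on units, converting between gauge and absolute pressure is just a matter of adding or subtracting the atmospheric reference. A small illustrative helper, assuming a standard sea-level atmosphere of 14.696 psi:

```python
ATM_PSI = 14.696  # standard atmospheric pressure in psi (assumed sea level)

def psig_to_psia(psig):
    """Gauge pressure is measured relative to atmospheric pressure;
    absolute pressure (psia) adds the atmospheric reference back in."""
    return psig + ATM_PSI

def psia_to_psig(psia):
    """The inverse: subtract the atmospheric reference."""
    return psia - ATM_PSI

# Line 132's eventual MaOP of 400 psig is roughly 414.7 psi absolute.
print(round(psig_to_psia(400), 1))   # 414.7
```

All the pressures quoted in this episode are psig, so the absolute values only matter if you're doing thermodynamic calculations on the gas itself.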
Each feeder has a Pressure Regulating Valve (or PRV) with a local independent controller, which in turn can be controlled externally in automatic from a PLC (Programmable Logic Controller). Each feeder also has a monitor valve, which is a fully independent, pneumatically operated pressure controller intended to be a final over-pressure protection for downstream pipes. Monitor valves are a more traditional method of control. They're commonly slow-acting and therefore have a high lag: these devices can sometimes take several minutes to react and compensate as gas pressures increase or decrease, with faster adjustments handled by the PLC-controlled PRVs. Finally there are many manually operated valves that also allow operators to direct gas flow in multiple directions, as well as fully isolate pipelines and headers, and provide a pathway for a full station bypass if desired. With that background, let's talk about the incident itself. At 2:46pm a gas control technician at the Milpitas Terminal obtained verbal work clearance from the SCADA center in San Francisco to initiate another UPS changeover as part of the broader project. At 3:36pm the Milpitas technician called the SCADA center and spoke with a second SCADA operator to confirm whether the valves on 2 of the incoming gas pipelines would close when power was withdrawn. Upon being told they would, the Milpitas technician then locked the valves open to prevent their accidental closure during the changeover. Shortly afterward the workers at the Milpitas terminal contacted the SCADA supervisor on shift to confirm the correct action to take for the PRVs, the Pressure Regulating Valves. The supervisor advised them to put the PRVs into manual control at a set position to prevent them from operating during the loss of power. At 4:03pm the Milpitas technician alerted the third SCADA operator that the installation of one of the smaller UPSs was about to begin.
The PRVs were put into manual control at the controller itself, with the intention that they would be returned to automatic once the work was completed. At 4:18pm the SCADA center lost SCADA indication for pressures, flows and valve positions at the Milpitas terminal. At 4:32pm the second SCADA operator contacted the Milpitas terminal requesting an update, as the works were taking much longer than expected. At 4:38pm the Milpitas technician contacted the SCADA center to verify the SCADA indications had now returned to normal, and upon verifying this, the PRV controllers were returned to automatic. Shortly after this the technicians began powering off several breakers, one of which was unidentified, and they noted that the local control panel was now no longer powered up. The technicians decided to leave the circuit breaker open and instead referred to electrical drawings to try and determine a different way they could power the local control panel. As they investigated further they noted that the power supply that powered some of the local control panel equipment was outputting an inconsistent voltage. The power supplies were 120V AC to 24V DC supplies operating as a redundant pair on the DC bus, and that DC bus also powered the 24V DC instruments: most notably the pressure transmitters that fed the pressure process signal into the PRV controllers. Since PRVs operate on pressure feedback in a closed loop, if the input signal reports that the pressure is very low, the valve will proportionally open to increase the pressure. When the pressure doesn't increase, the valve will open more and more until it is inevitably fully open. At 5:22pm the SCADA center alarm console displayed more than 60 alarms in the space of a few seconds, including controller error alarms, high differential pressure, gas backflow alarms and several High High pressure alarms from the Milpitas terminal.
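That runaway behaviour, a pressure controller fed by a dead transmitter reading zero, can be sketched in a few lines. This is an illustrative proportional-only controller with a made-up gain, not PG&E's actual control logic:

```python
def prv_output(setpoint_psig, measured_psig, gain=0.02):
    """Illustrative proportional-only PRV controller (the gain is made up).
    Returns a valve opening in [0.0, 1.0]; a pressure reading far below the
    setpoint produces a large error and drives the valve wide open."""
    error = setpoint_psig - measured_psig
    return max(0.0, min(1.0, gain * error))

# Healthy transmitter reading near the setpoint: a small, sensible opening.
print(prv_output(setpoint_psig=375, measured_psig=370))   # 0.1
# De-energised transmitter reading zero: the valve is driven fully open.
print(prv_output(setpoint_psig=375, measured_psig=0))     # 1.0
```

The controller isn't malfunctioning here; it's doing exactly what a closed loop should do given a measurement that says the pressure has collapsed. The fault is that the measurement itself is dead, which is why loss of instrument power drove every regulating valve fully open.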
At 5:25pm the third SCADA operator called the Milpitas technicians reporting the high pressure alarms, and at this time the Milpitas technician realised that the pressure regulating valve controller displays on the local control panel had lost all of their data. The SCADA consoles at Milpitas displayed constant pressures, a sign of "Report Last Known Value" as a result of the process value having been lost, with all the downstream line PRVs and most of the monitor and incoming line valves indicating that they were not open. At 5:28pm the Milpitas technician called the fourth SCADA operator to ask what pressure values were being displayed on the remote SCADA console and discovered that the remote console was not receiving any live values at all. The remote SCADA console indicated 458 psig at the Milpitas terminal mixer (that's the name for the common headers 3 through 7). The fourth SCADA operator concluded that the PRVs and/or the station bypass valves may have been opened, which was quickly confirmed by the technician on site. With all of the regulating valves fully open, only the monitor valves were limiting the pressure on the downstream pipelines. The monitor valves were set at 386 psig; however, due to control lag in the monitor valves' pneumatic controllers, the pressure in the lines leaving the Milpitas terminal peaked at 396 psig sometime between 5:22pm and 5:25pm, though with the lack of transmitter data it's impossible to be certain what the peak value was and exactly when it occurred. At 5:42pm the Milpitas technician called the SCADA center and reported to the third SCADA operator that the regulating valves on the primary feedline (300A) had opened fully, but the remote operator was unable to confirm live pressures or valve positions from the Milpitas terminal on his SCADA console.
The third operator then authorized the local technician to reduce the manual set point of the monitor valves from 386 psig to 370 psig to try to reduce the downstream line pressures. At 5:52pm the senior SCADA coordinator directed the fourth SCADA operator to lower the pressure set points for the gas supplying the Milpitas terminal from upstream, as this was still able to be remotely controlled. Shortly after, the site technician called the fourth SCADA operator to report that the monitor valves were shut on line 300B, with the fourth SCADA operator noting 500 psig on the common headers 3 through 7 and requesting the on-site technician place a pressure gauge on Line 132 to get a direct reading of the pressure leaving the terminal. There was no other way at this point to know for sure what that pressure actually was. Now at 6:02pm the fourth SCADA operator contacted a SCADA operator at the PG&E Brentwood facility stating, (and I quote): "We've over-pressured the whole peninsula." (end quote) At 6:04pm the senior SCADA coordinator informed the supervisor at the Milpitas terminal that the pressure on the incoming lines at the Milpitas terminal had been lowered to 370 psig as directed. Shortly after this the on-site technician reported to the third SCADA operator that the manually-read line pressure on Line 132 had reached 396 psig. At some time between 6:00pm and 6:11pm gas began leaking from a section of pipe in San Bruno until it found an ignition source, which it did at 6:11pm. At 6:11pm a 911 call reported an explosion from either a suspected gas station or an aircraft crash. Emergency responders were dispatched to the scene within 1 minute, the San Bruno Fire Department (SBFD) having a fire station only 300 yards from where the explosion occurred. At 6:11pm Line 132 upstream of Martin Station registered a pressure drop from the peak of 386 psig.
Within 2 minutes of the explosion, many residents had self-evacuated and fled their homes to escape the fireball. By 6:15pm Martin Station generated a low pressure alarm for Line 132, followed 20 seconds later by an alarm at 150 psig. At 6:16pm local police began diverting traffic away from the incident and assisting in organizing evacuations. At 6:18pm an off-duty PG&E employee notified the PG&E dispatch center in Concord, California, about an explosion that had occurred in the San Bruno area. This was the first formal report to PG&E about the incident, 7 minutes after it had occurred. At 6:23pm a dispatcher sent a gas service representative (GSR) working in Daly City, 8mi south of San Bruno, to confirm the report was accurate. At 6:24pm a PG&E supervisor saw the accident fire while driving home from work, after which he called the PG&E dispatch center, reported the fire and proceeded to the scene. At 6:27pm a PG&E dispatcher called the SCADA center and asked the third SCADA operator if they had observed any pressure drops at (and I quote) "...a station in (the) area...", stating there had been reports of a flame shooting up in the air accompanied by a sound similar to a jet engine, and that a PG&E supervisor and a GSR had been dispatched to the area, to which the third operator replied that the SCADA center had not received any calls about any incident. At 6:29pm the senior SCADA coordinator informed a SCADA coordinator at the Brentwood Facility that there had been a gas line break, and further stated that there had been an over-pressure event at the Milpitas terminal earlier. By 6:30pm some staff at the SCADA center realised that there had been a rupture along Line 132 in the San Bruno area; however the exact rupture location was still unknown. We're now 19 minutes post-explosion.
At 6:35pm an off-duty PG&E gas measurement and control mechanic, who was qualified to operate manual mainline valves, saw media reports about the fire, and as he suspected he might be able to assist, he notified the PG&E dispatch center and proceeded to the PG&E Colma Maintenance Yard to obtain a service truck and the tools necessary to shut off a mainline valve in case it was needed. A few minutes later a second mechanic called him to check on his well-being and agreed to meet at the Colma Yard, and by 6:36pm the Line 132 pressure at the Martin Station had dropped to only 50 psig. At 6:50pm both mechanics arrived at the Colma Yard, and after reviewing a map showing the location of pipeline valves in the area and using information from local news reports regarding the fireball, they determined the rupture was most likely in Line 132. They then called a supervisor to authorize isolating the line manually, which was approved after a brief telephone conversation. At about 7:06pm the two mechanics left the Colma Yard en route to the first mainline valve, at Mile Point (MP) 38.49, that they planned to close, and arrived at 7:20pm. At 7:27pm the supervisor who was with the two mechanics at MP38.49 requested the SCADA center close 2 valves at the Martin Station to reduce the flow of gas. By 7:29pm the fourth SCADA operator had remotely closed the 2 valves downstream of the rupture, stopping gas flow from north to south into Line 132. Gas was still flowing from south to north, however. By 7:30pm the 2 mechanics had manually closed the mainline valve at MP38.49 upstream of the rupture, stopping the gas flow into 132 from that location. By 7:42pm, now 91 minutes after the rupture, the gas flow had decreased enough that the intensity of the fire had reduced to a point where firefighters could approach the site and begin containment efforts. By 7:46pm the 2 mechanics had manually closed 2 additional valves downstream of the rupture, at MP40.05 and MP40.05-2, at Healey Station.
At this point, by closing these final 2 valves, the ruptured section of Line 132 was finally, fully isolated: 1hr and 35min after the loss of containment. Firefighters declared 75% of all active fires to be contained at about 4:24am the following morning. At 8:00pm on the 11th of September, 2010 all fires had finally been extinguished. As a result of the pipeline rupture and ensuing fires, 48 people sustained minor injuries, 10 people sustained serious injuries, and 8 people died. James Franco, aged 58: a resident of Glenview Drive. Jacqueline Greig, aged 44. Janessa Greig, aged 13. Elizabeth Torres, aged 81. Jessica Morales, aged 20. Lavonne Bullis, aged 87. Greg Bullis, aged 80. Will Bullis, aged 17. The rupture released an estimated 47.6 million standard cubic feet of natural gas, that's about 136GJ (Giga-Joules) by energy. It left a 72ft by 26ft crater, that's 22m by 8m, and ejected a 28ft long or 8.5m piece of pipe weighing 3,000lbs, that's 1.36 tonnes. 108 houses in the neighborhood were affected by the blast and fire, of which 53 had some damage, 17 had severe or moderate damage and 38 were completely destroyed. So what went wrong? There are seemingly 3 components to explore in what went wrong: the UPS works' impact on the pipeline pressure controllers, the design of the pipe itself, and the response to the incident. The response to the incident was slow and misinformed, and ultimately the meaningful isolations were all done manually. Why were they done manually? There weren't enough remotely controlled valves to allow for a better isolation of the pipe leg segment, so the speed at which PG&E could respond was severely hindered by the pipeline's design. The response problems, then, are just as much about the design. So let's start there. There are several aspects to consider. The mechanical aspects of the pipe itself: why did it fail?
The positioning of the isolation valves and their operability, and the path of the pipeline relative to where people had their houses built. So first things first, let's talk about the pipe itself. Line 132 was constructed in phases during 1944, with an extension in 1948 which included the segment that failed in this incident. Line 132 consists of 24", 30", 34" and 36" diameter segments of different steel grades with different longitudinal seam welds, including: Double Submerged Arc-Welded (DSAW), Electric Resistance Welded (ERW) and Seamless (SMLS). To clarify, a longitudinal weld is one that traverses the length of the pipe to join the bent piece of steel into a pipe, and a girth weld is one that joins 2 pieces of pipe together in a circle around the pipe. Pipe seams are commonly welded on both the inside and outside to improve their strength. It's a labor-intensive and tiring job, and to ensure the welds are done properly they are inspected after they're completed by different means, including X-rays and old-fashioned gas or water pressurization tests. Records from the 1948 project included logs for 209 radiographs, and between the two project stages about 10% of all welds were X-rayed. That's not very many, but keep in mind that in those days it was an expensive procedure and X-raying the entire pipe from end to end was unheard of. According to construction records, as the final check before introducing gas to the 30" portion of the line, it was checked for leaks and breaks, though the exact method used wasn't described in any detail; investigators assumed it consisted of applying a soap and water mixture to the girth welds as the line was pressurised. In 1956 however, PG&E were required to relocate 1,851ft, that's 564m, of Line 132 that had originally been installed in 1948, and this section contained the segment that failed.
They moved this because the existing elevation of Line 132 was incompatible with land grading that had been done at the time in connection with residential housing being built in a new residential area. This relocation re-routed Line 132 from the east side to the west side of Glenview Drive, and was done entirely in-house by PG&E. The investigation found that there were no design, material or construction specifications, no inspection records, no as-built drawings and no X-ray reports for any of the works conducted in the 1956 relocation. Line 132, pipe segment 180, was documented in the PG&E GIS (that's the Geographic Information System) as a 30" diameter Seamless Steel pipe. Its code was API 5L X42, also known as L290 pipe under ISO 3183. The wall thickness was supposedly 0.375", and it was installed in 1956 with the manufacturer noted as "N/A" on the data sheet. (Not very useful.) During the investigation the NTSB found that the data in the GIS had actually been imported in 1998 from a pipeline survey sheet that was created in 1977, and in digging further it was discovered that this section had no engineering records relating to its specification. The reference was taken from an accounting journal from 1956 as materials were transferred between different construction jobs, and not only that, the reference had been incorrectly entered in 1977 as X42 when the original voucher actually read X52, which is a completely different pipe. The investigators found that it was not possible for this pipe to have been X42, as seamless pipe wasn't and still isn't available in diameters larger than 26" and this was a 30" pipe segment, apparently. The investigators found that this pipe segment was actually made from 6 PUPS ("pup" being pipeline slang for a short length of pipe - a short pipe segment), and each of these 6 PUPS had been welded together to make one longer section. None of the pups were X52 pipe, or X42 pipe either, and they varied in length from 3.5ft to 4.7ft long each.
Laboratory testing of the composition and the material properties of some of the PUPS showed they didn't meet the PG&E 1948 material specifications or any industry pipeline material specifications for that time period. They also found several of the PUPS had partially welded longitudinal seams, with part of the seam unwelded on the inside, and several of the girth welds joining the PUPS together also contained welding defects. Interestingly, in 1995 PG&E replaced several sections of Line 132. Their replacements however ended about 565ft to the south and about 610ft to the north of Segment 180, as part of a multi-year replacement project to address seismic hazards. Prior to 1961 no regulations existed in California that governed natural gas pipeline safety. The voluntary national consensus standard ASME B31.1.8, 1955 edition, called for hydrostatic pressure testing of pipelines at 1.1x to 1.4x the MaOP. PG&E elected not to hydrostatically test Segment 180 of Line 132, and whether PG&E followed the other guidelines of the ASME standard can't be demonstrated because it was never documented. In 1961 the new General Order 112 required hydrostatic pressure testing of newly constructed pipelines in Class 3 areas at 1.5x the MaOP. However, as Line 132 was not technically a new pipeline, it was not required to be retrospectively tested. So let's talk a little bit more about MaOP and why it's important. When a pipe section is designed, a combination of its material properties, construction and size dictates the maximum operating pressure it can withstand. Depending on how much gas volume you want to transport over what distance, and taking into account how much Line Pack you want to keep to allow for supply upsets and demand surges, you pick the right kind of pipe at the design stage, before you put it in the ground.
Well, that's the idea at least, but in this case the pipeline wasn't really designed with that kind of methodology, and even if it was, the section they fitted in 1956 consisting of those PUPS certainly wasn't. In terms of testing requirements though, for pipelines constructed before 1970 where hydrostatic testing wasn't required, 49 CFR 192.619(a)(3), our old friend a "grandfather clause," allows the MaOP to be based on the highest actual operating pressure to which the segment was subjected during the 5 years preceding July the 1st, 1970. Unlike an MaOP that's based on a hydrostatic pressure test, the grandfather clause doesn't specify a minimum amount of time that the historical pressure needs to have been held before it could be considered a valid MaOP sample. The thinking is that if a pipe wasn't designed with a target MaOP in mind from the start, but has operated at a peak pressure during that 5 year period, it should be okay to handle that pressure again. Right? Well... not really... but that's the regulation. In 1987 the NTSB recommended eliminating the grandfather clause in Safety Recommendation P-87-9; however this was not taken up by the standards boards following public comments recommending against its elimination. The MaOP for Line 132 was established at 400 psig under the grandfather clause, based on records of the highest operating pressure on Line 132 during that 5 year period: it had reached 400 psig on October the 16th, 1968. Now PG&E had a subtly different interpretation. They considered the MaOP of a pipeline to be the maximum pressure at which a pipeline system, as distinguished from a pipeline segment, can operate. Hence the MaOP set by PG&E for Line 132 was only 375 psig. This is because Line 132, although it has an MaOP of 400 psig, is connected by cross-ties to Line 109, which has an MaOP of only 375 psig, and once those cross-ties are open, the MaOP of Line 132 becomes the MaOP of Line 109: 375 psig.
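The two ways of establishing an MaOP can be contrasted in a short sketch. The function names and the data shape are mine, but the rules follow the description above: grandfathering takes the highest recorded pressure in the 1965-1970 window with no proof test, while General Order 112 derived MaOP from a hydrostatic test at 1.5x:

```python
from datetime import date

def grandfathered_maop(pressure_history):
    """49 CFR 192.619(a)(3) sketch: for pre-1970 pipelines the MaOP may be
    the highest actual operating pressure seen in the 5 years before
    1 July 1970, with no proof test and no minimum hold time.
    `pressure_history` is a list of (iso_date_string, psig) samples; the
    data shape is an assumption for illustration."""
    window_start, window_end = date(1965, 7, 1), date(1970, 7, 1)
    in_window = [psig for d, psig in pressure_history
                 if window_start <= date.fromisoformat(d) < window_end]
    return max(in_window) if in_window else None

def hydrotested_maop(test_pressure_psig, factor=1.5):
    """By contrast, General Order 112 derived MaOP for new Class 3 pipelines
    from a hydrostatic test at 1.5x MaOP, i.e. MaOP = test pressure / 1.5."""
    return test_pressure_psig / factor

# Line 132 hit 400 psig on the 16th of October 1968, inside the window,
# so 400 psig became its grandfathered MaOP.
history = [("1966-03-01", 365), ("1968-10-16", 400), ("1969-12-01", 380)]
print(grandfathered_maop(history))   # 400
```

Note what the grandfathered path never does: it never demonstrates the pipe can hold any margin above its operating pressure, whereas a hydrotested line has, by definition, already survived 1.5x its MaOP.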
Now as to whether that MaOP was correctly set to match the pipeline's actual capacity: it was probably borderline for a well-constructed segment, but not for a poorly constructed one. As to whether the measured pressure immediately prior to the loss of containment exceeded PG&E's own MaOP for Line 132, there is no question: it was exceeded significantly, clocked at 398 psig against an MaOP set at 375 psig. The NTSB report's findings, well, there were 28 in total, but I'll just touch on the ones that I think are the most relevant. The pipe segment that failed did not conform to PG&E's own standards nor any other standards of the time, was unable to be traced to where it was manufactured or by whom, and would have failed accepted industry quality control and welding standards from the day it was constructed. The lack of quality control during the 1956 relocation project led to a defective pipe segment being installed that remained undetected until it failed 54 years later. The fracture of Line 132, Segment 180, originated in the partially welded longitudinal seam of a PUP: PUP number 1, which was progressively weakened due to ductile crack growth and fatigue stress. The SCADA system made tracing the source of the leak difficult, and a lack of remotely controllable valves and a lack of Automatic Line Break valves led to an excessive delay in isolating the segment, causing significantly greater damage. The report was also critical of the poor planning of the electrical UPS work at the Milpitas terminal, which I agree could have been done a bit more sensibly: like turning the breaker back on and making sure the critical instruments were always energised, with valve positions in manual override and such. The truth is though that those events were just the final straw. This pipe leg wasn't subjected to any pressure it hadn't seen previously, and it was still within 10% over the MaOP, which in a properly constructed pipeline wouldn't have led to a failure.
Had the UPS over-pressurisation not occurred, and had they not found the defective pipe segment in the next 12 to 24 months or so, it's very likely that it would have let go through normal operational spikes and surges. Specifically in the report though, it's worth noting this one (and I quote): "Inline inspection technology is not available for use in all currently operating gas transmission pipeline systems. Operators do not have the benefit of a uniquely effective assessment tool to identify and assess the threat from critical defects in their pipelines." (end quote) That is a really good point. When I first read about this incident I thought immediately, "Why didn't they just use an inspection PIG?" PIGs are Pipeline Inspection Gauges, and are basically a big cylinder (not quite a plug) that you load into a working pipeline at the upstream end, build back-pressure behind it in a launcher, which is essentially just a large chamber for loading and unloading a PIG, and then you open the valve and let it fly into the pipeline. It's pushed through the pipeline by the gas pressure behind it and it travels down to the other end of the pipeline, to the receiver. They're commonly used for cleaning things out of the pipe like Tri-Ethylene Glycol (TEG), or in other cases they have a 360° X-ray machine built into them. They literally take a continuous X-ray of the inside of the pipeline looking for defects in welds and pipework corrosion, everywhere from the launcher to the receiver, recording it all on board, and the engineers pore over the results when they're done at the end. Very, extremely cool stuff! Not cheap though. They're technically called an ILI, an In-Line Inspection PIG. Of course they only work if the pipeline is much the same size along its length, has PIG launchers and receivers, and stays within a maximum bend radius for all the sections you want the PIG to travel through. Too many corners and obviously it's not going to work.
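A rough sense of those pigging constraints, a uniform bore and gentle enough bends, can be captured in a simple feasibility check. The field names and the bend threshold here are illustrative assumptions, not an industry rule:

```python
def is_piggable(segments, min_bend_radius_diams=3.0):
    """Rough feasibility check for running an in-line inspection (ILI) PIG.
    `segments` is a list of dicts with 'diameter_in' and 'bend_radius_diams'
    (the tightest bend in that segment, in pipe diameters). The field names
    and the 3D bend threshold are illustrative assumptions; a real ILI
    assessment also covers launchers, receivers, valve bores and more."""
    diameters = {s["diameter_in"] for s in segments}
    if len(diameters) != 1:
        return False     # the PIG must fit the bore along the whole run
    return all(s["bend_radius_diams"] >= min_bend_radius_diams
               for s in segments)

# A uniform 30" run with gentle bends passes; a mixed-diameter run like
# Line 132's 24"/30"/34"/36" segments fails the uniform-bore check.
uniform = [{"diameter_in": 30, "bend_radius_diams": 5},
           {"diameter_in": 30, "bend_radius_diams": 4}]
mixed = [{"diameter_in": 24, "bend_radius_diams": 5},
         {"diameter_in": 30, "bend_radius_diams": 5}]
print(is_piggable(uniform), is_piggable(mixed))   # True False
```

Even this toy check shows why Line 132, with its mix of diameters, couldn't simply have a PIG run through it without first being upgraded.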
Line 132 didn't qualify for pigging, or certainly Segment 180 didn't. So leak detection then comes down to pressurizing and testing for a loss of pressure, or using tracer gas and gas sniffers at ground level. Not as accurate. Not as effective. So despite all of this, why would you build it in a residential neighborhood? Let's look at that. The California Public Utilities Commission, San Bruno Explosion Report of the Independent Review Panel (2011), stated (and I quote): "In 1956 the city converted PG&E's pipeline Right Of Way from an Easement to a Franchise Right as the community was growing and residential subdivisions were being laid throughout the area." (end quote) So what's an "Easement," at least in the context of California law? According to the California Land Title Association, an Easement is a "...real estate ownership right, an encumbrance on the title granted to an individual or entity to make a limited but typically indefinite use of the land of another..." It goes on to state that "...statutes may prohibit locating buildings or undertaking any construction over a pipeline easement." A Right Of Way, then, is "...the legal right established by usage or grant to pass along a specific route through grounds or property belonging to another." The term "Franchise Right" is interesting. In California, Public Utilities are defined as: "An entity which provides a service to the public for reasonable charges whether such charges be in the form of taxes or rates."
So utilities are granted statutory franchise rights, most often in the context of, let's say, a water main under a city street, and this provides a degree of legal protection against consequential costs for repairs, relocations, amongst other things. But in this context, PG&E were operating as a utility and were granted statutory franchise rights for that section of the pipeline, rather than the stricter condition of the easement, because under a local statute the easement would have prevented anyone from building on it. So in summary, the city re-classified the easement so they could utilise the former Right Of Way to build houses in 1957. To make way for that, they regraded a section of the ground to level it for those houses and additional streets, requiring a short section of the pipeline to be moved to a different spot, which was the work that PG&E did themselves in 1956. The suburb that was built there, Crestmoor, consists of 2,500 homes built between 1957 and 1959. That was a decade after the pipeline was originally finished and had been in operation, but only 1 year after Segment 180 was reinstalled, made from those 6 pieces of PUPS. So technically they didn't bring the pipeline to the neighborhood; the neighborhood came to the pipeline. Let's talk about the fallout. The cost to repair the pipeline was estimated to be $13.5M USD. About $1/4M USD worth of gas went up in smoke in 90min. The CPUC fined the utility $1.6B USD in connection with the disaster: the largest penalty ever levied against a California utility. PG&E paid over $550M USD to the victims of the explosion. PG&E were required to undertake pressure tests and integrity checks of their 1,800mi of transmission pipelines. In August 2016, a federal jury convicted PG&E on 5 charges of violating federal pipeline safety regulations and 1 charge of obstructing the NTSB investigation.
In January 2017, a federal judge sentenced PG&E for crimes linked to the explosion, imposing the maximum fine of $3M USD, mandatory oversight by an external monitor (not the PUC), and 5 years probation. No employees of PG&E or the PUC were individually sentenced. No jail time was served by anyone involved in this incident. In the years since, PG&E were also found guilty in the Camp Fire incident, and their equipment started multiple other wildfires. Facing subsequent fines and class actions amounting to $30B USD in liabilities, PG&E filed for bankruptcy, for the second time in its history, in January of 2019. On Wednesday the 1st of July, 2020, only a few weeks ago at the time of recording this, PG&E exited Chapter 11 and is once again out of bankruptcy. Having said that, 5 years after the incident PG&E had completed 673mi of pressure testing of their gas pipeline network. They also replaced 115mi of transmission pipeline, down-rated 12mi, retired 9mi, upgraded 202mi of pipeline to accept In-Line Inspection PIGs, and installed 217 automatic valves to improve pipe leg isolation. So what do we learn from all of this? Let's start with the operational issues on the day. Keep in mind they contributed as the trigger, but even if they hadn't, the pipe would have ruptured at some point in the future anyway unless they caught it first in a test. That said, the technicians at the terminal doing the UPS work shouldn't have been trying to figure out the consequences of turning power off on valves during the changeover. Why hadn't they pre-planned that? The other thing in the sequence of events that jumped out at me: why was there no dedicated operator as the point of contact? There was no SCADA operator as the sole point of contact for those works! They were constantly speaking to different operators through the entire process, which led to inconsistent answers and a lack of common understanding in the SCADA control room about what was actually happening.
It was just poorly coordinated overall. Even if they couldn't afford to assign an operator to those works entirely, they should have had at least 1 of the 4 SCADA operators as the point of contact for the works. Besides that though, engineering records are critical. Design details that can be traced and linked to inspection records. These are critical pieces of information that so many companies seem to treat as an afterthought. When operational companies execute works entirely in-house, the first things that seem to get cut are the engineering records. The design documents. External companies develop design documentation and keep test records as evidence or proof for progress claims as well as for legal reasons, but operational companies don't feel the need to. The records may not be needed for progress claims (because there aren't any: it's internal) but they still need to be kept for legal reasons. In the competition between BAU (Business As Usual) and project work, all too often corners get cut on design, and documentation on internal projects was clearly a corner that was cut in the 1956 work on Segment 180. Finally, isolation, or rather a lack of it. When the pipeline was built, remotely operated valves did exist, but they were very expensive to install and not very reliable, so manual valves were installed instead. Driving to the sites and manually closing the valves takes precious time that could have been saved if they'd had remote actuators fitted to those manual valves, hence automating them. It also seems unbelievable that the SCADA operators didn't realise there was a rupture for so long after it had occurred. The readings at Martin Station dropped so low, so quickly, they should have realised, but it was close to 20min before they did, and when they did they couldn't pinpoint where: just somewhere on the line. A bit vague.
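A rupture drops line pressure far faster than any normal demand swing, so even a crude rate-of-drop alarm on station pressure readings would flag it within a sample or two. Here's a minimal sketch of that idea; the thresholds, window and sample data are invented for illustration and bear no relation to PG&E's actual SCADA configuration:

```python
# Illustrative rate-of-drop alarm over time-stamped pressure samples.
# Thresholds and data are hypothetical, not from the real system.

def first_rupture_alarm(samples, window_s=60, max_drop_kpa=200):
    """samples: time-ordered list of (time_s, pressure_kpa).

    Returns the timestamp at which pressure first fell by more than
    max_drop_kpa within any window_s-second window, or None if the
    trend never exceeds normal operational variation."""
    for i, (t_i, p_i) in enumerate(samples):
        for t_j, p_j in samples[i + 1:]:
            if t_j - t_i > window_s:
                break  # outside the comparison window; try next start point
            if p_i - p_j > max_drop_kpa:
                return t_j  # drop is too steep to be normal demand
    return None

# Steady pressure, then a sudden collapse after a rupture at t=300s.
data = [(t, 2600) for t in range(0, 300, 30)] + \
       [(300, 2400), (330, 1900), (360, 1200)]
print(first_rupture_alarm(data))  # → 330
```

Real pipeline leak detection uses mass-balance and model-based methods rather than a bare threshold, but the point stands: a collapse of this magnitude is distinguishable from normal operation within a minute, not 20.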
So if they'd automated more of those mainline isolation valves they would have also fitted local upstream and downstream pressure monitoring at those valve locations (that's standard practice), and that would have made pin-pointing the loss of containment significantly faster. With automated valve actuators fitted to the manual valves they closed, and pressure transmitters fitted at those locations, they could have isolated that pipeleg in only a few minutes. Not 1-1/2hrs. Some countries around the world have legislation that restricts the size of gas pipelines that can be built in or near residential areas, but it's inconsistent between countries and even between states in the same country. In an environment where developers want to build where the flattest land is and where the rules have loopholes, there are pressures for local governments to rezone and reclassify vast tracts of land that probably shouldn't be reclassified, for all sorts of reasons. Easements are generally made easements for a good reason. Then again, the original mistake, the poorly manufactured pipe leg segment made from those 6 PUPS, was made before more than half of these victims were even born. Like the 737 MAX, grandfathering provisions just need to go away from so many regulations, including these ones. When you're designing a gas pipeline, you have to think about containment. They have a saying in the business: "Keep it in the pipes." That's your job. Because whatever's in that pipe, if it gets out it can kill people, hurt people, destroy property, cause environmental damage, and yeah, okay, there's money too, because you're going to lose money because your product is getting out...never gets to a customer, so you can't sell it. Also, if those customers are relying on that product for heating, sterilization in hospitals, cooking in restaurants, I mean you name it, there are lots of other outcomes as well.
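The localisation idea mentioned above, pressure transmitters at each mainline valve station bracketing the rupture between two stations, might look something like this in a stripped-down form. Station names, pressures and thresholds are all made up for illustration:

```python
# Simplified leak bracketing: with pressure readings at each valve
# station, the station pair with the largest abnormal inter-station
# pressure drop brackets the likely loss of containment.
# All names and figures here are illustrative.

def locate_leak(stations, expected_drop_kpa=50, margin_kpa=100):
    """stations: ordered list of (name, pressure_kpa), upstream to
    downstream. Returns the (upstream, downstream) pair whose drop
    most exceeds the normal frictional drop, or None if nothing
    exceeds the margin."""
    worst, worst_excess = None, 0.0
    for (up_name, up_p), (dn_name, dn_p) in zip(stations, stations[1:]):
        # Excess over the drop you'd expect from friction alone.
        excess = (up_p - dn_p) - expected_drop_kpa
        if excess > margin_kpa and excess > worst_excess:
            worst, worst_excess = (up_name, dn_name), excess
    return worst

readings = [("Station A", 2600), ("Valve 1", 2540),
            ("Valve 2", 1700), ("Station B", 1650)]
print(locate_leak(readings))  # → ('Valve 1', 'Valve 2')
```

Once the leak is bracketed between two automated valves, those two are the only ones you need to close, which is why automation plus local pressure monitoring turns a 1-1/2hr manual isolation into minutes.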
Gas pipelines can be built and operated safely if you do it correctly, but you have to inspect your pipes. You have to know their condition. That they're safe to continue to operate. So they couldn't use an inspection PIG in 2010, but they could have hydro-tested it. So why didn't they? It's money. You have to empty the pipe of the gas. Sometimes you need to purge it with an inert gas (some will debate that one). Fill it with water. Pressurise it and test it. If it fails you have to trace the leak, dig it up, patch it, fix it, refill the pipe, re-test the pipe, drain the pipe, dry the pipe, and then you can put gas back in it again. That whole process takes time, and all that time you're not moving gas, you're not making money. So if you don't mandate it, they won't do it. When you're an engineer building something that could be in operation for a very long time, a very, very long time, you have to make sure that it's designed to the required standard. That that design is documented. That it's built and tested against that standard, as best you can, from the right materials, and tested as thoroughly as you can. Because the mistakes made today will sometimes lie in wait for 50yrs before they take innocent people's lives. If you're enjoying Causality and you want to support the show you can by becoming a Patron. You can find details at causality, with a thank you to all of our Patrons and a special thank you to our Patreon Silver Producers Mitch Biegler, John Whitlow, Joseph Antonio, Kevin Koch, Oliver Steele and Shane O'Neill. And an extra special thank you to our Patron Gold Producer known only as 'r'. Patrons have access to early release, high quality ad-free episodes as well as bonus episodes and to Causality Explored. You can do this via Patreon; just visit to learn how you can help this show to continue to be made.
Of course there's lots of other ways to help like favouriting this episode in your podcast player app or sharing the episode or the show with your friends or via social. Some podcast players let you share audio clips of the episode so if you have a few favourite segments feel free to share them too. All these things help others discover the show and can make a big difference. Causality is heavily researched and links to all the materials used for the creation of this episode are contained in the show notes. You can find them in the text of the episode description, on your podcast player or on our website. You can follow me on the Fediverse @[email protected] on Twitter @johnchidgey or the network @Engineered_Net. This was Causality. I'm John Chidgey. Thanks so much for listening.
Duration: 49 minutes and 33 seconds

Show Notes

Episode Gold Producer: 'r'.
Episode Silver Producers: Mitch Biegler, John Whitlow, Joseph Antonio, Kevin Koch, Oliver Steele and Shane O'Neill.

With thanks to John for the topic suggestion.

Premium supporters have access to high-quality, early-released episodes with a full back-catalogue of previous episodes.


John Chidgey


John is an Electrical, Instrumentation and Control Systems Engineer, software developer, podcaster, vocal actor and runs TechDistortion and the Engineered Network. John is a Chartered Professional Engineer in both Electrical Engineering and Information, Telecommunications and Electronics Engineering (ITEE) and a semi-regular conference speaker.

John has produced and appeared on many podcasts including Pragmatic and Causality and is available for hire for Vocal Acting or advertising. He has experience and interest in HMI Design, Alarm Management, Cyber-security and Root Cause Analysis.

Described as the David Attenborough of disasters, and a Dreamy Narrator with Great Pipes by the Podfather Adam Curry.

You can find him on the Fediverse and on Twitter.