There’s a lot that’s already been written about the Clean Power Plan (CPP): Google it and you find 71 million hits (at least as I write this). A lot more will be written yet. The scale of the CPP predictably and properly has ignited widespread debate. Much of the controversy centers around the animating force for the CPP, the claims of impending climate doom motivating a profound shift in U.S. energy use, and collaterally the politics, legality, and costs of implementation.
But permit herein a brief illumination of the realities about which there is no debate: the physics of supplying electricity, realities that determine the art and economics of the possible.
The idiosyncratic physics of electricity will ultimately doom the aspirational goals of the new 1,560-page Clean Power Plan, more than will an army of lobbyists, lawsuits and laborious studies. It is an inconvenient truth that electricity is profoundly different from every other energy source society uses; it is, in fact, weird.
In energy equivalent terms, the nation’s electric utilities deliver 5 oil supertankers every day. This feat is performed on a network where operational dynamics and disasters can happen at near lightspeed. And here is the critical singular fact: Over 99 percent of all electricity has to be generated at the same instant that it is consumed. Try doing that with wheat, steel, or oil.
Thus the problem: The CPP, as by now everyone knows, sets a course to radically increase the use of wind and solar power everywhere in America. And, cost aside (which it never is in the real world), it should go without saying that neither wind nor solar is available all the time.
“Availability” is not a semantic nicety. It is a specific and critical technical feature of power plants.
In order for the grid to deliver power continuously and nearly instantaneously in the face of inevitable challenges (plant failures, or the highly cyclical nature of demand), operators must have access to unused capacity that can be called upon at any time. While wind and solar have very low average availability compared to conventional power plants, what is more important is that they have zero availability for many hours at a time every day. Nor, even when they are operating, can either be counted on to increase output to follow normal daily and hourly demand surges.
It bears noting that "availability" is distinct from another technical, non-semantic, feature of power plants, the "capacity factor," which is a measure of total energy delivery. Unsurprisingly, wind and solar also have low capacity factors compared to conventional power plants: over a year, a megawatt of wind, on average, can deliver less than one-third as much energy as a megawatt of gas turbine. If one rated automobiles this way, for example, capacity factor would measure how often, on average, you were actually able to use your car for all purposes, regardless of how big the car or its engine. Availability, by contrast, is whether, when, and for how long on any given day your car would actually start.
When it comes to cost of capital, capacity factor matters. Simplistically, you need to build three megawatts of wind or solar capacity to equal the energy produced by one megawatt of gas-turbine capacity. (Obviously the exact ratio depends on how windy or sunny the locale is.) That means it is simply nonsensical to claim that a solar or wind plant with a capital cost per "nameplate" megawatt equal to a conventional power plant's has achieved the Holy Grail of "grid parity." And even if you build extra wind and solar capacity, the extra capacity is worthless if it's not available when needed.
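The three-to-one arithmetic can be sketched in a few lines. The capacity factors below are illustrative assumptions of mine (a gas turbine run near its full availability versus typical onshore wind), not figures from this article:

```python
# Back-of-envelope capacity-factor arithmetic behind the "three megawatts
# of wind per megawatt of gas" claim. The percentages are assumptions.
HOURS_PER_YEAR = 8760

cf_gas = 0.90   # assumed: a gas turbine run near its full availability
cf_wind = 0.30  # assumed: a typical onshore wind capacity factor

# Annual energy (MWh) delivered by one megawatt of each kind of capacity
mwh_gas = 1 * cf_gas * HOURS_PER_YEAR    # ~7,900 MWh
mwh_wind = 1 * cf_wind * HOURS_PER_YEAR  # ~2,600 MWh

# Nameplate wind megawatts needed to match one gas megawatt, on energy alone
ratio = mwh_gas / mwh_wind
print(f"Wind megawatts per gas megawatt (energy only): {ratio:.1f}")
```

Note that this matches only total energy; it still says nothing about whether the wind power shows up at the hours it is needed, which is the availability point above.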
It is availability that matters when it comes to the engineering and, derivatively, the economic challenge of keeping a grid continuously operating and stable (the latter no small feat). A stable, continuous grid is utterly essential to modern society; it is the hallmark of modernity. Just ask anyone in India, or in dozens of other countries plagued with episodic grids.
To go back to the automobile analogy: if your favorite car had low availability, you could occasionally get a ride from a neighbor who was wise enough to own a high-availability car. That's precisely how solar and wind successfully operate on the grid today. Of course the neighbors on the grid have high-availability 'cars' called coal, gas and nuclear plants. Iowa, the second-biggest wind-producing state, gets 50 percent of its electricity from coal.
Today, 90 percent of America's power comes from highly available sources: roughly 65 percent, split about equally, comes from coal and natural gas (which the CPP and greens seek to eliminate; coal first of course, gas later), 20 percent from aging nukes which have incredible availability (and for which the CPP provides no incentives to refurbish and keep running), and 5 percent from old, big hydro dams (which greens despise).
The anti-hydrocarbon cheerleaders for the CPP have long had three quick answers to the availability challenge of cramming lots more wind and solar onto the grid: more efficiency, more transmission, and lots and lots of batteries. All are integral parts of the CPP.
Start with the batteries, the trump card that has green mavens the most excited as a way to disrupt and transform the utility industry. Batteries are conceptually the simplest and most obvious solution to the availability problem. Store extra when it’s available for use later. That’s the smartphone paradigm; ‘just’ expand that to cars first, and then homes, and then the grid itself. Tesla has been at the forefront, at least in the media, with car and home batteries.
The challenge here starts with the physics. Electricity is astonishingly difficult to store at volumes that matter. Unlike every other energy source, and every other large-scale commodity society uses, huge quantities of electrons can't just be piled up, poured in, or in some way packaged up.
Meanwhile, country-level supply chains of critical commodities always have huge amounts of storage somewhere in the network: barrels, tanks, boxes or warehouses chock-a-block full of extra supply. On average, there are months' worth of annual national demand in storage at any given moment for every commodity from oil and natural gas to grains and metals. The exception? Electricity. The total amount of electricity stored at any given moment, in all the batteries out there for all purposes, is countable in minutes, not months, of annual demand.
Elon Musk has given us a way to illustrate the challenge of storing power at grid levels. The astoundingly big $5 billion Tesla battery factory under construction in Nevada, the so-called "gigafactory," is slated to produce more than all of the world's existing lithium battery factories combined. For battery cognoscenti, that represents a quantity of batteries each year that can store 30 billion watt-hours of electricity. A big number. But the United States consumes about 4,000,000 billion watt-hours a year. Thus the entire annual output of the gigafactory can store about five minutes' worth of U.S. electric demand.
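That claim is easy to check using only the two numbers quoted above; by this arithmetic the answer comes out just under four minutes, the same ballpark as the article's "about five":

```python
# Back-of-envelope check: how many minutes of U.S. electric demand can
# one year of gigafactory battery output store? Both inputs are the
# figures quoted in the text above.
gigafactory_wh_per_year = 30e9    # 30 billion watt-hours of batteries per year
us_demand_wh_per_year = 4.0e15    # ~4,000,000 billion watt-hours consumed annually

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

fraction_of_annual_demand = gigafactory_wh_per_year / us_demand_wh_per_year
minutes_of_demand = fraction_of_annual_demand * MINUTES_PER_YEAR
print(f"Minutes of U.S. demand storable: {minutes_of_demand:.1f}")
```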
This says nothing about costs, or about the lifespan of batteries, which is counted in years, not the decades needed for grid-scale power systems. It also says nothing about the comparative costs of paying for availability. Storing electricity in expensive, short-lived batteries is not a little more expensive but tens of thousands of times more expensive than storing gas in tanks or coal in piles adjacent to idle but readily available, long-lived power plants.
But we are promised that better battery technology will continually emerge. Of course it will. But as has always been the case with batteries, newer tech is more expensive. There is, by the way, a market ready to pay up, big time, for better batteries. The cost of a battery in a smartphone measured in grid terms is $1000 per kilowatt-hour of capacity. This illustrates the problem. The target price that grid-scale storage needs to reach, according to the Department of Energy, is under $100 per kilowatt-hour – and for a system far more complex than the power unit in your phone. And even that is still too expensive for commodity storage by at least 10-fold.
Consider one more example of the scale challenge for storing electricity. Cushing, OK, is home to one of the nation’s preeminent, and numerous, tank farms to store oil. In order to build a ‘tank’ farm to store kilowatt-hours equivalent to the energy stored at Cushing, we’d need a quantity of batteries equal to 40 years of production from 100 gigafactories. Electricity is hard to store.
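The Cushing comparison can be roughly reconstructed. The barrel count and the energy content per barrel below are my own order-of-magnitude assumptions, not figures from the article; with them, the "40 years of production from 100 gigafactories" figure falls out:

```python
# Rough reconstruction of the Cushing tank-farm comparison. The barrel
# count and per-barrel energy content are assumptions, not article figures.
WH_PER_BARREL = 1.7e6      # assumed: ~1,700 kWh of chemical energy per barrel
cushing_barrels = 70e6     # assumed: storage on the order of 70 million barrels

cushing_wh = cushing_barrels * WH_PER_BARREL  # ~1.2e14 Wh of stored energy

gigafactory_wh_per_year = 30e9  # one gigafactory's annual battery output (from text)
factories = 100

# Years of combined output from 100 gigafactories to match Cushing
years_needed = cushing_wh / (factories * gigafactory_wh_per_year)
print(f"Years of output from 100 gigafactories: {years_needed:.0f}")
```

The point survives even if the assumed inputs are off by a factor of two in either direction: battery storage is orders of magnitude away from hydrocarbon storage at commodity scale.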
Transmission is the second fix proffered to solve the low-availability problem for green power. Since it's always windy and sunny somewhere on the continent, or on the planet, the idea is that a big enough maze of long-haul transmission lines could always bring power back from wherever it is being generated. Europe seriously considered plans for a grid built all the way into sunny North Africa. Once again, though, the staggering scale of such a grid adds to costs, and substantially so. This says nothing of the collateral reliability challenges of such a long grid, nor of the now nearly universal public opposition to siting transmission lines. Europe gave up on the idea. If one really wanted cheap green power from faraway places, Canada might oblige with the staggering untapped potential of its northern rivers, a mere several thousand miles away. (For a deeper glimpse into the complexities of transmission engineering, the global expert on the subject, and my friend, Ake Almgren, wrote an excellent tutorial essay.)
Then there's efficiency, which, counterfactually, is widely labeled by green advocates and the CPP as a "source" of energy. And to be clear, we don't mean conservation, which is, on average, deprivation: volunteering to do without. Reducing the growth, or even the absolute level, of future demand through efficiency does not create more power; it merely, and only slightly, reduces the challenge of providing that power. Like food, energy has to be 'grown' continuously.
The bottom line is that despite billions already invested in efficiency to stifle growth, and despite a devastating recession that did stifle it, U.S. electric demand today is 10 percent higher than in 2001. That seemingly modest amount, at the scale of America's grid, represents an increase equal to Italy's entire annual electric use. The Energy Information Administration forecasts at least another 10 percent rise over the next fifteen years, a rise that will require the United States to add real capacity equal to Germany's entire current grid.
But efficiency as a ‘solution’ to all manner of environmental problems is widely embraced on both sides of the political aisle, so it’s worth visiting briefly why, at best, efficiency makes no difference to the CPP’s macro goals, and at worst (via regressive cross-subsidies) just increases costs for consumers.
More efficient power generation (less fuel burned per kilowatt-hour produced) is the proverbial ‘no brainer’ when the cost of the newer more efficient hardware is less than cost of the fuel saved. Engineers have been successfully chasing this kind of efficiency for centuries; just ask GE and GM, or Intel and IBM.
People happily choose the more efficient technology when it costs the same as the older, less efficient one. But when businesses or consumers are asked to spend more up front to buy future savings, they are generally reluctant: they are being encouraged to commit today's precious capital, and they are being asked to bet that the savings will be real, which depends in part on future fuel costs being higher. (Prices have been in decline across all hydrocarbons.) The former guarantees higher immediate costs; the latter runs the risk that the potential savings may never be realized.
Of course governments can manipulate markets and give consumers money, via tax credits or subsidies, to 'invest' in the more expensive equipment for theoretical future benefits to society at large. But this strategy is not a solution to supplying power, not just because efficiency is not an energy source, but because subsidies quickly become prohibitively expensive and unsustainable.
Then there is efficiency used to control consumption behavior – a statement by itself that should be a red flag, both as a matter of political philosophy and as an oxymoron when the words efficiency and consumer behavior are juxtaposed. If there isn’t enough power available when demand peaks, then, the concept goes, one eliminates the peak by discouraging or eliminating demand during the peak.
Since the 1970s (even earlier), utilities have had programs to encourage big electric users, usually via preferred rates, to ramp down when grid demand peaks. That the Internet now makes this easier is nearly irrelevant. The low-hanging industrial fruit was picked long ago. What's left are business operations that cannot be turned on and off. Data centers are one good example. And it is worth noting that, to the extent there is any economic resurgence in some corners of the industrial landscape, it is precisely because U.S. electricity is cheap and reliable (at least so far).
Meanwhile, in residential markets, load shifting has long failed to deliver on over-promises of radical change in consumer response. Time-of-day pricing has worked but encounters citizen blowback. Regardless of future appliances getting smarter and controllable from smartphones, here too there's not much low-hanging fruit. When it's hot, for example, and people are at home, there's a sharp limit to their willingness to let a utility remotely turn off the AC. Of course smartphones and smart homes make it easier to turn off the lights your kids leave on, or the equivalent. There are savings to be had there, but they're truly de minimis.
The real irony in all this is that the CPP’s push towards using power sources that will make the grid less reliable or, at best and at great expense, no more reliable, comes precisely when modern society needs greater reliability.
The demand for “always on” power to keep our digital and information-centric economy humming has never been greater. The share of the U.S. GDP associated with information—which is entirely dependent on electricity—is three times bigger than the share associated with the (oil dependent) transportation sector. (For more on the Internet’s surprising electricity appetite, see my earlier report.)
The average incidence of grid outages has been rising at about 8 percent to 10 percent annually since 1990. Outages were up 12 percent last year. And the duration of outages has also been rising by about 14 percent per year. (Eaton Corporation provides revealing state-by-state data and trends in their Blackout Tracker.)
And now we have the newest and increasingly serious concern over cyber attacks on the grid – arguably one of the most critical areas demanding increased spending and attention to harden the grid. Fortunately the Internet and analytics can make a big difference in this domain. And while Silicon Valley products and services, such as the newly launched Siebel Energy Institute, are often touted for their energy savings benefits, it will be in the security and reliability domains that they will make the biggest and most valuable contributions.
We have long known that the cost of outages is far greater than the value of the power itself. Depending on the kind of business, an outage is ten to ten thousand times more expensive than the power itself. We already know that outages overall cost the U.S. economy $150 billion a year. And this says nothing about the social and human costs of outages.
You can bet that the CPP does not count as a "cost" a future with a less reliable grid. But you can bet that consumers and voters will, in due course, notice. And you can bet that consumers will eventually notice that electric rates have been creeping back up since 2005, reversing an earlier 25-year trend of declining rates.
A future grid that is both more expensive and less reliable will be terrible for the economy and toxic for politicians.
[Mark P. Mills is CEO of Digital Power Group, a tech and investment advisory, a Senior Fellow with the Manhattan Institute, a Faculty Fellow at the school of Engineering & Applied Science at Northwestern University, on the boards of the Marshall Institute, a think tank focused on space and missile defense, and Notre Dame’s Reilly Center for Science, Technology & Ethics. He co-authored the energy-tech book “The Bottomless Well,” was the tech strategist for a boutique venture fund, a tech advisor for Banc of America Securities, and co-authored a tech investment newsletter. He served in the White House Science Office under President Reagan, and studied physics at Queen’s University, Canada, and Rutgers.]
[rest of article available at source]