Chapter 3. Renewable Electricity: Falling Costs, Variability, and Scaling Challenges

The universal availability and use of electricity have come to define modern life, at least for the vast majority of North Americans and western Europeans. Electricity is accessible in nearly every home and commercial building. We rely on power from wall sockets 24 hours a day, 365 days a year for a myriad of uses that range from toasting a bagel to powering an MRI machine. Electricity is remarkably versatile, and we have built a massive infrastructure to generate, distribute, and consume it.

Electricity constitutes only a portion of the energy the world uses daily. In the United States, 21 percent of final energy is used as electricity (for the world, the figure is 18 percent); of the U.S. electricity supply, 38 percent is generated from coal, 31 percent from natural gas, 19 percent from nuclear power, 7 percent from hydro, and 5 percent from other renewables (fig. 3.1).[1]

WEB Figure 3-1 US Final Energy Consumption by Fuel 2012
Figure 3.1. US final energy consumption by fuel type, 2012. NGL = natural gas liquids. LPG = liquefied petroleum gas. Source: International Energy Agency and U.S. Energy Information Administration.

Since most solar and wind energy technologies produce electricity (as do hydro, geothermal, and some biomass generators), replacement of fossil fuels by renewable energy sources is happening fastest in the electricity sector. Further, this means that hopes for accelerating the energy transition hinge on the electrification of a greater proportion of our total energy use.

For proponents of renewable energy, there has been plenty of good news in recent years regarding falling prices for solar and wind, and soaring growth rates in these industries. Still, as we will see in this chapter, there are significant challenges to be addressed.

Price Is Less of a Barrier

Solar and wind are growing fast. In 2014, global solar capacity grew 28.7 percent over the previous year and has more than quadrupled in the past four years.[2] This is an astounding rate of growth: if it were to continue, solar would become the world's dominant source of electricity by 2024. Wind energy capacity is growing at a somewhat slower pace (doubling about every five years), but has a larger current base: in 2012 (the last full year of U.S. Energy Information Administration [EIA] global data by generation type), solar delivered 94 terawatt-hours (TWh; 1 TWh = 1 million megawatt-hours [MWh]) per year, versus wind's 522 TWh per year, out of a global generation of 22,600 TWh.[3]
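For readers who like to check the arithmetic, the growth figures above translate into annual rates and doubling times as follows. This is a minimal sketch of the compounding math only; the multiples and time spans are taken from the text, and nothing here is drawn from the EIA data themselves.

```python
# Compounding arithmetic behind the growth figures quoted above.
# "More than quadrupled in four years" and "doubling about every five
# years" come from the text; the code only converts them into implied
# annual growth rates and doubling times.
import math

def annual_rate(multiple, years):
    """Annual growth rate implied by growing `multiple`-fold over `years` years."""
    return multiple ** (1.0 / years) - 1.0

def doubling_time(rate):
    """Years needed to double at a constant annual growth rate."""
    return math.log(2.0) / math.log(1.0 + rate)

solar_rate = annual_rate(4.0, 4.0)   # quadrupled in four years -> ~41% per year
wind_rate = annual_rate(2.0, 5.0)    # doubling every five years -> ~15% per year
print(f"solar: {solar_rate:.0%} per year, doubling every {doubling_time(solar_rate):.1f} years")
print(f"wind:  {wind_rate:.0%} per year")
```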

Remarkably, in the United States solar and wind power are currently growing faster than coal—not just in percentage terms but in absolute numbers: for 2014, the U.S. increase in coal consumption amounted to 4.6 TWh, while solar and wind added 23 TWh.[4] Even in China, solar and wind are expanding quickly, while coal consumption is hardly growing at all or even starting to taper off (owing to a substantial slowdown in industrial consumption).

Solar and wind’s spectacular growth is occurring for several reasons, but perhaps the most significant driver has been the fall in prices for new solar and wind capacity as compared to costs for coal and natural gas. The price drop is most apparent in the case of solar: the price of photovoltaic (PV) cells has fallen by 99 percent over the past twenty-five years, and the trend continues. In a 2014 report, Deutsche Bank solar industry analyst Vishal Shah forecast that solar will reach “grid parity” in 36 of 50 U.S. states by 2016, and in most of the world by 2017 (grid parity is defined as the point where the price for PV electricity is competitive with the retail price for grid power).[5] Shah also estimates that installed solar capacity will grow as much as sixfold before the end of the decade; see figure 3.2 for a snapshot of just the last few years of solar capacity growth.

WEB Figure 3-2 US total PV installation and capacity
Figure 3.2. US total photovoltaic installations and capacity. 
Source: Vishal Shah, Jerimiah Booream-Phelps, and Susie Min, “2014 Outlook: Let the Second Gold Rush Begin,” Deutsche Bank, Market Research, North America United States, January 6, 2014.

The fall in PV prices is being driven by two factors: improvements in technology (both in manufacturing methods and in PV materials), and increased scale of manufacturing. Manufacturing scale improvements have resulted largely from the Chinese government's decision in 2009 to support widespread deployment of PV, which in turn has led to a spate of price cutting across the industry and a global flood of cheap panels. Some characterize China's actions as product dumping or unfair competition; many American and European manufacturers have gone bankrupt because they could not match Chinese prices.

Power purchase agreement prices for wind energy projects are currently competitive with prices for power from coal and natural gas plants in many markets. Wind prices are falling thanks to improved, lower-cost wind turbines (taller towers and longer, lighter blades) that capture more of the wind resource and therefore deliver better economic performance.

In general, technologies tend to become more efficient and more cost-effective over time, as engineers identify improvements and as devices are produced on a larger scale.[6] Fossil fuel technologies (mining, drilling, hydrofracturing, refining) are also becoming more efficient; however, those technologies are being used to harvest depleting resources, so an accelerating decline in resource quality will inevitably outstrip the ability of engineers to improve recovery efficiencies.[7]

Is the current rapid growth in solar and wind capacity sustainable? Can the pace in fact be substantially increased? Will price declines increase or reverse themselves as higher penetration rates are achieved? The answers to these questions will depend on the renewable energy industry’s ability to solve a few looming problems.

Intermittency

As stated earlier, we have designed our energy usage patterns to take advantage of controllable inputs. Need more electricity? If you’re relying on coal for energy, that just requires shoveling more fuel into the boiler. Sunlight and wind are different: they are available on Nature’s terms, not ours. Sometimes the sun is shining or the wind is blowing, sometimes not. Energy geeks have a vocabulary to describe this—they say solar and wind power are intermittent, variable, stochastic, or chaotic. In contrast, energy experts refer to coal, gas, oil, hydro, biomass, nuclear, and geothermal sources as predictable; sources that can be quickly brought into service or shuttered to meet transient demand (usually natural gas or hydro plants) are called dispatchable. It should be noted, though, that these latter sources are also subject to a certain amount of variability: natural gas, coal, and nuclear power plants sometimes need to be shut down for maintenance, or can go offline due to accidents, and hydropower can be distinctly seasonal depending on rainfall patterns. They’re just much less variable than solar and wind.

The availability of sunlight follows fairly consistent diurnal and seasonal patterns. We can calculate in advance the position of the sun in the sky for any moment in time, for any location. We know that sunlight will be more readily available in summer months than in winter months, and that this seasonal variability will be more extreme the farther we are from the equator. We also know that sunlight is likely to be most intense at noon and is absent at night. Yet within this expected variability there is also a more chaotic intermittency: sometimes the sun is hidden for moments, hours, days, or even weeks by clouds.

Wind tends to follow different diurnal and seasonal patterns. Some locations have far more consistent winds than others. Also, winds tend to be stronger, and more consistent, at greater heights above Earth’s surface (thus taller turbines tend to be more efficient). The wind resource varies greatly by location. In some regions it is out of phase with energy demands—weak during the day but stronger at night; in other places, winds are stronger during the day. Transient weather patterns can bring hurricane-force gales or days and weeks of calm, when virtually no electricity can be generated.

Therefore when discussing solar panels and wind turbines it is important to understand the difference between nameplate capacity (the maximum power a generator is rated to produce, as if under constant ideal sun or wind) and these resources' average power output (fig. 3.3). The ratio of average output to nameplate capacity is the capacity factor. A coal- or gas-fired baseload power plant might have a capacity factor of 90 percent; wind farms have capacity factors ranging between 22 and 43 percent.[8] In the United States, PV systems have capacity factors ranging between 12 and 20 percent, depending on the location.[9]
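To make the capacity factor calculation concrete, here is a minimal sketch. The wind farm in the example is hypothetical, with numbers chosen to fall inside the ranges just cited.

```python
# Capacity factor = average power output divided by nameplate capacity.
# The 100 MW wind farm below is hypothetical; its annual output is chosen
# so the result lands within the 22-43 percent range cited in the text.

def capacity_factor(annual_energy_mwh, nameplate_mw, hours_per_year=8760):
    average_output_mw = annual_energy_mwh / hours_per_year
    return average_output_mw / nameplate_mw

cf = capacity_factor(annual_energy_mwh=300_000, nameplate_mw=100)
print(f"capacity factor: {cf:.0%}")   # ~34 percent
```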

WEB Figure 3-3 Intermittency of renewable energy
Figure 3.3. Intermittency of renewable energy electricity generation and its effect on price. This chart shows Germany’s electricity production and spot prices for the week of April 7, 2014. As renewable energy production fluctuates, conventional production and the spot prices respond.
Source: Johannes Mayer, “Electricity Production and Spot Prices in Germany 2014” (Freiburg: Fraunhofer ISE, December 31, 2014).

Uncontrollable resource variability is a problem for grid operators, who need to match electricity generation with demand on a minute-by-minute basis. Daily and seasonal demand cycles are fairly easy to predict in general terms: electricity use tends to peak in the afternoons and dip at night, and in most temperate and tropical regions it increases during the hottest part of the summer when air conditioners are in use. Solar output tends to follow this cycle fairly well up to a point, but often cannot be dispatched to meet a surge of demand or turned off if demand is low (more recently built PV farms are required to hold back a portion of their potential output as a reserve that can be ramped up on demand). Wind power's variations often balance out those of solar; but sometimes the two reinforce one another, producing an unusable surge of electricity that grid operators must somehow shed. And sometimes both sources are in a lull (the weather is cloudy and still) even though electricity demand is high. (Modern wind farms also offer grid benefits, since their output can be damped easily, which is useful for reactive power [voltage] control.)

Intermittency has long been recognized as a hindrance to the adoption of solar and wind technologies, and so a lot of thought has gone into finding ways to reduce or buffer that intermittency. Also, many countries now have experience integrating solar and wind into their grid systems. In short, there are strategies for dealing with intermittency—though each has limitations and costs.

Storage

The most obvious way to make up for the variability of solar and wind energy is by storing energy when it is available in surplus so that it can be used later. There are several ways energy can be stored, but before we survey them it will be helpful to know a little about how to evaluate storage systems.

Let’s start with two factors: (1) the amount of energy the system can store (as expressed in watt-hours), and (2) the amount of power the system can absorb or deliver at any moment (as expressed in watts). A system that stores lots of energy won’t be very useful if it can only receive or return that energy a little at a time. And a system with enormous power won’t be helpful if it needs recharging after only a few minutes. Storage systems need to do well in both respects.

Energy density is especially relevant for alternative ways to power transportation. For electric vehicle (EV) batteries, it is useful to know the energy density both by weight (megajoules per kilogram, MJ/kg) and by volume (megajoules per liter, MJ/L). EVs are often burdened by heavy batteries (the battery pack of a Tesla Model S, for example, weighs in at over 1300 pounds). On the other hand, storing energy in the form of compressed hydrogen takes up a lot of space (see fig. 1.3).
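As a back-of-envelope illustration of these metrics, consider a hypothetical EV battery pack. The pack weight comes from the example above; the storage capacity and power rating are assumptions added for the sake of the calculation.

```python
# Back-of-envelope figures for a hypothetical EV battery pack. The ~1300 lb
# weight is from the text; the 85 kWh capacity and 300 kW peak power are
# illustrative assumptions.
pack_energy_kwh = 85.0            # assumed energy capacity
pack_power_kw = 300.0             # assumed maximum discharge power
pack_mass_kg = 1300 * 0.4536      # "over 1300 pounds" converted to kilograms

energy_mj = pack_energy_kwh * 3.6                 # 1 kWh = 3.6 MJ
print(f"energy stored:       {energy_mj:.0f} MJ")
print(f"energy density:      {energy_mj / pack_mass_kg:.2f} MJ/kg")  # ~0.5 MJ/kg
print(f"hours at full power: {pack_energy_kwh / pack_power_kw:.2f}")
```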

Another metric of energy storage has to do with economic and environmental factors. What’s the carbon footprint of a given storage technology? How much energy was used to construct it? And what’s the energy cost of maintaining the technology over its projected lifetime? These three questions are closely related. Researchers Barnhart and Benson at Stanford University have proposed using the metric energy stored on investment (ESOI) as a way of tackling these issues.[10] It expresses the amount of energy that can be stored over the lifetime of a technology, divided by the amount of energy required to build that technology. The higher the ESOI value, the better the storage technology from an energy point of view—and, most likely, from an environmental perspective as well.
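The ESOI calculation itself is simple; what matters is the bookkeeping. Here is a minimal sketch with made-up numbers (they are not Barnhart and Benson's inputs), just to show how the ratio is formed.

```python
# ESOI = energy stored and delivered over the technology's lifetime,
# divided by the energy required to build it. All inputs below are
# made-up illustrative numbers, not Barnhart and Benson's data.

def esoi(energy_per_cycle_mj, lifetime_cycles, embodied_energy_mj):
    lifetime_energy_stored_mj = energy_per_cycle_mj * lifetime_cycles
    return lifetime_energy_stored_mj / embodied_energy_mj

# A hypothetical facility storing 1,000,000 MJ per cycle over 5,000 cycles,
# built at an embodied-energy cost of 25,000,000 MJ:
print(esoi(1_000_000, 5_000, 25_000_000))   # ESOI = 200
```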

A final consideration with regard to energy storage has to do with limiting resources, such as lithium for batteries. For electricity, the three most widely discussed options for energy storage are geologic storage, hydrogen, and batteries.

Geologic Storage: Water Reservoirs, Compressed Air in Caverns

In the most common instance, this means pumping water uphill into a reservoir when electricity is overabundant, then letting it run back downhill to turn a turbine when more electricity is needed. Pumped storage is the most widely used grid-scale energy storage option; yet, for the United States, current pumped storage capacity is roughly 2 percent of the capacity of the electric grid.[11]

WEB Image 3-1 pumped hydro
Pumped hydro power station. (Credit: A. Aleksandravicius, via Shutterstock.)

Pumped storage is the cheapest option for grid-scale energy storage (batteries have much higher embodied-energy costs). Barnhart and Benson determined that a typical pumped hydro facility has an ESOI value of 210,[12] which means it can store and deliver 210 times more energy over its lifetime than the amount of energy required to build it. Storage of compressed air in underground caverns also has a high ESOI value; however, this option is today rarely used.

The limits and downsides to geologic storage include the fact that it works only for stationary systems (not vehicles). It also suffers from low energy density: physicist Tom Murphy points out that “to match the energy contained in a gallon of gasoline, we would have to lift 13 tons of water (3500 gallons) one kilometer high (3,280 feet).”[13] Therefore we would need a lot of reservoir volume to store really significant amounts of energy. But geologic storage requires appropriate topographic and geological conditions. In the final analysis, it is unclear whether it can be expanded enough to store anywhere near the amounts of energy we might need in an all-renewable future.
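Murphy's comparison is easy to verify with the formula for gravitational potential energy (E = mgh), taking roughly 120 MJ as the energy content of a gallon of gasoline:

```python
# Checking the pumped-hydro arithmetic quoted above: E = m * g * h.
# A gallon of gasoline contains roughly 120-130 MJ of chemical energy.
mass_kg = 13_000      # "13 tons of water" (about 3,500 gallons)
g = 9.81              # gravitational acceleration, m/s^2
height_m = 1_000      # one kilometer of lift

energy_mj = mass_kg * g * height_m / 1e6
print(f"{energy_mj:.0f} MJ")   # ~128 MJ, comparable to a gallon of gasoline
```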

Hydrogen

Using electricity to produce hydrogen, then storing the hydrogen, offers another possible vector for buffering out the intermittency of renewable energy sources. Current hydrogen storage is minuscule. However, some analysts suggest hydrogen storage could be used widely at the household scale to store a large total amount of energy that could be flexibly used.[14]

Pellow et al. have determined that a hydrogen energy storage system would have an ESOI rating of 59,[15] which is much lower than the figure for pumped storage but higher than that of the best battery technology available today. Nevertheless, Pellow et al. also found that the low round-trip efficiency of a regenerative hydrogen fuel cell (RHFC) energy storage system “results in very high energy costs during operation, and a much lower overall energy efficiency than lithium ion batteries (0.30 for RHFC, vs. 0.83 for lithium ion batteries).”[16] Hydrogen storage represents a relatively efficient use of manufacturing energy to provide storage. But its operational efficiency must improve before it can compete with batteries in that regard.
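The round-trip efficiency figures quoted above have a straightforward operational meaning: they determine how much electricity must be fed into storage for every kilowatt-hour delivered back out. A quick illustration:

```python
# How much electricity must go into storage per kWh delivered back out,
# using the round-trip efficiencies quoted above.
for name, round_trip_efficiency in [("regenerative hydrogen fuel cell", 0.30),
                                    ("lithium-ion battery", 0.83)]:
    kwh_in_per_kwh_out = 1.0 / round_trip_efficiency
    print(f"{name}: {kwh_in_per_kwh_out:.2f} kWh in per 1 kWh out")
# RHFC: ~3.3 kWh in per kWh delivered; lithium-ion: ~1.2 kWh in per kWh delivered.
```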

In sum, hydrogen may be economic in some applications. It is potentially better than batteries for large-scale storage, and it can be adapted for use in vehicles and homes—though operational energy losses remain a problem.

Batteries

There is much ongoing research into the technology of converting electrical energy for storage as chemical energy in a battery. Just a couple of decades ago, lead-acid batteries (invented in 1859) were the primary available option for large-scale applications; today nickel- and lithium-based batteries are also available. Batteries are getting cheaper and better. In 2015, Tesla Motors Inc. unveiled a new generation of patented lithium-ion batteries designed for home and industrial use to store energy from sun and wind. This provoked speculation that higher volume production and further technical improvements could yield batteries cheap and powerful enough to solve the intermittency problems of renewable energy.

Since battery costs and efficiencies are a moving target, perhaps it is useful to consider the physical limits to battery improvements. Science writer Alice Friedemann has performed the thought experiment of examining the periodic table of elements to identify the lightest elements with multiple oxidation states that form compounds (oxidation-reduction reactions generate a voltage, which is the basis of electric cells or batteries). Ignoring problems such as materials scarcity, she finds that the theoretical upper energy density limit to the best materials would be around 5 megajoules (MJ) per kilogram (kg).[17] The best batteries currently commercially available are able to achieve about 0.5 MJ/kg, or 10 percent of this physical upper bound. Improvements would also be required in materials such as electrolytes, separators, current collectors, and packaging. Given all this, Friedemann concludes that “we’re unlikely to improve the energy density by more than about a factor of two within about 20 years.” Energy density is primarily a limiting factor in batteries for mobile purposes; still, for stationary purposes, low energy density implies the need for more material, and therefore typically translates to greater energetic cost in manufacturing.

The ESOI of batteries is quite low compared to that of pumped storage, and lower than that of hydrogen. Lithium-ion batteries perform best, with an ESOI value of 10.[18] Lead-acid batteries have an ESOI value of 2[19], the lowest in the Barnhart and Benson study.

Batteries imply an added energy cost; what happens when this cost is added to the energy cost of building and installing renewable energy generation systems? Clearly, it reduces the energy "affordability" of the system, though the problem is less severe if the energy source has a high EROEI to begin with. Using EROEI analysis, Charles Barnhart et al. found that storage is less "affordable" for PV than it is for wind.[20] The manufacturing of batteries also adds to carbon emissions. Technology writer Kris De Decker performed a life cycle analysis of existing PV-plus-batteries generating systems and found that they entail lower carbon emissions than conventional grid power, but not dramatically lower.[21]
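The logic can be sketched in a few lines. This is not Barnhart et al.'s actual model, and the numbers are illustrative assumptions; the point is simply that the embodied energy of storage is added to the embodied energy of generation, and the combined EROEI of a PV-plus-battery system ends up lower than that of a wind-plus-battery system.

```python
# A minimal sketch (not Barnhart et al.'s model) of how storage's embodied
# energy lowers a combined system's EROEI. All numbers are illustrative.

def eroei_with_storage(eroei_generator, esoi_storage, stored_fraction, round_trip_eff=0.9):
    """Effective EROEI when `stored_fraction` of output passes through storage once."""
    gross = 1.0                                         # one unit of gross generation
    delivered = (1 - stored_fraction) * gross + stored_fraction * gross * round_trip_eff
    invest_generator = gross / eroei_generator          # embodied energy of the generator
    invest_storage = stored_fraction * gross / esoi_storage   # embodied energy of the storage
    return delivered / (invest_generator + invest_storage)

pv_plus_battery = eroei_with_storage(eroei_generator=10, esoi_storage=10, stored_fraction=0.25)
wind_plus_battery = eroei_with_storage(eroei_generator=20, esoi_storage=10, stored_fraction=0.25)
print(round(pv_plus_battery, 1), round(wind_plus_battery, 1))   # ~7.8 vs ~13.0
```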

For small rolling vehicles and off-grid, self-contained electricity systems, batteries may provide the best available energy storage solution. Nevertheless, low energy density and low ESOI appear to be inherent drawbacks for chemical storage of electricity on a large scale; and while improvements are on the way, they are unlikely to change the overall situation.

Other Storage Options

While geologic storage, hydrogen, and batteries are the options most often discussed, there are others, such as compressed air canisters (for cars) and flywheels (for the grid); however, these are not widely used and are unlikely to offer substantial improvements over our three main candidates.[22]

There has also been talk of storing energy in electric fields (by way of capacitors) or magnetic fields (using superconductors). A company called EEstor claims a new capacitor capable of storing 1 MJ/kg, which is about twice the energy density of the best batteries now commercially available. Electromagnets made of high-temperature superconductors can theoretically achieve about 4 MJ/kg. The ultimate physical potentials for such storage technologies would represent improvements over existing batteries but would still fall far short of the energy density of hydrocarbon fuels.

Electrical energy could also be stored in synthetic fuels more chemically complex than hydrogen, including liquid fuels. These would offer greater energy density than battery storage and would therefore be better suited for use in vehicles; however, they would suffer from energy conversion inefficiencies.

In a recent paper, Mark Jacobson et al. propose the use of yet another storage medium—underground thermal energy storage (UTES).[23] Industrial waste heat, or heat from combined heat and power (CHP) plants or solar thermal collectors, would be channeled to storage tanks of water, pits of water, or fields of boreholes up to 300 m deep. For solar thermal plants, heat would be collected in the summer and released and used in winter. The heat is primarily used for space conditioning, though it can also be used for power generation, depending on the storage temperature. The technology (which is currently in use on the fifty-two-household Drake Landing Solar Community in Alberta, Canada, with 25,000 square feet of solar collectors) has high investment costs (3,400–4,500 euros/kW) but fairly low operation and maintenance costs.

Scaling up this technology is likely to be a big challenge. UTES (or any thermal energy storage design) works best, and is most easily optimized, when incorporated into new construction or major renovations; given that the average building lifetime in the United States is 75 years, the rate at which the technology can penetrate the building stock is likely to be slow. A joint technical paper on the subject by the International Energy Agency (IEA) and the International Renewable Energy Agency (IRENA) confirms that this is the case for Europe, where building stock turnover is only 1.3 percent per year and the renovation rate is only 1.5 percent per year.[24] Retrofitting existing buildings to take advantage of the process would be very expensive, and it is unclear how it could be fit into an existing dense urban area. UTES designs and the subsurface storage technologies they rely on are also site specific, which adds to cost and complexity.

UTES is also characterized by low energy density. Water-based systems can achieve up to 50 kWh/m3 (180 MJ/m3, or 0.18 MJ/kg), which is the same order of magnitude as the gravimetric energy density of a lithium-ion battery (about 0.5 MJ/kg, as noted earlier). The consequence is low areal density: a scheme in Crailsheim, Germany, serving 260 houses, one school, and one sports hall uses 79,000 square feet of solar collectors, 3500 cubic feet of peak-load storage, 17,000 cubic feet of buffer storage, and 1.5 million cubic feet of borehole storage with 80 probes. Such systems also entail a lot of drilling, along with large quantities of probes, pipes, and other equipment. The IEA/IRENA technical paper notes that the barriers to UTES include system integration, regulation, high costs, material stability, and complexity, and that further R&D is needed on insulation and high-temperature materials.
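The unit conversions behind those density figures are worth spelling out, since they drive the drilling and land requirements just described:

```python
# Unit-conversion check for the UTES energy density quoted above.
kwh_per_m3 = 50.0                        # up to 50 kWh per cubic meter of water
mj_per_m3 = kwh_per_m3 * 3.6             # 1 kWh = 3.6 MJ  ->  180 MJ/m^3
mj_per_kg = mj_per_m3 / 1000.0           # water is ~1000 kg/m^3  ->  0.18 MJ/kg
print(mj_per_m3, mj_per_kg)
```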

Currently only 8–10 gigawatts (GW) of sensible thermal energy storage exists in the world, but Jacobson et al. propose capacity sufficient to support 467 terawatts (TW) of charge from solar thermal collectors. To say that this is a highly ambitious proposal may be an understatement.

The bottom line for energy storage: many options exist, and research is likely to expand their number and improve them. But each category of options is subject to limits and costs, even assuming substantial technical improvements. Given different criteria (energy density, carbon emissions, cost), some storage options offer advantages over others. However, current electricity storage amounts to only a tiny percentage of what will likely be required in an all-renewable energy future; we need to build a lot of storage. And supplying large amounts of storage will add significantly to the financial, materials, energy, and carbon costs of energy systems.[25] A real-world example: California's energy storage law, AB2514, directs utilities to install 1.3 GW of storage capacity by 2020. The state's total installed generation capacity today is 78 GW, of which 12.26 GW is renewable (excluding large hydro). The law says storage must be economically feasible, but utilities have so far balked at implementing it.

Grid Redesign

The electricity grids of the twentieth century were designed to distribute power from large, centralized coal, gas, nuclear, and hydro generating plants to far-flung end users. Grid managers learned to track electricity demand patterns (usually based on times of heavy use of domestic heating and air-conditioning), which tend to feature daily peaks. These demand spikes are now met by peaking power generators (usually fired by natural gas) that are used only for short periods each day. The low utilization of peaking generators, along with the necessary redundancy in the electricity grid, results in high costs to the electricity companies, which are passed on to customers.

The renewable electricity system of the twenty-first century will be different: it will accommodate numerous smaller and more geographically distributed power inputs, most of which are uncontrollably variable. Meeting demand will require, among other things, significant smart grid upgrades. The term "smart grid" doesn't refer to a specific technology, but rather to a set of related technologies whose goals are to gain a better understanding of what is happening on the grid in order to reduce power consumption during peak hours and to incorporate grid energy storage, both of which make it easier to integrate more solar and wind. Even apart from the renewable energy transition, smart grids are expected to deliver increased efficiency and reliability, saving grid operators and consumers money. Add distributed renewable power generation, and the grid may evolve beyond a centralized system to become something of a collaborative network of electricity producers and consumers.

The main elements of a smart grid consist of integrated communications, sensing and measurement devices (smart meters and high-speed sensors deployed throughout the transmission network), devices to signal the current state of the grid, and better management and forecasting software; as renewable energy inputs are added, energy storage systems will inevitably become part of the network. Smart grids with a large share of renewables will also need additional transmission capacity to move more power longer distances to balance loads as output from distributed solar and wind generators varies.

A paper from Siemens Corporate Technology in Germany weighs the relative contributions of grid extensions and electricity storage to a hypothetical 100 percent renewable European grid, and finds that, with storage, renewables could supply up to 60 percent of power without additional grid capacity or backup, and 80 percent with an "ideal" grid.[26] These conclusions are similar to those of a National Renewable Energy Laboratory (NREL) study, which relies heavily on dispatchable biomass power generation (about 15 percent of generation in 2050) to achieve its renewable target. The NREL authors note that "electricity supply and demand can be balanced in every hour of the year in each region with nearly 80% of electricity from renewable resources, including nearly 50% from variable renewable generation, according to simulations of 2050 power system operations."[27]

How much will all this cost? A 2011 study by the Electric Power Research Institute (EPRI) found that smart grid upgrades in the United States would require the investment of between $338 billion and $476 billion over the next 20 years, but would deliver $1.3 trillion to $2 trillion in benefits during that period.[28] Another study, this one by the U.S. Department of Energy, calculated that a more modest modernization of U.S. grids would save between $46 and $117 billion over the same twenty-year timeframe.[29]

Assuming that smart grid investments are a good deal over the long run, who pays for these upgrades over the short term? Experts disagree on whether recovery of a utility’s smart grid upgrade costs should come from raising rates to customers or from some “nontraditional” source, such as government. There is also concern that utilities and regulators are accustomed to buying power equipment that lasts 40 years or more, whereas some electronic sensors and communications devices now being installed on the grid may last half that time, or as little as a decade.[30]

The electricity grid has been described as the largest machine ever created by human beings; as we make it larger and smarter in order to accommodate more variable and distributed renewable energy inputs, we also make it even more complex. Is there another solution? There is: do away with the centralized grid altogether and have energy generation and storage happen at the scale of communities. This would require every city and possibly every neighborhood to have enough generating and storage capacity, as well as the needed control equipment, to sustain itself. The result would likely be a more expensive electricity system overall, and one that would, if left entirely to the free market, produce much greater energy inequality (a subject to which we will return in chapter 8), since some households and communities would be able to afford robust systems while others could afford none at all. The intermittency of wind and sunlight would also likely pose a greater challenge for more localized minigrids, unless they were linked over large geographical areas to take advantage of distant resources to make up for local shortfalls.

Decentralizing the grid would encourage energy use more in line with natural flows of renewable energy; households and communities would also be more self-sufficient, and the system would entail less complexity and fewer interdependencies, making it less vulnerable to failures in a brittle network. In light of all the factors mentioned, the likely outcome is some mix of centralized and decentralized grid systems, combining long-distance transmission infrastructure (high-voltage lines) with local distribution.

Demand Management

Given electricity sources whose unpredictably variable output doesn’t coincide with times when electricity is typically used, one set of solutions (which we have just discussed) aims to make that output more predictable using storage and control systems; another set of solutions, generally referred to as demand response, is geared to manage when consumers use energy and how much they use, through voluntary programs or economic incentives. Although the purpose of demand response programs today is to avoid construction of costly generation capacity to meet peak demand, the practices are similarly applicable to managing the increased penetration of variable electricity generation such as solar and wind. Aligning electricity demand with supply entails two main substrategies: dynamic pricing, and smart appliances and equipment. One potentially important example of the latter is the use of electric car batteries for grid storage, discussed later in the chapter.

WEB Image 3-2 electric vehicle battery
Battery of a Toyota 86 electric vehicle. (Credit: Tokumeigakarinoaoshima, via Wikimedia Commons.)

Dynamic pricing—changing the price of electricity according to its hour-by-hour availability—has led large industrial and commercial users to shift their usage to times when supplies are abundant and prices are low. This requires knowing when those times are, which in turn requires ways to communicate with users. In California, links between the independent system operator (ISO)—which coordinates, controls, and monitors the operation of the electrical power system—and large interruptible users have already been established; further, one of the goals of smart meter programs is to communicate real-time pricing information to residential and commercial customers so they can shift usage times. As this requires sensors, communication links, software, and data management, dynamic pricing is inseparably connected with the project of redesigning the grid, discussed earlier in this chapter. Using dynamic pricing to enlist market forces in demand management will unquestionably help reduce times of over- or undersupply of electricity, thus increasing power affordability.[31]

Dynamic pricing can work with the old grid infrastructure; it simply requires that consumers see the spot price and actually face it. However, most domestic consumers currently have no easy way to track the spot price in real time, or they are billed at flat rates, and thus have no incentive to change their usage patterns.

When we're at home, we don't check electricity prices hour by hour to see when prices are high or low. How, then, can residential electricity customers be integrated into dynamic pricing programs? By automating the process via the so-called Internet of Things. Once most appliances are computerized and connected by wi-fi or hard line, they could in principle be set to respond to data from the utility company, adjusting their energy usage based on the current price of electricity. (Another option for demand management is to allow the utility to remotely dial down the power usage of appliances like refrigerators and air conditioners during peak times.) This doesn't require a smart meter; in fact, most smart meters don't have this capacity. All that is required is a switch that the utility can turn on and off.
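To give a sense of how simple the underlying logic can be, here is a hypothetical sketch of a price-responsive appliance controller. The price feed, threshold, and scheduling rule are all invented for illustration; actual utility programs and appliance interfaces vary.

```python
# Hypothetical price-responsive appliance logic (illustration only; the
# threshold and scheduling rule are invented, not from any real program).

def should_run_now(price_per_kwh, threshold_per_kwh, hours_until_deadline, hours_needed):
    """Run when power is cheap, or when waiting longer would miss the deadline."""
    if price_per_kwh <= threshold_per_kwh:
        return True
    return hours_until_deadline <= hours_needed   # no room left to defer

# A dishwasher needing a 2-hour cycle before 7 a.m. (9 hours away), with
# electricity at 22 cents/kWh against a 15-cent threshold:
print(should_run_now(0.22, 0.15, hours_until_deadline=9, hours_needed=2))   # False: wait
```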

There are, of course, limits to these strategies. The Internet of Things implies additional material resources—which require extraction, manufacturing, transport, and operation—and also increased system complexity. It also raises privacy issues: already televisions are tracking (and potentially selling) users’ usage data. Finally, some electricity usage is easily amenable to demand shifting; at home, for example, we may be quite willing to load up the washing machine, set its dial, and wait for the machine itself to determine when to wash our clothes based on hourly electricity price fluctuations. But if we’re working at a computer, we might be less than pleased to see its screen go black following the momentary display of a message reading, “Sorry, electricity prices have just gone up.”

Among smart appliances, electric cars have often been touted as having the greatest potential for helping match grid electricity demand with supply. Since automobiles are parked an average of 95 percent of the time, EVs left plugged in during those hours could let electricity flow between their batteries and the grid in both directions, with a value to the utilities of up to $4000 per year per car.[32] The use of EV batteries to provide decentralized storage of electrical energy, either by delivering electricity into the grid or by throttling their charging rate, is known as vehicle-to-grid (V2G). Grid managers could incentivize vehicle owners to participate in V2G programs by offering discounted electricity at night to charge vehicles, and by offering fees to offset the cost of battery wear and tear from the additional cycling. It is unclear, however, whether such incentives could realistically be greater than the value of the batteries to their owners.
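The wear-and-tear question comes down to simple arithmetic: roughly what does each kilowatt-hour cycled through an EV battery cost in degradation? The figures below are illustrative assumptions, not measured values, but they show the kind of threshold a V2G payment would have to clear.

```python
# Back-of-envelope battery wear cost for V2G. All inputs are illustrative
# assumptions, not figures from the text.
pack_replacement_cost = 10_000           # dollars, assumed
usable_capacity_kwh = 60.0               # assumed
full_cycles_before_replacement = 2_000   # assumed cycle life

wear_cost_per_kwh = pack_replacement_cost / (usable_capacity_kwh * full_cycles_before_replacement)
print(f"~${wear_cost_per_kwh:.3f} per kWh of throughput")   # ~$0.08/kWh
# A V2G payment would need to exceed this (plus round-trip losses) to be
# worthwhile for the vehicle's owner.
```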

Since proposed V2G programs center on the use of batteries for storage, all of the limits to battery storage technology previously discussed apply here. Currently, only pilot V2G programs exist, and the number of EVs in use worldwide is still too small to provide much real-world data on the likely benefits and drawbacks of a program large enough to impact grid reliability and price stability.

Capacity Redundancy

Another way to reduce the impact of energy source intermittency is to add redundant generation capacity: when the sun isn't shining and the wind isn't blowing, simply rely on other electricity sources, which can be throttled down when sun and wind are abundantly available. (This is already done with natural gas generators, though using them this way is much less energy efficient than running them continuously as combined-cycle baseload plants.) Redundancy obviously adds to total system costs, and therefore proposals for future 100 percent renewable electricity systems typically attempt to reduce the need for it with strategies already discussed (storage, grid upgrades, and demand management). Nevertheless, capacity redundancy is the primary strategy that currently enables intermittent renewables to be integrated into electricity grid systems.

So far, solar and wind have remained proportionally small contributors to overall electrical energy in most nations, and variability has been buffered primarily by fossil energy resources (especially by natural gas–fired peaking plants, which can be powered up or down quite quickly). In effect, the grid itself becomes the battery for solar and wind generators. Renewable energy resources other than solar and wind could fill more of that role; these would likely include biomass, hydro, and geothermal. But are these resources up to the job? Let’s take a look at each in turn.

Biomass

Burning wood, crop residues, and biogas is a dispatchable electricity source: as with coal or natural gas, if more electricity is needed then it’s just a matter of firing up the boiler and adding fuel. However, this resource is limited, and long-term sustainability is uncertain. Forests cover 7 percent of Earth’s surface, but net deforestation is occurring around the globe, especially in South America, Indonesia, and Africa.[33] The use of ever-larger areas of land and quantities of water for growth of dedicated “energy forests” also raises concerns about competition with food and fiber crops.

World electric power generation from biomass was about 405 TWh in 2013 from an installed capacity of 88 GW, with much of the growth driven by an expanding international trade in wood pellets (beyond a certain distance from the source, transporting wood pellets consumes more energy than the pellets will deliver).[34] Cogeneration or CHP plants can burn fossil fuels or biomass to generate electricity while also using their "waste" heat for space or water heating (biomass CHP is more efficient at producing heat than electricity, but it can be practical if there is a local source of excess biomass and a nearby community or industrial demand for heat and electricity). Most biomass generation plants are located in northern Europe, the United States, and Brazil, with increasing amounts in China, India, and Japan, and capacity has been growing at over 10 percent per year over the last decade.[35] However, biomass power plants are only half as efficient as natural gas plants, and they are limited in size by a fuel-shed of around 100 miles. Except in cases of long-distance trade in wood pellets, biomass availability is highly seasonal, and biomass storage is particularly inefficient, with high rates of loss due to degradation.

In its favor, biomass is well suited for use in small-scale, region-appropriate applications where using local biomass is sustainable. In Europe there has been steady growth in biomass CHP plants in which scrap materials from wood processing or agriculture are burned, while in developing countries CHPs are often run on coconut or rice husks. Burning biomass and biogas is considered to be carbon neutral, since, unlike fossil fuels, these operate within the biospheric carbon cycle, though the increased reliance on wood as fuel raises concern about the time lag between combustion of the wood and the pace of carbon reuptake in new growth.

While biomass is a renewable resource, it is not a particularly expandable one. Often, available biomass is a waste product of other human activities, such as crop residues from agriculture, wood chips, sawdust and “black liquor” from wood products industries, and solid waste from municipal trash and sewage. In a less-fossil-fuel-intensive agricultural system, such as may be required globally in the future, crop residues may be needed to replenish soil fertility and won’t be available for power generation. There may also be more competition for waste products in the future, as manufacturing from recycled materials increases.

Hydroelectric Power

Hydro dams have the potential to produce a moderate amount of additional, high-quality electricity in less-industrialized countries, but they are often associated with severe environmental and social costs. Particularly in northern Europe, hydropower already serves to balance the growing proportion of variable renewable electricity production, though hydropower itself can be subject to strong seasonal variations, which may be exacerbated by climate-change-induced changes in rainfall. Globally, there are many undeveloped dam sites with hydropower potential, though there are far fewer in the United States, where most of the best sites have already been dammed. With over 1000 GW of hydropower capacity installed globally,[36] the International Hydropower Association estimates that about one-third of the technical potential of world hydropower has already been developed.[37]

Geothermal

Geothermal energy is derived from the heat within Earth. It can be “mined” by extracting hot water or steam, either to run a turbine for electricity generation or for direct use of the heat itself. High-quality geothermal energy is typically available only in regions where tectonic plates meet, where volcanic and seismic activity are common, and where heat is fairly close to the surface. Currently, the only places being exploited for geothermal electrical power are ones where hydrothermal resources exist in the form of hot water or steam reservoirs. In these locations, hot groundwater is pumped to the surface from wells 2–3 km deep and is used to drive turbines. In theory, power can also be generated from hot dry rocks by pumping turbine fluid into them through boreholes that are 3–10 km deep. This method, called enhanced geothermal system (EGS) generation, is the subject of ongoing research and the construction of demonstration plants, and the first grid-connected commercial plant of 1.7 MW capacity came online in Nevada in 2013 as part of an existing geothermal field.[38] Because EGSs use fluid injection to open existing rock joints, there is some concern that this technology could generate earthquakes as an unintended side effect.[39] In general, early high hopes for EGS technology appear not to be panning out.

In 2013, world geothermal power capacity reached 12 GW with output rising to 76 TWh.[40] Annual growth of geothermal power capacity worldwide has slowed from 9 percent in 1997 to 4 percent in 2013.[41] Geothermal power plants produce much lower levels of emissions and use less land area than fossil fuel plants. However, technological improvements are necessary for the industry to continue to grow. Water can also be a limiting factor, since both hydrothermal and dry rock systems consume water.

There is no consensus on potential resource base estimates for geothermal power generation. Hydrothermal areas that have both heat and water are rare, so the large-scale expansion of geothermal power depends on whether lower-temperature hydrothermal resources can be tapped. A 2006 Massachusetts Institute of Technology (MIT) report estimated U.S. hydrothermal resources at 2400 to 9600 EJ, while dry-heat geothermal resources were estimated to be as much as 13 million EJ (as you’ll recall, the world currently uses over 500 EJ per year), but the U.S. Department of Energy (DOE) estimated in 2014 that technical advances needed to access the latter may still be 10–15 years from commercial maturity,[42] which may reflect inherent problems with EGS.

Biomass, hydro, and geothermal are probably the three best renewable sources available for baseload power, though there are others (notably tidal and wave generators, which currently produce only very small total amounts of electrical power). This book focuses on solar and wind as the main candidates for expansion of renewable energy because these are the sources with the most immediate capacity for growth. No doubt some combination of biomass, hydro, and geothermal, used as baseload and/or backup capacity, can help buffer the intermittency of solar and wind, but since these sources (with the possible exception of geothermal) have limited prospects for expansion, this could ultimately also limit the total amount of energy production capacity in an all-renewable future energy regime.

How about buffering the intermittency of solar and wind with . . . more solar and wind? After all, even if the weather is cloudy and still in a given location, it might be windy and sunny a few hundred miles away. This kind of capacity redundancy would require more grid interconnections and, of course, more solar panels and wind turbines. Since we couldn’t know far in advance which other regions would be likely to provide capacity redundancy for ours, we would need enough redundancy in several places, and enough transmission capability, to meet possible supply shortfalls. All of this adds to the system cost.

The actual experience of grid operators integrating solar and wind has led to an emerging consensus that the cost of integrating renewables will shoot up once solar and wind make up a very high percentage of grid power.[43] In the early stages of solar and wind build-out, it is fairly easy to incorporate new uncontrollable inputs into the grid because redundancy already exists in the form of coal, natural gas, nuclear, and hydro generation plants, which have plenty of capacity to balance out added variable renewable electricity and match it with demand—which is also variable. However, as total solar and wind input surpasses 30 percent of grid electricity, the costs of integration are likely to gradually increase. Past 60 to 80 percent, the need for storage and redundancy will likely explode. The goal of a near-100 percent renewable, grid-based electricity system is a subject of great controversy and research, but it remains theoretical, because no society has created one yet. The exceptions are a couple of small islands (El Hierro[44] in the Canary Islands and Samsø in Denmark), where wind and pumped hydro serve small populations, and Uruguay, which generates the great majority of its electricity from hydropower.

Scaling Challenges

If we're to achieve a 100 percent renewable electricity system soon enough to significantly mitigate climate change, we'll have to build fast. The good news is that solar and wind are already growing quickly, as we have seen. The bad news is that there appear to be some financial, energy, and environmental hurdles in the path toward scaling up these sources at the rates needed.

The energy transition will be expensive. While some estimates suggest that a renewable energy regime will be more affordable than a business-as-usual pathway dominated by fossil fuels (especially so once the climate and health impacts of the latter are taken into account),[45] it is doubtful that the business-as-usual pathway is itself affordable. And health and environmental costs avoided do not translate to money in the bank ready to be invested in alternative energy projects. Estimates of the total cost of moving to an all-renewable global electricity system are too preliminary to be exact, but are nevertheless expressed in the tens of trillions of dollars.[46] Where will the money come from? If the utility industry simply replaces coal, natural gas, and nuclear plants as they reach retirement age with solar, wind, geothermal, hydro, and biomass capacity, then most of the capital cost of the transition would come from the utility industry using its usual financing methods. But, again, to achieve the speed of transition needed, we would also have to retire fossil-fueled plants that are still well within their projected operating lifetime. That would imply higher rates of investment than the utility industry is accustomed to. Also, as the need for storage, capacity redundancy, and grid expansion and redesign increase, these will impose still more added costs.

Until recently, rapid expansion of solar and wind has relied on incentives, including rebates to homeowners installing PV systems and feed-in tariffs (long-term contracts to buy electricity from renewable energy producers, typically based on the cost of generation rather than existing market prices for electrical power). But those incentives are being reduced, eliminated, or thrown in doubt in countries such as Italy, Spain, and the United Kingdom, and in states such as Kansas and Arizona. While the falling costs of wind and solar are making them more directly competitive with incumbent fossil electricity sources, the loss of government financial support would slow the renewables transition.

A recent MIT study of the prospects for solar electricity found that, due to factors related to intermittency, “Even if solar PV generation becomes cost-competitive at low levels of penetration, revenues per kW of installed capacity will decline as solar penetration increases until a breakeven point is reached, beyond which further investment in solar PV would be unprofitable.”[47] Therefore further subsidies for renewables (or penalties for nonrenewables) would probably be required if this energy source is to be scalable to replace the bulk of fossil-fueled generation.

Financing for solar and wind generation is fundamentally different from that for coal and gas plants. In the former case, investment is almost entirely up-front; from then on the “fuel” is free and maintenance is relatively inexpensive. In the latter, the cost of building the generation plant is proportionally less, with ongoing fuel costs being factored into wholesale and retail electricity prices. There is an obvious advantage to solar and wind from an investment standpoint (no worries about fluctuations in fuel prices), but there is also a drawback: front-loading of investment means that the availability of low-interest credit plays a major role in making new wind and solar capacity affordable.

Incumbent coal and gas power plants have the advantage of a lower tax burden, as fuel costs can be deducted from taxable income, while solar and wind cannot benefit from this deduction. Property taxes can also be an issue for large solar and wind installations, which take up much more land per unit of generating capacity than fossil fuel plants.

Aside from these financial problems, there is also an energy hurdle to the rapid transition to renewable electricity. Just as the financial investment in solar and wind generators is front-loaded, so is the energy investment in their construction: from an energy perspective, these generators must “pay” for themselves over time. This means that, if lots of generation capacity is built too quickly, it may constitute an energy sink rather than a true net energy source until rates of installation begin to slow. A 2013 study by Benson and Dale at Stanford University, cited earlier, showed that solar PV generation capacity installed between the years 2000 and 2012 paid for itself in energy terms only toward the end of that period; this was due to the high rates of growth, the energy costs of panel production and installation, and the relatively low EROEI of PV.[48] Wind power, with its higher EROEI, is less subject to this problem; nevertheless, the principle still holds: if the rate of installation of an energy-generating technology whose energy costs occur almost entirely in the manufacturing stage is steep, then the net energy available from that technology during the ramp-up period will be significantly lower than the gross energy produced by the installed generators.

WEB Figure 3-4 Conceptual energy balance
Figure 3.4. Conceptual energy balance. This figure shows growth rate [%/yr] as a function of energy payback time (EPBT) [yrs] for a number of fractional reinvestment rates [%] (diagonal lines).
Source: Michael Carbajales-Dale, “Fueling the Energy Transition: The Net Energy Perspective,” GCEP Workshop on Net Energy Analysis at Stanford University, April 1, 2015.
WEB Figure 3-5 Wind energy payback period
Figure 3.5. Wind energy payback period.
Source: Michael Carbajales-Dale, “Fueling the Energy Transition: The Net Energy Perspective,” GCEP Workshop on Net Energy Analysis at Stanford University, April 1, 2015.
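The relationship plotted in figure 3.4 can be approximated in a few lines. Roughly speaking, a fleet growing at fractional rate g per year must reinvest a share of its gross output roughly equal to g times its energy payback time (EPBT) into building new capacity; when that product reaches 1, the fleet is a net energy sink. The growth rates and payback times below are illustrative assumptions, not Carbajales-Dale's data.

```python
# Approximate net-energy bookkeeping during a rapid build-out (a sketch of
# the relationship in figure 3.4, not the underlying model). The growth
# rates and energy payback times below are illustrative assumptions.

def reinvested_fraction(growth_rate_per_year, epbt_years):
    """Approximate share of gross output consumed building new capacity each year."""
    return growth_rate_per_year * epbt_years

for tech, growth, epbt in [("PV (illustrative)",   0.40, 2.5),
                           ("wind (illustrative)", 0.15, 1.0)]:
    f = reinvested_fraction(growth, epbt)
    verdict = "net energy sink" if f >= 1.0 else f"{1 - f:.0%} of gross output is net"
    print(f"{tech}: {verdict}")
```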

This also means that, from an energy standpoint, a rapid deployment of solar and wind generators will almost certainly be subsidized mostly by fossil fuels. Which in turn implies that, during at least part of the transition period, society will need more energy from fossil fuels than it is currently deriving—unless existing energy demand can be throttled down while a larger proportion of remaining fossil energy consumption is devoted to all the activities needed to build and deploy wind turbines and solar panels.

Another scaling challenge for solar and wind comes from the need for raw materials, including rare earth minerals for electromagnets in wind turbines and lithium for batteries. At current rates of installation this is not a significant barrier, but world supplies of these elements are limited and could constrain production; for example, at 10 percent annual growth in extraction rates, current lithium reserves would last a mere 50 years.[49]
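The "50 years" figure is an instance of a general calculation: cumulative extraction under compound growth is a geometric series, so a resource that would last on the order of a thousand years at today's extraction rate disappears in roughly half a century at 10 percent annual growth. The reserve-to-production ratio below is chosen to be consistent with the text's figure, not taken from a resource assessment.

```python
# Depletion time under compound growth in extraction: solve
#   reserves / production = ((1 + g)**n - 1) / g   for n.
# The reserve-to-production ratio is an illustrative value consistent with
# the ~50-year figure cited in the text, not a published estimate.
import math

def years_until_depletion(reserve_to_production_ratio, growth_rate):
    return math.log(1.0 + growth_rate * reserve_to_production_ratio) / math.log(1.0 + growth_rate)

print(round(years_until_depletion(1_000, 0.10)))   # ~48 years
```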

Questions about the technical potential of wind power pose yet another scaling challenge for renewable electricity sources. Early estimates of the potential ranged from ten to a hundred times current total world electricity generation capacity from all sources. Research by Adams and Keith notes “[w]ind resource estimates that ignore the effect of wind turbines in slowing large-scale winds may therefore substantially overestimate the wind power resource.”[50] However, other researchers dispute this claim.[51]

Then there are location issues. Older-design wind turbines near urban areas have been reported to create low-frequency noises that are disturbing to at least some people (this is not a problem with offshore turbines—at least not for humans).[52] Solar panels can often be unobtrusively sited on rooftops, but producing really substantial amounts of energy from PV or concentrating solar will require real estate. Already, large concentrating solar thermal projects in the deserts of the American Southwest are forcing tradeoffs with habitat for species such as the desert tortoise. In addition, large solar arrays in desert areas require periodic washing to remove dust and maintain high levels of efficiency, and concentrating solar thermal plants need water for cooling as well, in amounts that can be significant in such arid environments.[53]

Lessons from Spain and Germany

The world is still in the early phases of its renewable energy transition, but some clues about that transition’s future, and some lessons on how to optimize it, can be gleaned from the experience of countries that have gone the farthest and fastest. Spain (with 27.4 percent of its electricity derived from solar and wind in 2014)[54] and Germany (with about 30 percent of electricity from renewable sources, including hydro)[55] are two of the leaders in this regard, but their stories are very different. And their efforts have both supporters, who characterize the transition so far as a great success, and detractors, who paint it as an expensive failure.

The Iberian Peninsula is sunny and has large wind resources; further, in terms of grid connections with other nations, Spain is relatively isolated. These factors together make the Spanish experience with PV and wind an interesting test case. Spain's experience with the rapid introduction of renewables started in 1997, with strong early support for solar and wind. The government instituted a standard offer (feed-in tariff) policy requiring that utilities purchase electricity generated by renewables at premium rates. Power companies, including Acciona, Endesa, and Iberdrola, saw this as an opportunity to start building their own wind farms. Spain's renewables subsidies led to a nearly fortyfold increase in wind capacity over the next dozen years, to 16.7 GW in 2008.[56]

WEB Image 3-3 Spanish solar farm
The Solnova Solar Power Station near Seville, Spain. (Credit: Abengoa Solar, via Wikimedia Commons.)

In 2004, the Spanish government also instituted a generous feed-in tariff of 46 euro cents per kWh for solar. Again, investors rushed to cash in on this lucrative promise of long-term profits, and rates of solar installation soared. The government target for 2008 was 400 MW of new solar capacity; 3,500 MW was actually installed. During 2008, Spain installed more than 2.5 GW of PV capacity, nearly half of the global total that year. At the same time, subsidies supported the construction of nearly 2 GW of generation capacity from large solar thermal electric plants.
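A rough calculation gives a sense of the financial obligations these numbers created. The Python sketch below assumes an 18 percent capacity factor for Spanish PV (a figure not given in the text) and computes the gross annual tariff payout implied; it ignores the offsetting wholesale value of the electricity, so it is an illustrative upper bound rather than a measured cost.

```python
# Back-of-the-envelope gross feed-in-tariff payout for the PV built by 2008.
# Tariff and installed capacity come from the figures above; the capacity
# factor is an assumption.

installed_mw = 3_500           # PV capacity installed against a 400 MW target
capacity_factor = 0.18         # assumed average output as a share of nameplate
tariff_eur_per_kwh = 0.46      # 2004 solar feed-in tariff

annual_kwh = installed_mw * 1_000 * 8_760 * capacity_factor
annual_payout = annual_kwh * tariff_eur_per_kwh
print(f"~{annual_payout / 1e9:.1f} billion euros per year")   # ~2.5
```

Commitments on this order, locked in for decades, help explain why the tariff deficit described below became unmanageable once the economy crashed.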

Out of necessity, Spain pioneered the integration of large amounts of variable renewable electricity into the grid. The nation’s grid operator, Red Eléctrica de España (REE), had argued that it would be impossible to integrate wind power at more than 12 percent of total electricity demand. However, in 2006 REE built a centralized dispatch system and required all wind farms to connect to it. This was the first system of its kind in the world, and it enabled Spain’s wind power to grow to 20 percent of annual demand in 2014, providing over 60 percent of electricity at times of peak generation.[57]

But this rapid deployment of renewables meant the government was paying out more in subsidies than it had bargained for. An existing law that limited retail electricity rate increases required the government to make up the gap between the utility industry’s costs and its revenues. By 2009 this rate freeze was causing Spain’s utility system to run a deficit of 4 billion euros, with costs running roughly 20 percent above utility company revenues.[58] After the global economic crash of 2008, the Spanish government was simply unable to continue funding such deficits. The sitting center-left government reduced feed-in tariff rates; in 2012 its center-right successors froze renewable energy incentives and introduced a complicated new system that rewarded renewable energy producers even less.

Today Spain’s renewable energy transition is moving very slowly. In retrospect, the failures of the boom years can probably be chalked up to a combination of bad policy (retail prices that did not reflect the real cost of electricity, and subsidies with no upper limit) and bad luck in the form of the global financial crisis.[59]

Germany offers a more encouraging example. Its Energiewende, or energy transition, has historical roots reaching back to the 1970s, when popular skepticism of nuclear power and support for renewables were already decisive political issues. Like their Spanish colleagues, German policy makers believed that early subsidies for renewables would eventually lead to much lower prices for solar and wind—as they indeed have. But in Germany subsidies have been more consistently managed. Feed-in tariffs were instituted in 2004 and have been modified many times since. As of July 2014, subsidized rates for PV electricity ranged from 12.88 euro cents per kWh for small rooftop systems to 8.92 euro cents per kWh for large utility-scale systems.[60]

WEB Image 3-4 German wind farm
An Enercon wind farm in Lower Saxony, Germany. (Credit: Philip May, via Wikimedia Commons.)

Today in Germany, wind, solar, and biomass combined account for almost the same portion of net electricity production as brown coal (biomass makes up 39 percent of that combined total).[61] Peak generation from combined wind and solar reached 74 percent of total electricity production at one point in April 2014.[62] In terms of generating capacity, Germany reached its 2010 target for wind power in 2005, and its 2050 target for solar in 2012.

The 2011 Fukushima nuclear disaster in Japan led Germany’s government to rethink the nation’s reliance on nuclear power. Chancellor Angela Merkel announced the immediate, permanent shutdown of eight of the country’s seventeen reactors and pledged to close the rest by the end of 2022. As a result, the four largest German utility companies—all owners of nuclear power plants—have seen declining electricity output. Meanwhile the nation doubled down on its determination to develop renewable energy sources.

Germany has not only encouraged large-scale renewable energy systems but has also financed enormous numbers of distributed household- and community-sized generators. Six percent of German households were producing their own energy in 2014, and 20 percent said they aimed to do so by the end of the decade.[63] Compare this to California, where household solar ownership rates are about 1.2 percent.[64] Similarly, rather than relying only on grid-scale storage, Germany has created incentives for homeowners to add batteries to their residential PV systems.[65]

The Energiewende does have its detractors. A recent Wall Street Journal opinion piece noted, “Average electricity prices for companies have jumped 60% over the past five years because of costs passed along as part of government subsidies of renewable energy producers. Prices are now more than double those in the U.S. Yet nearly 75% of Germany’s small- and medium-size industrial businesses say rising energy costs are a major risk, according to a recent survey by PricewaterhouseCoopers and the Federation of German Industry.”[66] However, businesses are not fleeing the country as a result. In fact, it could be said that manufacturing is flourishing in Germany to a greater degree than in the United States, where electricity is so much cheaper: in 2012, industrial production made up 30.7 percent of the German economy, while it comprised only 20.6 percent of the U.S. economy.[67] Perhaps the biggest difference between critics and boosters of the energy transition is that critics assume that maintenance of the current largely fossil-fueled electricity system is a viable option, while boosters understand that, even with its challenges, the transition to an all-renewable energy economy is both necessary and inevitable.

What lessons can we take away from the examples of Spain and Germany? Subsidies for renewable electricity are still necessary, as are coordinated efforts to integrate and manage variable solar and wind inputs to the grid. These technical and economic issues are important, but perhaps less daunting than potential political roadblocks. As energy expert Craig Morris puts it, “The rapid deployment of large volumes of renewables requires both political will and a consistent policy.”[68] When a new government overturns strong renewable energy policies instituted by its predecessor, potential investors in wind and solar flee and may be reluctant to return. The nations that have had the most success with the renewable energy transition have implemented some form of feed-in tariff as a subsidy and have stuck with that basic strategy, adjusting tariffs as generation costs and other factors changed. Though solar and wind electricity prices have fallen significantly, it is difficult to imagine the renewables transition occurring at greater than old-plant replacement speed without such subsidies or incentives.

Pushback against Wind and Solar

The recent rapid growth of wind and solar has posed problems for utility companies. As more and more solar and wind generation capacity is installed—and this applies especially to rooftop solar—the utility companies’ current business model faces an existential threat. Solar panel owners benefit from electricity free of generation costs, while utility companies must still pay for grid maintenance and must now manage uncontrollable energy inputs that may have to be offset, shed, or stored, all of which costs money. The solar owner benefits; the utility pays.

Utilities are stuck with the bill for grid upgrades and grid-scale energy storage and, absent government subsidies, have no choice but to pass these costs on to customers in the form of higher rates. But then, facing higher grid rates, customers who can afford stand-alone solar systems may see it as being to their long-term advantage to go off grid. This hypothetical self-reinforcing feedback process has been called the utility death spiral.[69]
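To make the feedback concrete, here is a deliberately crude toy model in Python. Every parameter (fixed grid costs, customer numbers, usage, and defection behavior) is hypothetical; the aim is only to show the shape of the dynamic, not to model any real utility.

```python
# Toy model of the "utility death spiral": fixed grid costs are recovered from
# a shrinking customer base, so rates rise, which drives further defections.
# All numbers are hypothetical.

fixed_grid_costs = 1_000_000_000   # annual grid costs to be recovered
customers = 1_000_000              # grid-connected customers at the start
usage_per_customer = 10_000        # kWh purchased per customer per year
baseline_defection = 0.01          # 1 percent leave each year regardless
rate_sensitivity = 0.5             # extra defection per unit of rate growth

rate = fixed_grid_costs / (customers * usage_per_customer)  # currency per kWh
prev_rate = rate

for year in range(1, 11):
    # Defection combines a baseline drift (cheaper off-grid systems) with a
    # component driven by how fast rates rose over the previous year.
    rate_growth = (rate - prev_rate) / prev_rate
    defection = baseline_defection + rate_sensitivity * rate_growth
    customers = int(customers * (1 - defection))
    # The remaining customers must still cover the same fixed costs.
    prev_rate = rate
    rate = fixed_grid_costs / (customers * usage_per_customer)
    print(f"year {year:2d}: rate = {rate:.4f}/kWh, customers = {customers:,}")
```

Even with these mild assumptions the loop reinforces itself: each wave of defections raises rates, which accelerates the next wave.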

A 2010 study from the German Renewable Energies Agency concluded that nuclear power is inherently “incompatible with renewable energies.”[70] Because solar and wind generators require no fuel, their marginal cost of production is close to zero (their “levelized cost” includes repayment of capital, but that capital is already spent once a plant is built); as a result, renewable electricity is typically used as fully as possible whenever it is available (though policies such as renewable portfolio standards [RPSs] play a role in this regard as well). When renewable output is high relative to immediate demand, fossil-fueled and nuclear plants must be throttled back—but not all power plants can do this. Older nuclear and coal power plants that cannot be throttled back easily are therefore poorly suited to an electricity system with large and growing amounts of variable solar and wind power.
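The dispatch logic behind this squeeze can be sketched in a few lines. The example below is a minimal merit-order dispatch in Python; the plant names, capacities, and marginal costs are invented for illustration and are not drawn from any real grid.

```python
# Minimal merit-order dispatch: fill demand from the lowest marginal cost
# upward. Capacities are in MW; marginal costs are illustrative only.

plants = [
    {"name": "wind",    "capacity": 12_000, "marginal_cost": 0.0},
    {"name": "solar",   "capacity": 8_000,  "marginal_cost": 0.0},
    {"name": "nuclear", "capacity": 10_000, "marginal_cost": 10.0},
    {"name": "coal",    "capacity": 15_000, "marginal_cost": 30.0},
    {"name": "gas",     "capacity": 20_000, "marginal_cost": 50.0},
]

def dispatch(demand_mw):
    """Assign output to plants in order of increasing marginal cost."""
    remaining = demand_mw
    schedule = {}
    for plant in sorted(plants, key=lambda p: p["marginal_cost"]):
        output = min(plant["capacity"], remaining)
        schedule[plant["name"]] = output
        remaining -= output
    return schedule

# On a windy, sunny afternoon with moderate demand, the zero-marginal-cost
# renewables are taken first and the coal plant is left running at a third of
# its capacity, a level an older unit may not be able to ramp down to.
print(dispatch(demand_mw=35_000))
```

Real system operators face many constraints this sketch ignores (ramp rates, minimum stable output, reserves, transmission limits), but the basic ordering is why inflexible baseload plants are squeezed as variable renewables grow.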

In the United States, utility companies—especially ones with large investments in nuclear and coal—have begun a coordinated campaign whose first phase included a push for state laws raising prices for solar customers. This has largely failed in legislatures around the country, as solar energy has proven popular even with political conservatives. More recently, the effort has centered on public utility commissions, where utility industry representatives have pushed for solar fee hikes, including high monthly charges for net metering, which pays solar customers for electricity they feed into the grid.[71]

Costs to utility companies from the introduction of distributed solar PV are somewhat balanced by the fact that added solar capacity helps reduce the strain on electric grids on summer days when demand soars and utilities must buy additional power at high rates. Nevertheless, as more residential and business customers install their own PV systems, revenues to the utility industry are starting to decline.[72] Industry-sponsored studies warn that the trend could eventually lead to a radical transformation of energy markets, on a scale similar to the restructuring of the telecommunications industry following the advent of the Internet and cell phones.

One partial solution is to entirely separate the businesses of power generation and grid operation (a situation that largely already exists in many places). That way, grid operators can concentrate on dealing with the task of optimizing the electricity system for renewable inputs, while nuclear, coal, gas, solar, and wind generators battle among themselves for market share. In any case, there is obviously a need for planning and policy at the governmental level to smooth the transition as much as possible.

*          *          *

This rather lengthy chapter has explored issues surrounding the renewable energy transition in the electricity sector. It is in this sector that most of the growth in renewable energy has occurred so far. But we must not forget that only about 18 percent of final energy is consumed in the form of electricity globally (21 percent in the United States). As we have seen, even in this portion of the overall energy economy, substantial roadblocks to an all-renewable future remain (a very significant one, which we will address later, is the problem of embedded energy in the electricity sector—the energy used in building and manufacturing solar panels, wind turbines, storage devices, and the rest of the infrastructure that will make up the renewable electricity system of the future). The next two chapters explore nonelectricity uses of energy, which pose their own, often greater, challenges.