The new LBNL value report has been placed with many media outlets, as part of the follow-up public relations campaign to dismiss value impacts as a mere “concern”, while doing little to address the very real problem. I have received many requests for comment on the latest LBNL effort. A thoughtful review of the claims stated therein is warranted, and my preliminary remarks follow.
First, the August 2013 LBNL report's conclusions should not be relied on for any purpose other than demonstrating that statistics can be marshaled to support a biased position; it is far from an empirical value study. There was certainly enough data to perform a study incorporating the accepted methodology of paired sales and/or resale analysis, with careful attention to marketing times and other value-influencing factors. But LBNL once again ignored the primary data source for residential values: the Multiple Listing Service (MLS) active in any given study area. (Marketing times do not appear in Assessor data.)
Once you read through all the scientific-sounding discussion and the internally supported citations (Hoen citing Hoen, for example), I recommend that you refer to the last sentence of paragraph 2 on report page 5, wherein the authors disclose an apparent bias as follows: “Therefore for the purposes of this research we will assume 3-4% is a maximum possible effect.”
Table 7 arrays the data and reveals the impact on a factual basis: it presents the empirical evidence before the sale price data was “crunched” to obtain the stated results. On a side note, the report's focus is on claims of statistical significance, not on value impact.
Regardless of terminology or focus, the fact is that the raw data shows a post-construction negative impact of 28% for homes less than 1 mile from turbines versus homes in the 3-10 mile range.
Second, the methodology utilized in the LBNL analysis is not an accepted, proven regression model. It pools data from 67 different projects in 27 counties in 9 states and simply cannot be deemed reliable, because of the wide value variations that exist between these local markets. On the contrary, pooling ensures that the variables vary so widely that any impact measured after running the data through their hedonic “model” cannot be identified at the level needed to establish statistical significance (see Al Wilson, Rubber Rulers).
In contrast, the raw data is substantively significant (see “Three Types of Significance”): it shows the real market reaction, without any assumptions, alterations, or adjustments to the numbers, and without the built-in bias (i.e., the 3% to 4% maximum impact).
Further, given that the Table 7 data includes 1,198 sales located within 1 mile of turbines and covers 67 different projects, it seems quite clear that there is a high level of causal significance: a direct relationship exists between distance and value impact, and these market reactions are commonly repeated throughout the USA.
The Rubber Rulers paper by Al Wilson addresses the numerous problems with the 2009 LBNL analysis, and it appears that most, if not all, of these problems are replicated in the 2013 LBNL report. (Note: Wilson is a professional appraiser who literally wrote the book on environmental impacts on property values, along with other value-impairment research, statistical-standards studies, etc.)
What non-appraisers (i.e., the LBNL authors) refer to as “anecdotal” evidence, in their attempts to dismiss actual examples of value loss, is what appraisers refer to as Comparable Sales and other market evidence. From an appraisal perspective, the 1,198 sales within 1 mile that show a 28% loss of value are meaningful, but actual local examples are potentially a higher level of proof. Realtors such as Annie Cool are in the trenches on this issue, and consistently find that buyers will not pay prices at “no turbine proximity” value rates, and that most buyers simply will not even make an offer on homes near turbines. Homes that sit on the market for extended periods face downward pressure on list price, and while some owners elect to pull the property off the market rather than accept a large (or total) loss of equity, others end up selling for whatever the “market” will pay. The 1,198 sales represent a sample of the latter group, and again, show a 28% lower value, on average.
These types of marketing facts are completely ignored in the LBNL studies, which is a major failing of the academic approach to addressing this issue. My own recent study of 13 paired sales in Illinois found marketing times within 1 mile to be exactly 1 year longer than those of competing homes (paired sales) located an average of 10+ miles away from the turbine projects. (All 2012 sales near turbines were paired with one or two sales farther away that would otherwise be considered competing homes while on the market.)
As an appraiser, I would prefer to have 5 good “comps” than 50,000 meaningless data points. In fact, many billions of dollars in mortgage loans are made across the country on the basis of 3 to 5 good “comps”. But no mortgages or sales will be based on the LBNL report…it is irrelevant for real world purposes.
Third, Assessed Values may or may not be accurate. My experience dictates that AV data is not reliable for establishing the value of a given property. It is merely a method of spreading the taxes levied, on a supposedly uniform and ad valorem basis, across the properties in a given jurisdiction. AVs may have decreased in some locations while increasing in others, but it seems fairly consistent that Assessors' Offices are not being compelled to seriously address local impacts from turbines. I have been told by one Assessor that “since none of the properties nearby sold, there is no basis for reducing the values.” This statement was made despite the fact that several of the properties in question had been extensively marketed and could not elicit even a single offer at ANY price.
Remember, Assessors have a different job than independent appraisers: Assessors are required to assess properties uniformly, whereas appraisers are required to value each property individually, with attention to all relevant factors that affect value.
Stocks are a far more “liquid” asset than real estate; a stock can typically be sold in minutes, or days at most. Real estate marketing times are an important component in setting values, and when the asset loses all or virtually all of its liquidity, that is indeed a significant value impact. One might look to foreclosure sale data to support the discount needed to attract a buyer of a “problem” property within a time frame that preserves reasonable liquidity.
Finally, the LBNL study is completely inappropriate as “evidence” for developers to submit to zoning boards considering turbine SU permit applications. Typical Zoning Bylaw standards for approval require a finding that the “project” will not have any adverse impact on neighboring property values (as well as the public health, safety & welfare). The LBNL report does not address these standards of approval, but instead attempts to recast the question as whether turbines have a far-reaching, uniform, and statistically significant impact. Home sales that are not in the neighborhood of turbines are irrelevant, as is whether the impact is widespread or uniform. Zoning regulations require denial of applications when there will be ANY significant adverse impact on neighboring values, and do not grant any development the right to diminish the value of neighboring property. Nor do the regulations typically require a neighboring or abutting property owner to be “pooled” with the level of impact for an entire Town or State. Zoning regulations are intended to preserve values (etc.), not to create sacrifice zones to accommodate wind energy development.
I trust this will help save a little time in addressing the recycled claims of no value impact from turbines. But if you choose, ignore all opinion and just go to the facts:
1,198 sales within 1 mile of turbines demonstrate a 28% lower value, and the data provides a compelling basis for finding a causal relationship between distance and impact.
Michael S. McCann
McCann Appraisal, LLC
500 North Michigan Avenue
Chicago, Illinois 60611
Update: Forensic economics expert witness used unreliable methodology to give speculative opinion: First Circuit
In a recent ruling in Smith v. Jenkins, the United States Court of Appeals for the First Circuit held that the testimony of the Plaintiff's forensic economics expert witness had been improperly admitted by the Massachusetts District Court, and sent the case back for a new trial on damages. Not only was the expert's methodology found questionable; his testimony on damages from loss of credit expectancy was also found to be speculative.
Background of the case
The suit arose out of a fraudulent real estate mortgage scheme that involved inducing an illiterate schizophrenic trash collector named Robert Smith, the Plaintiff in this case, to act as a straw buyer for two overvalued residential properties in Massachusetts. Plaintiff then sued various entities and individuals involved in the transactions, including the mortgage lenders, mortgage brokers, real estate brokers, and closing agents. A jury returned a verdict largely favorable to Plaintiff on his claims of fraud and breach of fiduciary duty, and the district court doubled and trebled certain damages pursuant to the Massachusetts Consumer Protection Statute. Two of the Defendants, real estate brokerage firm Century 21 Dorchester Real Estate, Inc. and mortgage broker New England Merchants Corporation, appealed the unfavorable verdict on multiple grounds. Century 21 also challenged the district court's denial of its motion to preclude the testimony of Plaintiff's damages expert witness, a forensic economist by profession. Plaintiff cross-appealed.
Testimony of the forensic economics expert witness
The forensic economics expert witness testified that Plaintiff suffered three types of damages: (1) the loss of enjoyment of life or “hedonic damages”; (2) the loss of credit expectancy as a result of two foreclosures on his record; and (3) the loss of time expended dealing with the consequences of the fraud scheme.
As for the hedonic damages, the expert employed a method for valuing life known as the “willingness-to-pay” model and concluded that Plaintiff was entitled to $219,900 for his loss of enjoyment of life. He then doubled Plaintiff's reported income of $41,000 and multiplied it by 13% per year for seven years to arrive at a figure of $87,850 for the loss of credit expectancy. Finally, assuming Plaintiff spent at least half an hour a day for five years seeking to resolve foreclosure-related issues, the damages expert valued that time by calculating how much Plaintiff would have spent had he hired an administrative assistant to handle those issues on his behalf, and concluded Plaintiff was entitled to an additional $22,729.
Ruling by the First Circuit Court
“From the evidence in the record” the United States Court of Appeals was “unable to conclude that the district court sufficiently evaluated the admissibility of [the expert’s] testimony.” The Court observed that there was nothing to indicate that a Daubert analysis was conducted by the district court as the judge denied Defendants’ motion to preclude the expert’s testimony as unreliable and unhelpful to the jury without any further comments. Circuit Judge Jeffrey R. Howard added that though this was not to suggest that the experienced trial judge ignored Daubert, “given the record-based limitations under which we are required to operate, the absence of any findings or discussion on the record leaves us hard-pressed to conclude that the district court adequately fulfilled its gatekeeping role in admitting [the expert’s] testimony.”
Much like the Seventh Circuit (in Mercado v. Ahmed, 974 F.2d at 871), the First Circuit also had serious doubts about hedonic damages being a reliable measure of the value of life or assisting the jury. Hence the Court held that the forensic economics expert's method for valuing life was based on assumptions “that appear to controvert logic and good sense.”
The testimony on loss of credit expectancy was also held to be speculative and inadmissible under Rule 702: there was no evidence that Plaintiff tried, or intended, to borrow anything close to the expert's specified amount, nor any evidence of his income at the time, a crucial variable in the expert's formula.
The Court concluded that the erroneous admission of the forensic economics expert witness testimony was not harmless, and reasoned that “the court’s jury instruction on damages must have affected the jury’s determination.” The Court further added that “indeed, while the $260,000 that was awarded by the jury was lower than [the expert’s] $330,000 estimate, we can discern no basis in the record other than [the expert’s] testimony for an award that was 79% of the amount that he estimated. Given the instruction and the damages awarded, we cannot say with any degree of assurance that the award was not substantially swayed by the erroneous admission of [the expert’s] testimony.”
The case was remanded for a new trial on damages.
This article is the work of the author(s) indicated. Any opinions expressed in it are not necessarily those of National Wind Watch.