J.D. Power Quality Ratings: They’re the Industry Standard… But Do They Really Measure Quality?
Written by: Gary Witzenburg on September 26, 2012, at 6:00 AM
When you think of automobile quality, what comes to mind? To me, a high-quality car or truck is bolted together right and tight. It doesn’t squeak or rattle. It doesn’t show big gaps between panels, inside or out. It doesn’t malfunction and cost you a day at the dealership. It doesn’t leak fluids, shed parts, or break down and leave you stranded.
Thankfully, there are very few poor-quality new vehicles on the U.S. market today.
What quality does not mean to me is the subjective stuff; we’ll call it “pleasability” for lack of a better word. Do I like the look or feel of that material? Is the radio (or the controls for A/C, lights, or wipers) easy to see, reach, and operate? Is the seat adjustment where I like it? Those are ergonomic issues, not quality concerns.
And in this age of electronic driver aids and connectivity, quality is definitely not: How easy is it to pair my phone or download my tunes? How big are my infotainment screen’s touch-pads, and how quickly does the screen respond? How well does my car’s voice-recognition system understand what I’m trying to say… especially when I get frustrated and yell at it?
Yet these are exactly the kinds of pleasability issues that are shaping today’s quality ratings. “Cars and trucks have never been built better, but frustration with audio, infotainment, and navigation features on new vehicles has never been worse,” wrote Automotive News senior editor Jesse Snyder in the industry weekly’s June 25 issue.
Looking at the 2012 J.D. Power and Associates’ Initial Quality Study (IQS), the industry standard for measuring and comparing vehicle quality as reported by owners after their first three months of ownership, Snyder observed: “For the first time, complaints about such features surpassed those about engines and transmissions as the top category.” And, he pointed out, “… half the problems reported by vehicle owners after 90 days were design related – things that are confusing or hard to use rather than faulty or broken.”
The unlucky poster child for this is Ford, which was the top-scoring non-luxury brand in Power’s 2010 IQS. But Ford fell to 23rd of 34 brands in 2011 and to 27th in 2012. Have Ford’s vehicles suddenly started falling apart or breaking down? No. They are at least as good as, and probably better than, the 2010 models. The major difference is MyFord Touch, the touch-screen navigation/infotainment system launched in 2011 (a version of which is pictured at top), which many owners have not liked.
While Power’s research touches every automaker and many other companies around the world, it is best known for its annual surveys of automotive quality and dependability. And while these yield meaningful data…they are also subject to misinterpretation by the media and, by extension, the public. Most media understand simple comparisons: Vehicle A beats Vehicle B; Brand C beats Brand D. But such comparisons can be misleading if you don’t understand the data.
One important factor is that Power surveys don’t differentiate between major and minor problems, or between things that malfunction or break and things that dissatisfy. Engine and transmission failures count the same as wind noise, squeaks and rattles; poor fuel economy the same as poor fits. If this is a quality survey, shouldn’t it tally only quality complaints?
In J.D. Power’s official opinion, these are quality complaints. “When we ask consumers what they think is meant by quality, they include not only absence of defects and malfunctions but also what we would consider perceived quality, quality of materials, and very much also design quality,” says JDP automotive research vice president Dave Sargent. “Not only is the component built the way it was designed, but was it designed right in the first place.
“The purpose of the Initial Quality Study is to measure quality problems as defined and reported by consumers. So, because consumers define quality broadly, we take the fairly broad approach that if a consumer considers something a problem, it is counted as a problem…whether it be a blown transmission or a wind noise or a squeaking panel, they all count the same, for a number of reasons,” Sargent continued. “For one, it’s impossible to get a consensus as to what one problem is worth relative to another. And traditional defects, things that break, almost always get fixed fairly quickly by the dealer at little or no expense to the consumer. But design problems such as wind noise or a hard-to-use navigation system, the dealer has little chance of fixing, so the consumer has to live with those for the lifetime of the vehicle.”
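In other words, IQS scores are unweighted tallies: every reported problem counts exactly once. To make that concrete, here’s a toy illustration of my own (Power’s actual survey instrument and tabulation are its own, and far more involved); it simply counts reported problems and scales to 100 vehicles:

```python
# Toy illustration of an unweighted PP100 tally (hypothetical data).
# Every reported problem counts exactly once, whether it's a blown
# transmission or a wind noise.
reports = [
    ["blown transmission"],             # owner 1
    ["wind noise", "squeaking panel"],  # owner 2
    [],                                 # owner 3: no problems
]
pp100 = 100 * sum(len(r) for r in reports) / len(reports)
print(pp100)  # 100.0 problems per 100 vehicles
```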
When JDP’s 2012 IQS results were released, the press release pointed out that the industry’s overall initial quality had improved by five problems per 100 vehicles over 2011. “There are year-over-year gains in most areas of initial quality,” it said, “with one notable exception – audio, entertainment, and navigation problems have increased by eight percent from 2011. This continues a recent trend, as problems in this category have increased by 45 percent since 2006, while other categories have improved 24 percent, on average.”
Another problem is that almost all automakers have gotten so good that the rankings among brands, and the gaps between them, are far less meaningful than they once were. For example, with the 2012 IQS Industry Average at 102 problems per 100 cars (PP100), the eight brands placing just below that average, in 16th through 23rd places — Audi, Buick, Hyundai, Kia, Lincoln, Volvo, Subaru, and Jeep — scored between 105 and 110 PP100. So Jeep owners reported just five more problems per 100 cars (0.05 per car) than Audi owners did, a difference that is statistically insignificant.
Of the 15 brands above that average, Lexus, Jaguar, and Porsche placed 1st through 3rd at 73, 75 and 75 PP100, followed by Cadillac, Honda, Acura, Infiniti, and Toyota at 80, 83, 84, 84 and 88. Does that mean that Cadillacs (4th place) are better than Toyotas (8th) because their owners reported 0.08 fewer problems per car? Or that Mercedes-Benz vehicles (9th at 96 PP100) are better than Chevrolets (15th at 100)? Between Benz and Chevy are BMW, Mazda, GMC, Nissan, and Ram, all clustered between 97 and 99 PP100.
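To put a number on “statistically insignificant,” here’s a rough back-of-the-envelope check. It treats each owner’s problem count as Poisson-distributed and assumes a hypothetical 2,000 respondents per brand (Power doesn’t publish per-brand sample sizes, so this is strictly illustrative). Under those assumptions, the five-PP100 gap between Audi and Jeep is well within sampling noise:

```python
import math

def pp100_z_test(pp100_a, pp100_b, n_a, n_b):
    """Approximate two-sided z-test for a difference in PP100 scores,
    treating each owner's problem count as Poisson-distributed."""
    lam_a, lam_b = pp100_a / 100.0, pp100_b / 100.0  # problems per vehicle
    se = math.sqrt(lam_a / n_a + lam_b / n_b)        # std. error of the gap
    z = (lam_b - lam_a) / se
    p = math.erfc(abs(z) / math.sqrt(2))             # normal approximation
    return z, p

# Hypothetical sample sizes: Audi at 105 PP100 vs. Jeep at 110 PP100.
z, p = pp100_z_test(105, 110, 2000, 2000)
print(f"z = {z:.2f}, p = {p:.2f}")  # z ≈ 1.52, p ≈ 0.13: not significant
```

With gaps that small relative to sampling noise, the mid-pack ordering is essentially a coin flip.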
Much more meaningful to me are Power’s Vehicle Dependability Surveys (VDS), which track customer-reported problems after three years of ownership. Thus the 2012 VDS results cover vehicles built in 2009, the year the economy turned upside down, the U.S. auto industry nearly died, and both GM and Chrysler went through government-guided bankruptcies.
Despite all that, Power’s 2012 VDS reported “historically high levels of vehicle dependability.” Lexus (86 PP100), Porsche (98), and Cadillac and Toyota (tied at 104) topped the charts, while Scion, Mercedes, Lincoln, Ford, Buick, Hyundai, Acura, and Honda placed 5th through 12th at 111 through 131. Just below the Industry Average of 132 PP100 were Chevrolet at 135, Volvo at 143, and Audi at 148. Does that mean 2009 Chevrolets were better than Audis and much better than BMWs (20th at 154 PP100) and Volkswagens (tied with Kia for 25th at 169)?
Providing a completely different take is AutoPacific, a Tustin, California, research, forecasting, and consulting firm that annually surveys tens of thousands of owners on how much they like their cars and trucks. “These studies are very different from Power’s,” says AP president George Peterson. “JDPA is measuring things gone wrong. We are measuring satisfaction and product execution…very different things.” AP’s top Vehicle Satisfaction Awards for 2012 went to Mercedes, Cadillac, and Lincoln (1st, 2nd, and 3rd among luxury brands) and to mainstream brands Buick, GMC, and Chrysler.
What about Consumer Reports? It provides useful information, but it’s hardly gospel. CR surveys its own faithful readers rather than, like JDPA, AP, and others, a scientifically selected random sample of vehicle owners. So CR tells its readers what to think and what to buy, then turns around and asks them, “What did you buy, and what do you think?” CR also has a troubling habit of predicting the reliability of completely new vehicles (with no reliability record) from the records of their predecessors. Thus CR predicted the 2011 Chevy Cruze would have the same (less-than-great) reliability as the Cobalt it replaced. Same for the all-new Ford Focus. It’s not only unfair; it can also be flat-out wrong.
My advice: check the surveys, comparison-drive the cars, and then decide for yourself.