Friday, 16 December 2016

Crashing motorcycles - causes.


An American study of the causes of motorcycle crashes. It comes from the Insurance Information Institute, whose role is "Improving public understanding of insurance and how it works." That makes sense, as most severe-injury crashes and all fatal crashes are reported and end up involving claims.

http://www.iii.org/issue-update/motorcycle-crashes

"Alcohol use: According to NHTSA, in 2014, 29 percent of motorcycle riders who were involved in fatal crashes had a blood alcohol concentration (BAC) of 0.08 percent or over (the national definition of drunk driving), up from 27 percent in 2013 and 28 percent in 2012. This compares with 22 percent of passenger car drivers and light truck drivers involved in fatal crashes, and with 2 percent of large truck drivers.
In 2014, fatally-injured motorcycle riders between the ages of 35 to 39 had the highest rate of alcohol involvement (42 percent), followed by the 40 to 45 age group (41 percent).
In 2014 motorcycle riders killed in traffic crashes at night were almost three times more likely to have BAC levels of 0.08 percent or higher (46 percent) than those killed during the day (15 percent).
The reported helmet use rate for motorcycle riders with BACs at or over 0.08 percent who were killed in traffic crashes was 51 percent in 2014, compared with 67 percent for those who did not have any measurable blood alcohol.
Speeding: In 2014, 33 percent of all motorcycle riders involved in fatal crashes were speeding, compared with 20 percent for drivers of passenger cars, 17 percent for light truck drivers and 7 percent for large truck drivers, according to NHTSA.
Licensing: Twenty-eight percent of motorcycle riders who were involved in fatal crashes in 2014 were riding without a valid license, compared with 13 percent of passenger car drivers.
By Type of Motorcycle: According to a 2007 report from the Insurance Institute for Highway Safety (IIHS), riders of “super sports” motorcycles have driver death rates per 10,000 registered vehicles nearly four times higher than those for drivers of other types of motorcycles. Super sports can reach speeds of up to 190 mph. The light-weight bikes, built for racing, are modified for street use and are popular with riders under the age of 30. In 2005 these bikes registered 22.5 driver deaths per 10,000 registered vehicles, compared with 10.7 deaths for other sport models. Standards and cruisers, and touring bikes (with upright handlebars) have rates of 5.7 and 6.5, respectively, per 10,000 vehicles. In 2005 super sports accounted for 9 percent of registrations, and standards and cruisers made up 51 percent of registrations. Among fatally injured drivers, the IIHS says that drivers of super sports were the youngest—with an average age of 27. Touring motorcycle drivers were the oldest, 51 years old. Fatally injured drivers of other sports models were 34, on average; standard and cruiser drivers were 44 years old. Speeding and driver error were bigger factors in super sport and sport fatal crashes. Speed was cited in 57 percent of super sport fatal crashes in 2005 and in 46 percent for sport model riders. Speed was a factor in 27 percent of fatal crashes of cruisers and standards and 22 percent of touring models."

To sum up: being under the influence of alcohol, lack of experience, and the power and type of bike are all high-risk factors. Back in the UK, bikelawyer.co.uk points to a single statistic:

 http://www.bikelawyer.co.uk/bike-accident-statistics

"The majority of motorcycling accidents occurred at junctions(45%) and a ‘failure to look properly’ was the most frequently cited cause of all accidents on all road types."

There is further information:

"The majority of motorcyclist fatalities (70%) took place on rural roads, with motorway accidents accounting for only 1% of motorcyclist fatalities and 2% of serious injuries.
69% of all accidents involving injury to a motorcyclist took place at a junction, the vast majority of accidents involved one other vehicle (70%), with the other vehicle involved most likely (79%) to be a car."

So take care at junctions and on rural roads, where junctions are more likely to be passed at higher speeds.


Monday, 2 April 2012

Are blind tests bogus? Examples of blind tests with positive results.

Some claim that blind tests are bogus because they are designed to create a fail, cannot be passed, or involve trickery or deception.

I do not think that is true because:

- they can be passed, such as blind tests of speakers and bit rates. Use the same test with cables and the result is a fail. That is because there is no difference between cables, but there is one between speakers and bit rates. If the test itself was designed to fail, why does it not fail with speakers?

- I do not like the blind tests that have been done where people are told they are listening to different cables when in fact there has been no change. It is interesting, but again a bit dubious when something like a wire coat hanger is slipped into a cable test without anyone's knowledge. My preference is the simple two-cable test: let the subject see and hear both in action. Then, once they are blinded, use one and say, 'I may or may not change to the other cable now; please say if you can hear a difference.' Then repeat that about 20 times (see the scoring sketch below). Where is the deception in that?
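
To make the scoring concrete, here is a minimal sketch in Python of how such a session could be judged (the 20-trial count and the example scores are just the illustration above, not any standard): count the correct calls and work out how likely that score is from pure guessing.

```python
from math import comb

def guess_probability(correct: int, trials: int) -> float:
    """Chance of getting at least `correct` of `trials` right
    by coin-flip guessing (one-sided exact binomial tail)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# For a 20-trial session: 12/20 is well within luck (p ~ 0.25),
# while 15/20 would only happen by guessing about 2% of the time.
for correct in (12, 15, 20):
    print(f"{correct}/20 correct: p = {guess_probability(correct, 20):.3g}")
```

On that arithmetic a listener who genuinely hears a difference should clear 15 or so out of 20 comfortably, while a guesser almost never will.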

You can also decide if you have 'golden ears' here:

http://www.audiocheck.net/blindtests_index.php

So here are blind tests that have positive results where people could hear a difference.

1 - A blind test of speakers, passed by the subject. Interestingly, the subject failed to identify different crossovers, one more expensive than the other.

http://www.audioholics.com/news/editorials/axiom-blind-listening-test

2 - A test of amps with 505 participants. EDIT - it is debatable whether this is actually a pass or not.

http://www.stereophile.com/features/113/index.html

3 - Power amp blind tests, two of which are positive.

http://home.provide.net/~djcarlst/abx_data.htm

4 - A Head-Fi test by member Pio2001 between a Marantz integrated amp and a Pro-ject headphone amp.

http://www.head-fi.org/forum/thread/429619/headphone-outputs-lots-of-measurments-and-one-abx

5 - An interesting Boston Audio Society article about two tests. The tweaked CD test is a fail, but read on and an amplifier blind test is a pass.

http://www.bostonaudiosociety.org/bas_speaker/wishful_thinking.htm

6 - A Hydrogen Audio test of different gauge speaker cables.

http://www.hydrogenaudio.org/forums/index.php?showtopic=14082&st=25

7 - PSB speaker blind test; the top-of-the-range speaker won.


http://www.psbspeakers.com/audio-topics/Birthplace-of-Good-Sound

8 - ABX Comparator. A series of blind tests of different kit and cables.

Starting with the cables, differences were found with video cables over very long runs of 100 feet in comparison to a 6 foot one.

http://home.provide.net/~djcarlst/abx_vid.htm

Then interconnect and speaker cables, five tests and no differences found.

http://home.provide.net/~djcarlst/abx_wire.htm

A speaker test with a very large sample found 97% could tell the difference

http://home.provide.net/~djcarlst/abx_spk.htm

CDPs and a DAC did less well

http://home.provide.net/~djcarlst/abx_cd.htm

Power amps did a bit better

http://home.provide.net/~djcarlst/abx_pwr.htm

But the likes of distortion, filters and a small change in volume were very noticeable.

http://home.provide.net/~djcarlst/abx_data.htm

9 - A Matrix Hifi test between two amps where two testers each got all 30 trials correct, 60 correct answers in all (in Spanish, Google Translate used; see the quick probability check below).

http://translate.google.com/translate?js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&sl=es&tl=en&u=http%3A%2F%2Fwww.matrixhifi.com%2Fcontenedor_trivsclat.htm&act=url
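
For a sense of how strong that result is, here is a quick back-of-the-envelope check of the odds of doing it by guessing:

```python
# Chance of one tester guessing all 30 two-way trials correctly:
one_tester = 0.5 ** 30           # ~9.3e-10, about 1 in a billion
# Chance of two independent testers both doing so:
both_testers = one_tester ** 2   # ~8.7e-19

print(f"one tester : {one_tester:.1e}")
print(f"both       : {both_testers:.1e}")
```

If the two amps were truly indistinguishable, 60 correct answers out of 60 is effectively impossible by luck, so either the protocol leaked information or the difference was genuinely audible.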

10 - 16-bit vs 24-bit, Gearslutz.com forum, 9 out of 10 correctly identified

http://www.gearslutz.com/board/so-much-gear-so-little-time/358301-24-vs-16-bit-not-audible.html

11. Sampling rate, 44.1 kHz vs 88.2 kHz, AES May 2010

http://www.aes.org/e-lib/browse.cfm?elib=15398

Thursday, 23 February 2012

Hifi Miscellany

Testing Sites

Tom Nousaine's test site

http://www.nousaine.com/nousaine_tech_articles.html

Ethan Winer's test site

 http://www.ethanwiner.com/index.htm


NwAvGuy's test site/blog

http://nwavguy.blogspot.com/


The Audio Critic, critical writing and testing of hifi

http://www.theaudiocritic.com/

General bibliography of hifi testing

http://www.geocities.ws/jonrisch/c4.htm


Audibility

Audiocheck, check out your hearing and see what really is audible

http://www.audiocheck.net//

What can we hear?

http://www.silcom.com/~aludwig/EARS.htm#Subjective_vs_objective

Listening test of distortion

http://www.klippel.de/listeningtest/lt/


Jitter

http://aoselectronics.com/jitter_article.html

Testing of audibility of distortion caused by jitter

http://www.jstage.jst.go.jp/article/ast/26/1/50/_pdf

The Audio Critic on jitter

http://www.theaudiocritic.com/back_issues/The_Audio_Critic_21_r.pdf

BBC Report on jitter

http://downloads.bbc.co.uk/rd/pubs/reports/1974-11.pdf

TNT audio on what is jitter?

http://www.tnt-audio.com/clinica/jitter1_e.html

Hydrogen Audio forum and a thread about the audibility of jitter

http://www.hydrogenaudio.org/forums/index.php?showtopic=51322&st=25


Benchmark Audio and their claims about jitter

http://www.benchmarkmedia.com/appnotes-d/jittercu

Stereophile and their claims about jitter

http://www.stereophile.com/reference/1093jitter/#



Testing of claims about cables

St Andrews Uni, testing skin effect

http://www.st-andrews.ac.uk/~www_pa/Scots_Guide/audio/skineffect/page1.html


HDMI

CNET on there being no difference between HDMI cables

http://news.cnet.com/8301-17938_105-20056502-1/why-all-hdmi-cables-are-the-same/

Test of HDMI cables

http://www.expertreviews.co.uk/home-entertainment/1282699/hdmi-investigated-are-expensive-cables-a-scam/3

Another test with video games

http://www.eurogamer.net/articles/digitalfoundry-vs-hdmi

One with lots of cables

http://www.audioholics.com/education/cables/long-hdmi-cable-bench-tests/hdmi-cable-testing-results


ABX testing

http://www.hopkins-research.com/abx.htm


Tests of cables

http://sfguitarworks.com/?p=1521


Spurious cable company claims

Atlas on directionality http://www.atlascables.com/fine-tuning.html

Sunday, 15 January 2012

Self defence with a gun - how much is a myth?

This is from a discussion with fellow sceptics about the prevalence of guns in the US and the right to bear arms.

Defensive gun use (DGU) is often cited as a reason to have a gun at home or on your person. So, being a sceptic, I went to have a look at how many times DGUs have taken place and to what extent DGUs are a good reason to have a gun.

One side claims many successful uses of self-defence. For example, from the Saint Louis University Public Law Review in 1999, "Guns and Justifiable Homicide":

http://saf.org/LawReviews/SouthwickJr1.htm

"However, if each execution and each justifiable homicide results in 7.5 fewer murders, the total of 697 justifiable homicides each year should have deterred over 5,200 murders each year. Compared with the approximately 21,500 [Page 221] murders actually occurring each year as shown in Table 2,[17] this implies that the murder rate would have been about 24 percent higher without these justifiable homicides. The civilian justifiable homicides averaged 299 per year, which should have saved over 2,200 murders per year."

That is very speculative and indeed the author admits "Of course, the above argument relies on some very strong inferences which may not be valid."
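
The arithmetic itself is easy to reproduce; what carries the whole conclusion is the 7.5 "murders deterred per justifiable homicide" multiplier. A quick check of the quoted figures:

```python
# Figures as quoted from Southwick; the 7.5 multiplier is the
# speculative input that everything else depends on.
multiplier = 7.5
total_justifiable = 697      # justifiable homicides per year
civilian_justifiable = 299   # the civilian subset
annual_murders = 21_500

print(f"claimed murders deterred:  {total_justifiable * multiplier:,.0f}")      # ~5,227
print(f"of which by civilians:     {civilian_justifiable * multiplier:,.0f}")   # ~2,242
print(f"implied rise without them: {total_justifiable * multiplier / annual_murders:.0%}")  # ~24%
```

So the quoted numbers are internally consistent, but they all hang on that single unvalidated multiplier, which is exactly the "very strong inference" the author concedes may not be valid.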

The other side, such as the Harvard School of Public Health, disagrees with the above

http://www.hsph.harvard.edu/research/hi ... index.html

and states

"We then try to validate the claims of many millions of annual self-defense uses against available evidence.
Major findings: The claim of many millions of annual self-defense gun uses by American citizens appears to be invalid."

"We analyzed data from two national random-digit-dial surveys conducted under the auspices of the Harvard Injury Control Research Center.
Major findings: Criminal court judges who read the self-reported accounts of the purported self-defense gun use rated a majority as being illegal, even assuming that the respondent had a permit to own and to carry a gun, and that the respondent had described the event honestly from his own perspective."

"Using data from a national random-digit-dial telephone survey conducted under the direction of the Harvard Injury Control Research Center, we investigated how and when guns are used in the home.
Major findings: Guns in the home are probably used more often to frighten intimates than to thwart crime; other weapons are far more commonly used against intruders than are guns."

Then from this study

http://www.pulpless.com/gunclock/kleck2.html

"There has probably been more outright dishonesty in addressing the issue of the frequency of defensive gun use than any other issue in the gun control debate."

So which side is correct and how much of self defence with a gun is a myth?

I was given various stories from the US press about DGUs, but what was in the back of my mind was: was it really a DGU, or was it someone in a panic brandishing or firing their gun in a way that does not amount to self-defence?

Here is a whole host of examples of how self-defence ended up in unnecessary deaths and injuries:

http://www.cato.org/raidmap/

They are reports of deaths of people and police during no-knock warrant entries to houses. The article lays the blame on the warrants, but it is really the use of guns that caused the deaths and injuries. The red, blue and black icons show cases where the 'self-defence' was not against a criminal attack but against the police.

In Scotland this incident was widely reported and caused revulsion against the attitude of Americans to DGUs. It is an example of supposed self-defence which seriously calls into question what passes for self-defence in the US.

http://www.independent.co.uk/news/uk/fa ... 99010.html


"Mr De Vries, 29, from Aberdeen, was shot at about 5am local time after he knocked on the back door of a house, (in Houston, Texas) apparently seeking a taxi for himself and a Scottish colleague, Sydney Graves, 42.

Relatives were angered by a newspaper photograph of the scene, which they said seemed to show the door was glass-panelled. This suggested that Mr De Vries would have been visible to the man who fired several shots through the door."

Surely there will be records kept on DGUs? The answer, to my mind surprisingly, is 'NO'.

Many articles about DGUs cite the following 1995 study by Gary Kleck and Marc Gertz, published by the Northwestern University School of Law.

http://www.guncite.com/gcdgklec.html

There is a lot of guesstimation going on, too much for my liking to get an accurate picture of what is happening. Is any enquiry made to see whether a DGU really is self-defence or not?

Surely such records would provide a far more accurate picture of DGUs, and from them you would know how often such use takes place and how often it really was defensive and prevented a crime. From that study, which was a telephone survey:

 "Are these estimates plausible? Could it really be true that Americans use guns for self-protection as often as 2.1 to 2.5 million times a year?"

It does seem a lot, but it does include just having a gun to scare away someone and not just shooting people.

"How could such a serious thing happen so often without becoming common knowledge? This phenomenon, regardless of how widespread it really is, is largely an invisible one as far as governmental statistics are concerned. Neither the defender/victim nor the criminal ordinarily has much incentive to report this sort of event to the police, and either or both often have strong reasons not to do so. Consequently many of these incidents never come to the attention of the police, while others may be reported but without victims mentioning their use of a gun. And even when a DGU is reported, it will not necessarily be recorded by the police, who ordinarily do not keep statistics on matters other than DGUs resulting in a death, since police record-keeping is largely confined to information helpful in apprehending perpetrators and making a legal case for convicting them. Because such statistics are not kept, we cannot even be certain that a large number of DGUs are not reported to the police.

The health system cannot shed much light on this phenomenon either, since very few of these incidents involve injuries.[61] In the rare case where someone is hurt, it is usually the criminal, who is unlikely to seek medical attention for any but the most life-threatening gunshot wounds, as this would ordinarily result in a police interrogation. Physicians in many states are required by law to report treatment of gunshot wounds to the police, making it necessary for medically treated criminals to explain to police how they received their wounds."

So we apparently get huge official under-reporting and no official records. We really have little idea as to what is going on; we do have to estimate.

The survey's sample size:

"A total of 222 sample cases of DGUs against humans were obtained. For nine of these, the R broke off discussion of the incident before any significant amount of detail could be obtained, other than that the use was against a human. This left 213 cases with fairly complete information. Although this dataset constitutes the most detailed body of information available on DGU, the sample size is nevertheless fairly modest. While estimates of DGU frequency are reliable because they are based on a very large sample of 4,977 cases, results pertaining to the details of DGU incidents are based on 213 or fewer sample cases, and readers should treat these results with appropriate caution. "

An example of the questions asked

"We asked Rs: "If you had not used a gun for protection in this incident, how likely do you think it is that you or someone else would have been killed? Would you say almost certainly not, probably not, might have, probably would have, or almost certainly would have been killed?" Panel K indicates that 15.7% of the Rs stated that they or someone else "almost certainly would have" been killed, with another 14.2% responding "probably would have" and 16.2% responding "might have."[96] Thus, nearly half claimed that they perceived some significant chance of someone being killed in the incident if they had not used a gun defensively."

I am sorry, but I am very sceptical of that survey, primarily because it is a survey of perception and not an examination of facts.

It also makes no effort to find negative results. Does anyone expect a respondent to say, 'my DGU was inappropriate, I got it wrong, I never should have used my gun'?

I have since found a study of American DGUs from Australia of all places, by Tim Lambert of the University of New South Wales.

http://www.cse.unsw.edu.au/~lambert/guns/lott/lott.pdf

It has some official data from Dade County in Florida on DGUs

"There is empirical data on how often permit holders use their weapons:
Dade county police kept records of all arrest and non-arrest incidents
involving permit holders in Dade county over a 5 year period [8]. Lott cites
this study to show that gun misuse by permit holders is extremely rare (page
11):
A statewide breakdown on the nature of those crimes is not available,
but Dade county records indicate that four crimes involving
a permitted handgun took place there between September 1987
and August 1992
and none of those cases resulted in injury.
Lott fails to note that the same study shows that defensive gun use by
permit holders is also extremely rare (page 692 of [8]):
The Dade police recorded the following incidents involving the
defensive use of licensed carry firearms: two robbery cases in
which the permit-holder produced a firearm and the robbers fled,
two cases involving permit-holders who unsuccessfully attempted
to stop and apprehend robbers (no one was hurt), one robbery
victim whose gun was taken away by the robber, a victim who
shot an attacking pit bull; two captures of burglars, three scaring
off of burglars, one thwarted rape, and a bail bondsman firing
two shots at a fleeing bond-jumper who was wanted for armed
robbery.
There were only 12 incidents where a criminal encountered an armed
permit holder.


Twelve in five years, some of them unsuccessful, and I would not say the bail bondsman was acting in self-defence at all. A bit further on:

"Kleck’s survey [22] indicates that 64% of defensive gun uses are reported to the police." So that means if all DGUs (using Kleck's own method of extrapolation) were reported there were 19 DGUs in Dade County over five years. But this is the Gary Kleck who is widly cited for his claim that guns were used for self defence about 2.5 million times a year or once every thirteen seconds.

http://actionamerica.org/guns/sdcnt-wdgt.shtml
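
The adjustment behind that 19 is simple enough; a quick check using Kleck's own 64% reporting rate:

```python
recorded = 12           # DGUs by permit holders in Dade County police records
reporting_rate = 0.64   # share of DGUs reported to police, per Kleck's survey

implied_total = recorded / reporting_rate
print(f"implied total DGUs over five years: {implied_total:.0f}")  # ~19
print(f"per year: {implied_total / 5:.1f}")                        # ~3.8
```

Call it roughly four a year, in a county of around two million people.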

Lambert concludes

"If you believe Kleck’s survey of defensive gun use [21] there were at least
100,000 DGUs (defensive gun uses) in Dade county over the five year period.
If you believe the NCVS [40] there were at least 2,500. Either way, the 12
by permit holders makes no significant difference to the total"


That is a massive variation, so much so that either the official records are way out or the academic study is, or maybe both.

But returning to a Harvard study by David Hemenway:

http://www.stat.duke.edu/~dalene/chance/chanceweb/103.myth0.pdf

If you submit examples of DGUs for legal examination, many are not self-defence at all.

A study of DGUs using newspaper articles over a sample period, Denton & Fabricius 2004, Arizona State University.

http://www.ncbi.nlm.nih.gov/pmc/article ... p00096.pdf

It finds significantly lower rates than the Kleck and Gertz telephone survey:

"Conclusions: These findings cast doubt on rates of DGUs reported in an influential study by Kleck and
Gertz, which predict that the police should have known about 98 DGU killings or woundings and 236
DGU firings at adversaries during the time the newspaper was surveyed. The findings reported here were
closer to predictions based on the National Crime Victimization Survey, which suggest that the police
should have known about eight DGU killings or woundings and 19 DGU firings at adversaries."

But there is surely as little chance of getting a true figure from newspaper reports as from a telephone survey. That is shown in various responses to the study:

http://injuryprevention.bmj.com/content ... prev_el_68

Not that that study is quoted anywhere else that I can find. The pro-gun side quote Kleck and Gertz almost exclusively; after all, it gives by far the highest rate of DGUs, suggesting guns are needed for self-defence on a massive scale.

There is one survey that found an even higher rate of DGUs: the NSPOF, conducted for the National Institute of Justice in 1997 by Cook and Ludwig.

http://www.tscm.com/165476.pdf

But I suspect it is not used by the pro-gun lobby, as it casts doubt on its own survey results, which came up with 23 million DGUs in 1994. There were 45 respondents who stated they had had a DGU, and one of them claimed 52 DGUs that year. Even applying Kleck and Gertz's means of identifying real DGUs drops the number to a still very high 4.5 million uses.

So the NSPOF's authors are concerned about false positives:

"NSPOF estimates also suggest that
130,000 criminals are wounded or
killed by civilian gun defenders. That
number also appears completely out of
line with other, more reliable statistics
on the number of gunshot cases (at hospitals)"

"Some troubling comparisons. If
the DGU numbers are in the right
ballpark, millions of attempted assaults,
thefts, and break-ins were
foiled by armed citizens during the 12-
month period. According to these results,
guns are used far more often to
defend against crime than to perpetrate
crime. (Firearms were used by
perpetrators in 1.07 million incidents
of violent crime in 1994, according to
NCVS data.)"

The survey sums up its doubts about its own results

"...... people who draw their guns to defend themselves against perceived threats are not necessarily innocent victims; they may have started fights themselves or they may simply be mistaken about whether the other persons really intended to harm them."

The academics involved in the study of guns and self-defence are a very argumentative bunch. Here is David Hemenway's critique of Jongyeon Tark & Gary Kleck's 2004 study "Resisting Crime: The Effects of Victim Action on the Outcomes of Crimes."

http://www.hsph.harvard.edu/faculty/Hem ... ments.html

Hemenway uses Tark and Kleck's own statistics to point out:

"First, respondents were injured in 26.4% of the incidents in which they used some form of resistance; when they did nothing, they were injured 18.5% of the time"

"They compare one form of resistance–calling the police or guard–with the other 15 forms (e.g. attacked with gun; threatened with gun, attacked with nongun weapon; struggled; chased offender; yelled; stalled; argued; ran away; screamed) in terms of the likelihood of receiving an injury AFTER taking this mode of resistance. In simple comparisons, nothing is better than calling the police–only 0.9% of the time did this lead to injury. Threatening with a gun is followed by an injury 2.5% of the time; yelling 2.7%; attacking without a weapon 3.8%; struggling 4.1%; and the highest, stalling 4.5%"

"In multivariate analysis, only "ran away, hid" is significantly better than calling the police or guard in terms of not receiving an injury"

So despite various claims in this thread about the ineffectiveness of calling the police, it is the second-best way of not being injured. The most effective way is my repeated suggestion in this thread of running away. Instead of guns, it would appear you should plan escape routes, have hidey-holes and, something I think is really cool, get a panic room.

There is further doubt cast on Gary Kleck's belief that DGUs are commonplace. Here are his statistics regarding defence during crimes:

http://www.hsph.harvard.edu/faculty/Hem ... ments.html

".... for sexual assaults, only 1 victim in 1,119 total incidents reported attacking or threatening with a gun (15 used a nongun weapon; 38 called the police or guard; 120 attacked without a weapon; 161 ran away; 219 yelled; 343 struggled). In robberies 1.2% of victims used a gun, whereas 3.8% called the police or guard, 12.7% ran away, and 20.5% struggled. In all confrontational crimes, 0.9% of victims reported using a gun, 1.7% used a nongun weapon, 7.2% called the police, 10.1% ran away, 13.8% struggled, and 29.3% did nothing."

So where are the millions of DGUs? Only a tiny percentage of crimes involve a DGU. Then we have to consider how appropriate the DGU was and whether it really was self-defence.

Indeed, looking closer at supposed DGUs finds instances of vigilantism and outright murder.

Here is vigilantism dressed as self-defence:

http://en.wikipedia.org/wiki/Joe_Horn_s ... ontroversy

"One vital piece of evidence were segments of Mr. Horn’s 9-1-1 calls which could have possibly incriminated Mr. Horn or shown his innocence. The most scrutinized segment is presented below:

Joe Horn: “I’ve got a shotgun; do you want me to stop them?”

The Pasadena emergency operator responded: “Nope. Don’t do that. Ain’t no property worth shooting somebody over, O.K.?”

Mr. Horn said: “But hurry up, man. Catch these guys will you? Cause, I ain’t going to let them go.”

Mr. Horn then said he would get his shotgun.

The operator said, “No, no.” But Mr. Horn said: “I can’t take a chance of getting killed over this, O.K.? I’m going to shoot.”

The operator told him not to go out with a gun because officers would be arriving.

“O.K.,” Mr. Horn said. “But I have a right to protect myself too, sir,” adding, “The laws have been changed in this country since September the first, and you know it.”

The operator said, “You’re going to get yourself shot.” But Mr. Horn replied, “You want to make a bet? I’m going to kill them.”

Moments later he said, “Well here it goes, buddy. You hear the shotgun clicking and I’m going.”

Then he said: “Move, you’re dead.”

There were two quick gunshots, then a third.

“I had no choice,” Mr. Horn said when he got back on the line with the dispatcher. “They came in the front yard with me, man.”

The 9-1-1 call ended about 80 seconds after the shots were fired, when officers arrived on the scene."

So they moved and now they are dead. If the state does not execute people for theft, why should some people think they can execute other people for theft? The shooter had a very clear choice: wait for the police to arrive. But his blood lust got the better of him. He was desperate to go and kill. It is not as if he resorted to shouting at them or even a warning shot. Move, you're dead, bang, end.

Here is another alleged DGU, showing what happens if you call at the wrong house during Halloween dressed as John Travolta.

http://en.wikipedia.org/wiki/Yoshihiro_Hattori

"District Attorney Doug Moreau concentrated on establishing that it had not been reasonable for Peairs, a 6-foot-2, well-armed man, to be so fearful of a polite, friendly, unarmed, 130-pound boy, who rang the doorbell, even if he walked toward him unexpectedly in the driveway, and that Peairs was not justified in using deadly force. Moreau stated, "It started with the ringing of the doorbell. No masks, no disguises. People ringing doorbells are not attempting to make unlawful entry. They didn't walk to the back yard, they didn't start peeking in the windows."

"You were safe and secure, weren't you?" Moreau asked Peairs during his appearance before the grand jury. "But you didn't call the police, did you?"
"No sir." Peairs said.
"Did you hear anyone trying to break in the front door?"
"No sir."
"Did you hear anyone trying to break in the carport door?"
"No sir."
"And you were standing right there at the door, weren't you - with a big gun?"
Peairs nodded.
"I know you're sorry you killed him. You are sorry, aren't you?"
"Yes sir."
"But you did kill him, didn't you?"
"Yes sir."

But he was acquitted, much to the horror of Japan, where, like in Britain, such actions are seen as a vigilante summary execution disguised as a right.

Further doubt about the number of DGUs comes from here:

http://propagandaprofessor.net/2012/02/18/estimating-defensive-gun-uses-reasonably/

"FBI statistics show that for the five-year period ending in 2010, there was an average of 213 justifiable homicides per year by firearm. (A justifiable homicide is not necessarily a defensive use, but the vast majority of them fit the description.) The Kleck-Gertz figures indicate that the defenders wound or kill their assailants only 8.3 percent of the time, but this is surely far too low – especially given that many alleged defensive gun uses involve nothing more than mentioning the existence of a gun! And the figures don’t specify how many are fatal. But if indeed there are 2.5 million DGUs per year, then the fatal shootings would account for only .0085 percent!  So let’s just skip Kleck altogether and stick with real numbers."

Then there is the case of Marissa Alexander, who was jailed for not killing her assailant. Instead she fired a warning shot into the ceiling of her house. That worked, as her assailant ran off, alive. But by not killing him she is now in prison for assault with a deadly weapon.

Marissa-Alexander-shoot-to-kill-or-you-must-not-be-scared-enough/

How do you square Kleck and Gertz with Marissa Alexander? Either there should be millions of dead killed in self-defence, or millions in prison for assault with a deadly weapon because they chose not to kill.


CONCLUSION

I suspect the truth is somewhere in the middle, but no one will ever get a really accurate figure. The issue is further clouded by what is supposed to pass as a DGU, which, if you leave it to a survey respondent, is very inaccurate. On closer study of claimed DGUs, most are not self-defence at all.

It is not really known how many DGUs there are in the US. Pro-gun lobbyists use the highest figures they can find, figures so high that, when examined in more detail, they are found to contain a lot of guessing and cases where the use of the gun was not self-defence at all.


SOURCES

Tim Lambert Report 2004

Denton & Fabricius Report

Cook & Ludwig Report

Harvard School of Public Health - Study of DGUs

David Hemenway Harvard School of Public Health

David Hemenway Report

Hemenway on Kleck & Tark

The Lott Report

Defensive Use of Firearms 1987-90

Violence Policy Center

Southwick Report

Kleck and Gertz Report

Kleck - Guns and self defence

Kleck on the Branas Report

Gun Owners of America Fact Sheet

Guncite.com general link



Friday, 30 December 2011

A study of audiophile blind comparison and ABX testing

So, we love to have a good discussion/argument/rant on all the audio forums I have seen about the many claims audiophiles make that others dismiss as myths. The arguments go round in circles: I hear a difference - but there cannot be a difference, it is all in your mind - have you tried different cables? - I don't need to, it is all in your mind - etc, etc; we all know how it goes.

Then there are the debates that involve blind testing and ABX. They get so hot under the collar and circular that many forums have banned them outright.

Occasionally there are attempts to run proper tests. WHF's own Big Question is an example. Three What Hifi forum members are invited to their listening rooms and blind tested on everything from cables to bit rates. From the issues I have read, the suggestion is that the differences are not myths but real: different bit rates have been correctly identified, and different cables have produced different sounds in the same hifi kit. But these are blind listening reviews, which are different from ABX tests where people are asked to correctly identify products.

Here is a list of blind listening and ABX tests that I have found on the internet. What I have done is summarise their conclusions.

It is important to note the difference between blind and ABX testing as they produce different results.

Blind tests mean the listener does not know what they are listening to and is asked to describe any differences they can hear; this is the type of blind testing commonly used in audio. That kind of test often results in low-priced hifi 'surprisingly' doing as well as high-priced, as factors such as image and product reputation are hidden from the listener. Some blind testing also involves a competition between products where, say, two amps are pitched against each other and the winner progresses to the next round. As you can see, I have been broad in the definition of blind testing.

ABX testing is more of a test. You listen to product A and product B and are then played X, which is either A or B, and you have to say which it is (see the sketch below). There can be more than A and B, as some tests involve multiple cables. Any differences then have to be clearly audible, which for the likes of cables has not been the case yet. I have also been broad in the definition of ABX testing.
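
As a concrete illustration of the protocol, here is a minimal sketch of an ABX trial loop in Python; `play` is a hypothetical stand-in for actually switching the kit, and the 16-trial count is just an example.

```python
import random

def run_abx(play, trials=16):
    """Run `trials` ABX rounds; `play(label)` presents source 'A',
    'B' or the unknown 'X' to the listener."""
    correct = 0
    for _ in range(trials):
        x_is = random.choice("AB")     # hidden identity of X this round
        play("A"); play("B"); play(x_is)
        guess = input("Was X 'A' or 'B'? ").strip().upper()
        correct += (guess == x_is)
    print(f"{correct}/{trials} correct")
    return correct
```

With 16 trials, 12 or more correct happens by guessing only about 4% of the time, which is the sort of threshold these tests treat as a pass.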

The aim is to see what the overall result of these tests gives us and whether they provide evidence to back up or deny the reality of alleged audiophile myths. Before you read on, here is a test you can try out yourself...

http://www.audiocheck.net/blindtests_index.php

...and here is a very interesting article on a debate between audio sceptic Arny Krueger and Stereophile editor John Atkinson on ABX testing

Stereophile The Great Debate

Finally, for those who say blind testing is designed to produce fails and discredit audiophiles, here are some positive ones where differences have been identified

http://www.head-fi.org/forum/thread/513481/are-blind-tests-bogus-examples-of-blind-tests-with-positive-results


1 - ABX Double Blind Comparator.

This is a web site dedicated to such testing. Back in May of 1977 there was a comparison of amplifiers which found that, over three tests of two amps each, listeners could tell a difference in two but not the third, which was an even split. It is important to note that not all of the ABX tests here are negative. Some do find differences can be identified. That shows that with some parts of the hifi chain there are real differences, but with others there are not.

ABX Double Blind Comparator Data

A test of interconnects and speaker cables found that no one could pick out the differences between a series of wires, from 'blister pack' $2.50 cable to $990 speaker cable. All the results were even, with approximately 50% going for the cheap and expensive options.

There is an interesting comparison of 'video cables' which found that once runs went over 50 feet it was easy to tell the 6 foot cable from the much longer one.

DACs don't fare well: an original CDP was distinguishable from a more modern one, but an expensive stand-alone DAC sounded the same as a CDP.

None of the tests involve a large number of people, and some are of just one person.

2 - Effects of Cable, Loudspeaker and amplifier interactions, an engineering paper from 1991.

http://www.apiguide.net/04actu/04mus...teractions.pdf

Twelve cables are tested, from Levinson to Kimber, including car jump leads and lamp cable, from $2 to $419 per metre. The results are based on the theory that loudspeaker cable should transmit all frequencies unscathed to any speaker from any amplifier, and that loss is due to resistance. There is an assumption that letting through more frequencies with less distortion will sound better, but that seems reasonable to me.


The best performance was with multi-core cables. The car jump leads did not do well and cable intended for digital transmission did! The most expensive cable does not get a mention in the conclusions, but the cheapest is praised for its performance and Kimber does well. Sadly there is not a definitive list of the cost of the cables against their performance, so it is not clear whether cost equals performance, but the suggestion is that construction equals performance.


3 - Do all amplifiers sound the same? Original Stereo Review blind test.

http://www.bruce.coppola.name/audio/Amp_Sound.pdf

(If that link does not work, this one is a description of what happened)

http://www.hometheaterfocus.com/receivers/amplifier-sound-quality.aspx

A number of amplifiers across various price points and types are tested. The listeners are self declared believers and sceptics as to whether audiophile claims are true or not.

There were 13 sessions with different numbers of listeners each time. The difference between sceptic and believer performance was small, with 2 sceptics getting the highest correct score and 1 believer getting the lowest. The overall average was 50.5% correct, the same as you would expect from random guessing. The cheapest Pioneer amp was perfectly capable of outperforming the more expensive amps and was 'strikingly similar to the Levinson'.

As an extra to this and for an explanation of how amps can all sound the same, here is a Wikipedia entry on Bob Carver and his blind test amp challenges

http://en.wikipedia.org/wiki/Bob_Carver#Amplifier_modeling


4 - Cable directionality.

Not the best link, as it only refers to a test without giving many specifics. The cable maker Belden conducted a test with an unnamed magazine which found the result was perfectly random.

I liked the next sentence which was “Belden is still happy to manufacture and sell directional cables to enthusiasts”

The Truth About Audio and Other Cables - AES PNW Section Meeting Report -


5 - Head-Fi ABX Cable Taste Test, Aug 2006.

Three cables, from Canare, Radio Shack and a silver one, were put into the same sleeving to disguise them, a mark was put on each so only the originator knew which was which, and they were then sent around to various forum members. The result was that only one forum member got all three correct. The Radio Shack cheap cable and the silver one were the most mixed up.

Unfortunately I cannot see from the thread, which is huge, how many members took part and what the exact results were.


6 - HiFi Wigwam, The Great Cable debate. Power cable ABX test Oct 2005.

This is a very well done large-scale ABX test. A similar set-up to Head-Fi, where four mains cables, including 2 kettle leads (stock power cords that had come with hifi products), an audiophile one and a DIY one, plus a tester CD, were sent out to forum members. The results were inconclusive to say the least; for example:

The kettle lead was C. There were 23 answers:
4 said that the kettle lead was A
6 said that it was B
8 said that it was C
5 said that they didn't know.

http://www.hifiwigwam.com/showthread.php?654-The-Great-Cable-Debate&highlight=blind+test

The overall conclusion was that the kettle lead could not be reliably identified, nor could any cable be shown to be better than another (see the chance check below).

EDIT - one of the participants in this test has pointed out that the two kettle leads, described in the test as exactly the same, were in fact not identical; they were just basic leads which had come with hifi products.
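
Those numbers sit almost exactly where guessing would put them. A quick check, setting aside the five 'don't know' answers and treating it as an 18-person, three-way choice:

```python
from math import comb

answered, correct = 18, 8   # 23 answers minus the 5 "don't know"
p = 1 / 3                   # three cables to pick from

# One-sided exact binomial tail: chance of 8+ correct by guessing
tail = sum(comb(answered, k) * p**k * (1 - p)**(answered - k)
           for k in range(correct, answered + 1))
print(f"expected correct by chance: {answered * p:.0f}")   # 6
print(f"P(8 or more correct by guessing) = {tail:.2f}")    # ~0.22
```

So 8 of 23 landing on the right lead is entirely consistent with chance, which matches the thread's own conclusion.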


7 - What Hifi The Big Question on cables. Sept 2009

From the Sept 2009 issue. Three forum members were invited to WHF and blind tested, where they thought the kit (Roksan, Cyrus, Spendor) was being changed, but instead the cables were. The same three tracks were used throughout.

The kit started out with the cheapest cables WHF could find, and no one liked it, saying it sounded flat and dull. Then a Lindy mains conditioner and Copperline Alpha power cords were introduced and the sound improved.

The IC was changed to some Atlas Equators and two out of the three tracks were said to have improved, with better bass and detail.

Lastly, the 60p-per-metre speaker cable was changed for £6-per-metre Chord Carnival Silverscreen. Again, changes were noticed, but they were not big.

Various swaps took place after that which confirmed the above, that the power cords made the biggest difference. When the test was revealed the participants were surprised to say the least!

But, this is not an ABX test, it is a blind listening review and as you read on you find the two produce different results.


8 - Secrets of Home Theatre and High Fidelity. Can We Hear Differences Between A/C Power Cords? An ABX Blind Test. December, 2004

A comprehensive article with pictures. The overall result was 73 correct out of 149 tests, so 49% accuracy, the same as chance.

http://www.hometheaterhifi.com/volum...s-12-2004.html





9 - Boston Audio Society, an ABX test of Ivor Tiefenbrun, the founder of Linn. August 1984


A rather complex testing of Ivor Tiefenbrun himself, who at that time was very pro-vinyl and anti-digital (almost the opposite of how Linn operate now!). There were various different tests and the overall conclusion was:

"In summary, then, no evidence was provided by Tiefenbrun during this series of tests that indicates ability to identify reliably:
(a) the presence of an undriven transducer in the room,
(b) the presence of the Sony PCM-F1 digital processor in the audio chain, or
(c) the presence of the relay contacts of the A/B/X switchbox in the circuit."

http://www.bostonaudiosociety.org/bas_speaker/abx_testing2.htm

Even the founder of Linn could not back up claims he had been making when subjected to an ABX test of those claims.

10 - The (in)famous Audioholics forum post, cables vs coat hanger! June 2004

http://forums.audioholics.com/forums/showpost.php?s=97d4a3c39d247bf955a57b3953326a34&p=15412&postcount=28

11 - Matrixhifi.com from Spain. ABX test of two systems. June 2006.

Two systems, one cheap (A) with a Sony DVD player and a Behringer amp (supported on a folding chair) with cheapo cables, and the other more expensive (B) with Classe, YBA and Wadia kit, expensive cables and proper stands, were hidden behind a sheet and wired to the same speakers.



The results were:
38 people participated in this test;
14 chose the "A" system as the best sounding one;
10 chose the "B" system as the best sounding one;
14 were not able to hear differences or didn't choose either as the best.

http://www.matrixhifi.com/ENG_contenedor_ppec.htm


12 - AVReview. Blind cable test. April 2008

Some of AVR's forum members attended a Sevenoaks hifi shop and listened to the same kit with two cheap Maplins cables, at £2 and £8, and a Chord Signature at £500. They found the cheaper Maplins cable easy to differentiate and the more expensive one harder to differentiate from the Chord. Their resident sceptic agreed he could hear differences. The final conclusion was:

....from our sample of 20 near-individual tests, we got 14 correct answers. That works out at 70 per cent correct....

So that is a second test, alongside What Hifi's, which suggests there is indeed a difference (see the significance check below). But like What Hifi it shows the difference in results between blind and ABX testing and how easy it is to confuse the two types of test.

http://www.avreview.co.uk/news/article/mps/uan/1863#ixzz0nGpGRfCB
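
Before reading too much into that 70 per cent, it is worth asking how unusual 14 correct out of 20 is for pure guessers:

```python
from math import comb

trials, correct = 20, 14
# One-sided exact binomial tail, 50/50 guess per trial
tail = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials
print(f"P({correct}+ of {trials} by guessing) = {tail:.3f}")  # ~0.058
```

So the result leans towards a real audible difference but falls just short of the usual 5% threshold; a longer run of trials would be needed to call it conclusive.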

13 - Journal of the Audio Engineering Society, ABX test of CD/SACD/DVD-A. Sept 2007

You need to be a member of the AES to access the article here; (EDIT, the link has changed and I cannot find the actual test referred to)

http://www.aes.org/journal/online/JAES_V55/9/

a summary of which states "A carefully controlled double-blind test with many experienced listeners showed no ability to hear any differences between formats".  The results were that 60 listeners over 554 trials couldn’t hear any differences between CD, SACD, and 96/24.

EDIT - this test is apparently flawed, in that the hi-res example used was sourced from an original CD. Whether that is a flaw or not is open to further discussion.


14 - What Hifi, Blind Test of HDMI cables, July 2010

Another What Hifi test of three forum members who are unaware that the change being made is to three HDMI cables; as far as they know, equipment could be being changed. The cables are a freebie, a Chord costing £75 and a QED costing £150. Throughout the test all three struggle to find any difference, but are more confident that there is a difference in the sound rather than the picture. They preferred the freebie cable over the Chord one and found it to be as good as the most expensive QED. That result is common in blind testing and really differentiates it from ABX tests.



15 - Floyd Toole of Harman International (AKG, Infinity, JBL), Audio - Science in the Service of Art, 1998

A paper written by Floyd Toole which covers a number of topics about scientific measurements and audio. Go to pages 6 and 7 and there is a paragraph on blind testing. It shows how the 'differences' between speakers were greater in sighted tests than in blind tests. The obvious conclusion is that in sighted tests, factors other than sound come into play when deciding what sounds better.

http://www.harmanaudio.com/pv_obj_cache/pv_obj_id_7A7EB027F9D3A7B68272375CB10EFDC694000200/filename/audio_art_science.pdf


16 - Sean Olive, Director of Acoustic Research Harman Int, blog on The Dishonesty of Sighted Listening Tests 2009

http://seanolive.blogspot.com/2009/04/dishonesty-of-sighted-audio-product.html

Research using 40 Harman employees, comparing the results of blind vs sighted tests of four loudspeakers. As with the paper above by his fellow Harman director, sighted tests show bias that blind ones do not.

Below the article are various responses to the blog, including a very interesting exchange between Alan Sircom, editor of Hifi Plus magazine and Sean Olive. Alan Sircom makes the very interesting point that volume has a role to play with blind tests

"Here's an interesting test to explain what I mean: run a blind test a group of products under level-matched conditions. Then run the same test (still blind), allowing the users to set the volume to their own personal taste for each loudspeaker under test. From my (admittedly dated and anecdotal) testing on this, the level-matched group will go for the one with the flattest frequency response, as will those who turn the volume 'down', but those who turn the dial the other way often choose the loudspeaker with the biggest peak at around 1kHz, saying how 'dynamic' it sounds."

I had not thought of that before. You will end up with different conclusions between a blind test where the volume is set and where the volume can be adjusted. Adjustment allows preferences for different sounds to be expressed, without other influences being present that clearly have nothing to do with sound.

17. Russ Andrews re-cable David Gilmour's recording studio (not a blind test) 2000-2001

This is not a blind test, but I think it is worth including here. The studio used (and I think owned) by David Gilmour was re-cabled using Kimber cables by Russ Andrews. This was apparently after extensive AB testing. I would have loved that to be after extensive ABX testing!

http://www.russandrews.com/viewindex.asp?article_id=astoria&src=blog

(Thanks to Pio2001 for finding the below tests and links)

18. DIY Audio forum, confessions of a poster. 2003

A forum member joined and confessed that "Then I started to hear about some convincing blind tests and finally conducted my own. I was stunned at the results. I couldn't tell a $300 amp from a $3000 in the store I was working at. Neither could anyone else who worked there." Then he ran his own blind test on a mate, between an Onkyo SR500 Dolby Digital receiver and the mate's own Bryston 4B 300 wpc power amp and Bryston 2-channel pre-amp. The 'red-faced' mate could not tell the difference.

http://www.diyaudio.com/forums/solid-state/12752-blind-listening-tests-amplifiers.html

19. The Boston Audio Society, discussion of two blind tests and their analysis 1990

The BAS in an article discussing a CD tweak blind test by Stereophile: "In the CD-tweak test Atkinson and Hammond conducted a 3222-trial single-blind listening experiment to determine whether CD tweaks (green ink, Armor-All, expensive transports) altered the sound of compact-disc playback. Subjects overall were able to identify tweaked vs untweaked CDs only 48.3% of the time, and the proportion that scored highly (five, six, or seven out of seven trials--Stereophile's definition of a keen-eared listener) was well within the range to be expected if subjects had been merely guessing."

Then the BAS are very critical of a Hifi News analysis of a blind test of amps: "Listeners scored 63.3% correct during those trials where the amplifiers were different (95 of the 150 A-BB-A trials). However, subjects scored correctly only 65% of the time when the amplifiers were the same (26 of 40 A-A/B-B trials.) Another way of saying this is that subjects reported a difference 35% of the time (14/40 trials) when there could have been no difference."

http://www.bostonaudiosociety.org/bas_speaker/wishful_thinking.htm
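
One way to see why the BAS were unimpressed is to put the two quoted rates side by side in simple signal-detection terms: how often a difference was reported when there was one versus when there was not. A rough sketch (the d-prime step assumes the textbook equal-variance model, and treats 'correct when different' as 'reported a difference'):

```python
from statistics import NormalDist

hit_rate = 95 / 150     # reported a difference when the amps differed (63.3%)
false_alarm = 14 / 40   # reported a difference when they were identical (35%)

z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(false_alarm)
print(f"hit rate    : {hit_rate:.1%}")
print(f"false alarms: {false_alarm:.1%}")
print(f"d' = {d_prime:.2f}")   # ~0.73: weak sensitivity plus a bias to hear differences
```

A d-prime around 0.7 is weak discrimination, and the 35% false-alarm rate is the real tell: a third of the 'differences' heard were between identical amplifiers.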

20. Cowan Audio, an Australian audiophile and a blind test between CD players 1997

An $1800 unnamed player (they were reluctant to name it) versus a $300 Sony, which resulted in both listeners only guessing and getting about 50%. William Cowan stated that a sighted test beforehand made them say "This will be easy, let's get on with the blind test". Oops!

http://www.cowanaudio.com/

21. Pio2001's own ABX test between CD and vinyl in Hydrogenaudio 2003

The results were 3/7 and 5/8 correct.

http://www.hydrogenaudio.org/forums/index.php?act=ST&f=21&t=7953

22. Tom Nousaine, article To Tweak or Not to Tweak? 1988.

A test with identical CDP and speakers but different amps and cables, one set costing $300 and the other $5000. The result, with 7 listeners of varying interest in hifi and 10 trials, was a fail.

http://www.nousaine.com/pdfs/To%20Tweak%20or%20Not.pdf

23. AV Science Forum, Monster vs Opus cables. 2002

Not particularly rigorous, in that there were not enough trials, but as the poster states: "And to cut to the chase, Mike could not identify the Monster from the Opus MM with any accuracy (nor the reverse, which also would have been a positive result if he had been consistently wrong) using our testing methodology. We stopped the test a little less than halfway through, I think we got through 8 A/Bs before we gave up."

http://www.avsforum.com/avs-vb/showthread.php?t=941184

24. Stereo.net, blind testing of two pre-amps April 2008

It's an Australian forum, so the conclusion is typically forthright: "CONCLUSION: There is bugger all between the 2 preamps, they were so close that any difference could not be reliably picked." The test was run well despite whatever doubts the tester had.

http://www.stereo.net.au/forums/showthread.php/26875-Blind-Testing-Report-Lightspeed-vs-ME24-preamps

25. Stereomojo Digital amp shootout 2007

Various amps were tested blind in pairs, where the preferred amp went through to the next round. The winner was one of the cheaper amps, the Trends Audio TA-10 at $130 (the tiny one on the top right of the pile in the article's photo).


http://www.stereomojo.com/SHOOTOUT2007INTEGRATEDS.htm

26. Head-Fi ABX Cable Test by member Edwood Aug 2006

Three ICs made with Canare, solid silver and Rat Shack cables, but dressed to look the same. Only one person could tell the difference, which is what you would expect when there is no audible difference and people are most likely guessing.

http://www.head-fi.org/forum/thread/190566/blind-cable-taste-test-results

27. Les Numeriques. A blind test of HDMI cables by a French site (Google Translator used)

Nine participants using no-name, Belkin and Monster HDMI cables. Only one claimed to have a preference, but his feedback was inconsistent.

http://translate.google.com/translate?js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&sl=fr&tl=en&u=http%3A%2F%2Fwww.lesnumeriques.com%2Fblind-tests-avec-deux-jurys-experts-et-lecteurs-p770_6175_93.html


28. HomeCinema-FR.com, a French test of interconnects (Google Translate used) May 2005

The cables included ones from Taralabs, VDH, Audioquest and DIY ones. The result was that no one could reliably tell a difference.


http://translate.google.com/translate?js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&sl=fr&tl=en&u=http%3A%2F%2Fwww.homecinema-fr.com%2Fforum%2Fviewtopic.php%3Ft%3D29781210

29. Sound & Vision. Article by Tom Nousaine with 3 Blind Tests of speaker cables. c1995

http://www.nousaine.com/pdfs/Wired%20Wisdom.pdf

All three are fails by the listeners using their own hifi systems and with their choice of track, volume and time.

30. Insane About Sound, Blind Tests of CD vs Audio Files and expensive vs cheap speaker cable. Wall Street Journal Jan 2008

http://online.wsj.com/article/SB120044692027492991.html.html?mod=technology_main_promo_left

Tests set up at an audio show in Las Vegas found WAV files (52%) doing better than MP3 (33%) when compared with CD, and in a test of $2000 Sigma speaker cables vs hardware store cable, 61% of the 39 who took the test preferred the more expensive cable. So nothing conclusive for any of the tests, but interestingly John Atkinson and Michael Fremer from Stereophile magazine were described as easily picking out the more expensive cable.

31. AV Science forum, Observations of a controlled cable test Nov 2007

A blind test between Monster cables and Opus MM, which as far as I can find is $33,000 worth of cable

http://www.avsforum.com/avs-vb/showthread.php?t=941184

but the owner of the very high end kit and cables was unable to tell the difference.

32. The Audio Critic, ABX test of amps Spring 1997

A letter by Tom Nousaine to The Audio Critic in which he describes an ABX test with the owner of a very high-end system, where a Pass Labs Aleph 1.2 200W monoblock amp was randomly swapped with a Yamaha AX-700 100W integrated amp. In the first test the owner correctly identified 3 out of 10, then 5 out of 10. His wife then got 9 out of 16 and a friend 4 out of 10.

The letter is split between pages 6 and 7 of the link.

http://www.theaudiocritic.com/back_issues/The_Audio_Critic_24_r.pdf


33. Expert Reviews. Blind test of HDMI cables. 8 Feb 2011

Two TVs, two Sony PS3s and a James Bond film played side by side, with the only variable being the HDMI cables. What is interesting is that there was little perceived difference in the picture, but much more in the sound. But many preferred the sound of the cheap cables to the expensive ones.

http://www.expertreviews.co.uk/home-entertainment/1282699/hdmi-investigated-are-expensive-cables-a-scam/4

Note - not an ABX test and the reviewer acknowledges there could also be slight differences in the TVs and PS3s to contend with.

34. Blind test of six DACs, Stereomojo

Like the other blind (as opposed to ABX) tests, this one found the cheapest and the most expensive DAC in the final, with only a hair's breadth between the two in terms of sound.

http://www.stereomojo.com/Stereomojo%20Six%20DAC%20Shootout.htm/StereomojoSixDACShootout.htm



35. The Wilson iPod experiment, CES 2004. Stereophile Jan 2004

Tenth paragraph down. A 'trick' blind test where a group at a consumer technology tradeshow thought they were listening to a $20,000 CDP, but were actually, and happily, listening to an iPod playing uncompressed WAV files.

http://www.stereophile.com/news/011004ces/

Sight really does have a major role to play in sound!

36. An evening spent comparing Nordost ICs and speaker cables. AVForums June 2006

Further to the above iPod experiment, a report from a member of AVForums on his experience of sighted and blind listening tests at a dealer's.

http://www.avforums.com/forums/interconnects-speaker-cables-switches/351773-evening-comparing-nordost-interconnects-speaker-cable.html

The conclusion comparing the tests:

"And here's what I heard.

1. All the cables sounded subtly different with one exception.
2. Differences were less apparent with some music than others
3. My assessment and experiences "blind" were different to my experiences "sighted""


37. A blind test of old and new violins. Westerlunds Violinverkstand AB March 2006

This is really a bit of fun, but it again shows how we hear differently sighted versus blind. In this test six violins, three from around 1700 (including a Stradivari) and three modern, were played to a group of string teachers, who cast votes, 1 to 3, for their preferred violins. The stage was kept dark so they could not see which was which. The Stradivari came last; a modern brand won.

http://www.westerlunds.se/blindtesteng.htm


38. The Edge of audibility, blind test of recordings made with and without a mains filter. Pink Fish Media forum June 2011

You can download and try the recordings yourself. Of those who have already done so, 2 preferred one, 6 the other and 10 had no preference.

http://www.pinkfishmedia.net/forum/showthread.php?t=101683

39. Try a blind test of bit rates. mp3ornot.com

A really well set out and easy to use blind test of different bit rates.

http://mp3ornot.com/


40. Blind test of CD transports Stereo.net.au Oct 2008

Well set up and described, but to reinforce the Australian stereotype, after one set of failed tests they admitted no one could hear a difference, gave up and drank some beers instead!

http://www.stereo.net.au/forums/showthread.php/10141-Blind-Test-GTG-1-CD-Digital-Transports

EDIT - link presently broken

41. ABX test of tracks with various levels of jitter added. HDD Audio forum March/April 2009

http://hddaudio.net/viewtopic.php?id=15

http://hddaudio.net/viewtopic.php?id=63

One member, MM, has recorded his scores, and they are no better than random.


42. Stereophile ABX test of power amps July 1997

http://www.stereophile.com/features/113/index.html

There were 505 listeners, producing the following nicely made graph of results:

[blindfig2.jpg - graph of the distribution of the 505 listeners' scores]

which is a bell curve centred on random, just as you would get from guessing. Yet Stereophile claimed the test was a success because some people did better than average. There could be some truth in that, as there have been blind test passes for amps. Even so, it is a very small proportion of those tested, and they would really need to be tested again to confirm whether or not they were just lucky. The results are not statistically significant enough to say there is an audible difference.
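To see why a bell curve around random is exactly what guessing predicts, here is a minimal sketch. The number of trials per listener is my assumption for illustration (the article does not make it obvious); with 505 listeners each making, say, 7 fifty-fifty choices, the binomial distribution says:

from math import comb

LISTENERS, TRIALS = 505, 7  # 505 listeners; 7 trials each is an assumed figure for illustration

# Expected number of listeners at each score if everyone guesses (p = 0.5 per trial)
for k in range(TRIALS + 1):
    expected = LISTENERS * comb(TRIALS, k) / 2**TRIALS
    print(f"{k}/{TRIALS} correct: about {expected:.0f} listeners by chance alone")

Even with nobody hearing anything, a handful of listeners are expected to get every trial right, so a few high scorers prove nothing until they can repeat the feat.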

43. Head-fi. A forum member testing cables sighted and blind Nov 2011

http://www.head-fi.org/t/578621/cables-tested-with-results

This provides yet more evidence that sighted and blind testing produce consistently different results: people hear a difference when sighted and cannot when listening blind.


44. Audio Society of Minnesota. Speaker cable listening test. April 2012

https://sites.google.com/site/audiosocietyofminnesota/Home/april-2012-speaker-cable-listening-test 

The results are very mixed, with no cable making any clear difference. They accept there is no objective difference, but since a difference was found which can easily be explained by random selection, they conclude a subjective difference is there, and so allegedly "cables do make a difference".


45. The Richard Clark Amplifier Challenge - Reported by Tom Morrow June 2006


"The Richard Clark Amp Challenge is a listening test intended to show that as long as a modern audio amplifier is operated within its linear range (below clipping), the differences between amps are inaudible to the human ear."

It is an ABX test which, to pass, requires two sets of 12 correct identifications. Reputedly over a thousand people have taken the test and none have passed.
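The pass criterion makes luck essentially impossible, which is the point of the design. A minimal sketch, assuming each identification is an independent 50/50 guess for someone who genuinely hears no difference:

# Two sets of 12 correct identifications = 24 consecutive correct 50/50 calls
p_pass = 0.5 ** 24
takers = 1000  # "over a thousand" per the report

print(f"P(passing by luck) = {p_pass:.1e}")                     # about 6e-08, or 1 in ~17 million
print(f"Expected lucky passes from {takers} takers: {takers * p_pass:.5f}")

So zero passes from a thousand-plus attempts is exactly what guessing predicts; only a genuinely audible difference could realistically beat those odds.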

"Do the results indicate I should buy the cheapest amp?


No. You should buy the best amplifier for your purpose. Some of the factors to consider are: reliability, build quality, cooling performance, flexibility, quality of mechanical connections, reputation of manufacturer, special features, size, weight, aesthetics, and cost. Buying the cheapest amplifier will likely get you an unreliable amplifier that is difficult to use and might not have the needed features. The only factor that this test indicates you can ignore is sound quality below clipping."

Which is a relief for those who have shelled out a lot on a nice amp.


46. Audio Video Revolution Forum, thread on blind speaker tests, Nov 2007.

Positive results which strongly suggest speakers are clearly different even under blind testing conditions, both objectively and subjectively.


47. PSB Speakers, blind comparison test of four speakers, Nov 2005.

http://www.psbspeakers.com/articles/Birthplace-of-Good-Sound

The writer is happy he did not pick out the cheapo speaker, but he makes no mention of whether or not the speakers were easily identifiable as different.


48. Audio Society of Minnesota, speaker cable test, April 2012

Random results from blind comparison testing of four cables, cable B being the expensive one at $8,000. Each cable was pitted against another and preferences, or the lack of them, noted. The results were that cable B won one and lost two of the tests. The cheapest cable lost all three tests, so the author makes a spurious claim that there are subjective preferences, which is OK, but would you spend $8,000 on a cable based on such a result?


49. Which? Consumer magazine, 2016

The UK consumer magazine, which values its independence and rigorous testing procedures as the basis for fair and impartial advice.
"Which? testing has shown that cheap HDMI leads - even value ones costing just a few pounds - can perform just as well as more expensive ones. When we last ran HDMI tests, we found that a £10 HDMI lead from a supermarket gave no discernible difference in picture quality to one costing almost £100."

http://www.which.co.uk/reviews/televisions/article/buyers-guide-to-hdmi-cables

50. Trust Me I'm a Scientist - Audio Poll: Neil Young and High-Definition Sound, May 2012

A blind test of a high-definition WAV file version of Neil Young's self-titled debut album against some standard AAC files.

"The majority of you are audio engineers, professional musicians, and ambitious hobbyists, and I figured that if anyone would be able to tell these file types apart, it would be you guys.
So, how did you do?
Well… please accept my warm congratulations to the 49% of you who guessed right.
That’s right: even among our readers, the results came out no better than a coin flip. And we didn’t even need a huge sample size to get a result that’s consistent with the tremendous mountains of research already done in this field."


Conclusion


The clear conclusion is that ABX testing does not back up many audiophile claims, which makes them audiophile myths: the tests show cables do not inherently change sound. Any change in sound quality comes from the listener's mind and the interaction between their senses. What is claimed to be audible is not reliably so. Blind testing is also sometimes passed off as ABX, but blind testing is not really testing; it is a review of a product without seeing it, and that allows claims to be made about sound which have not been verified.

If hifi is all about sound, and more specifically sound quality, then once the other senses have been removed we should be able to hear differences which can be verified by identifying one product from another through listening alone. But time and again we cannot.

So you can either buy good but inexpensive hifi products such as cables, amps and CDPs and be satisfied that the sound they produce is superb (you do need to spend time choosing speakers, as they really do sound identifiably different). Or you can buy expensive hifi products such as exotic cable tech and luxuriate in the build and the image, and identify one hifi from another by looks and sound. But you cannot buy expensive and pick it out from cheap by sound alone. However, and this is important,

http://www.audiostream.com/content/blind-testing-golden-ears-and-envy-oh-my#wsTQZ0dOJYDcJv5K.97

After failing a blind test, one hifi buff, no less than the editor of Stereophile, said:

"Over 10 years ago, for example, I failed to distinguish a Quad 405 from a Naim NAP250 or a TVA tube amplifier in such a blind test organized by Martin Colloms. Convinced by these results of the validity in the Consumer Reports philosophy, I consequently sold my exotic and expensive Lecson power amplifier with which I had been very happy and bought a much cheaper Quad 405—the biggest mistake of my audiophile career!"
The author of the article goes on to say:
"My point being, taking part in any kind of blind listening test necessarily creates an unnatural condition, one that we never encounter when listening to music for pleasure."
I agree 100%. I did the same with my setup: while reading and researching for this thread I would switch from using my Firestone Fubar DAC and power supply to the DAC that comes in my MF X-CAN V8P headphone amp. All the testing says they sound the same. But they don't!!!!! That is because when I listen for pleasure I can see my setup, its red, green and blue lights telling me it is working. That gives me pleasure, and pleasure makes for better SQ.


(Originally posted on the Head-fi forum)

Monday, 26 December 2011

Speed kills

A favourite amongst safety campaigners, but often ridiculed by motorbike riders, particularly sport bike riders, is the simple phrase 'speed kills'.

Here are a few quotes from The Biker Forum

"What a load of crap, I can't believe any biker would come out with this sort of drivel - lack of control kills, not speed. Are any of the relevant authorities concentrating on that? Are they f*ck :rolleyes:"

"Whoever dreamed up the statement "speed kills" wants hanging. Its the sudden stop that kills"

"Speed does not kill and we only have speed cameras because we have a daft road safety policy based on the misconception that speed kills"

"I'm not advocating driving like an idiot (the road is NOT a racetrack), but the idea that speed and ONLY speed kills is the preserve of the deluded"


There is clearly a complete failure to understand what happens in a road crash or accident, whichever you want to call it. Accidents are made up of three parts:

CAUSE

Inattention is the most common cause of an accident, or as the law puts it, driving "without due care and attention".

http://www.legislation.gov.uk/ukpga/1988/52/section/3

Not looking properly, failing to take into consideration road conditions or the amount of traffic, and losing concentration make up the bulk of causes. The biggest single type is known as a right-of-way accident, where basically two vehicles want to be on the same bit of road at the same time. That is followed by loss of control on bends and by overtaking or filtering.

http://mile-muncher.co.uk/dft_rdsafety_035422.pdf

However, excessive speed for the conditions removes some of the chance you have to avoid the collision in the first place. This can also affect any compensation claim resulting from an accident.

http://www.motorcyclecompensation.co.uk/Speed.aspx

Here is one of the examples shown by the compensation website:

A motorbike overtaking at excessive speed hits a car doing a U-turn. The biker was held 100% responsible, as his speed meant he had ignored how it affected his ability to avoid potential hazards.
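To put rough numbers on how speed eats into your chance of avoiding a hazard, here is a minimal stopping-distance sketch. The one-second thinking time and 7 m/s² braking deceleration are illustrative assumptions for an alert rider on a dry road:

THINK_TIME = 1.0  # seconds of reaction time before braking starts (assumption)
DECEL = 7.0       # m/s^2 of braking deceleration on a dry road (assumption)

# Stopping distance = thinking distance (v * t) + braking distance (v^2 / 2a)
for mph in (30, 40, 50, 60, 70):
    v = mph * 0.44704  # miles per hour to metres per second
    metres = THINK_TIME * v + v**2 / (2 * DECEL)
    print(f"{mph} mph: about {metres:.0f} m to stop")

Because the braking term grows with the square of speed, going from 30 mph to 60 mph roughly triples the total stopping distance, turning an avoidable hazard into an impact.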


TYPE

Examples of the type of accident are a rear-end collision, a side impact, and a loss of traction causing a fall or skid.


SEVERITY

This is the part that those who say speed does not kill have failed to grasp. Whatever the cause or type of accident, its severity and survivability are determined by the speed at the point of impact, or as one biker put it, it is the sudden stop that kills.

From a Norwegian study from 2004 on speed and accidents:

http://www.trg.dk/elvik/740-2004.pdf

"The main findings of the research presented in this report can be summarised as
follows:

1. There is a strong statistical relationship between speed and road safety. When the mean speed of traffic is reduced, the number of accidents and the severity of injuries will almost always go down. When the mean speed of traffic increases, the number of accidents and the severity of injuries will usually increase.

2. The relationship between changes in speed and changes in road safety holds for all speeds in the range between about 25 km/h and about 120km/h.

3. The relationship between changes in speed and changes in road safety can be adequately described in terms of a power model, in which the relative change in the number of accidents or accident victims is a function of the relative change in the mean speed of traffic, raised to an exponent. The
following exponents summarise the effects of changes in speed:
a. Fatalities: 4.5
b. Fatal accidents: 3.6
c. Seriously injured road users: 3.0
d. Serious injury accidents: 2.4
e. Slightly injured road users: 1.5
f. Slight injury accidents: 1.2
g. Injured road users (severity unspecified): 2.7
h. Injury accidents (severity unspecified): 2.0
i. Property-damage-only accidents : 1.0

4. Several other mathematical functions may describe the relationship between speed and road safety, but the generality and simplicity of the power model makes it superior to other models. The model is, however,
not necessarily valid outside the range of speeds found in the present study (from about 25 km/h to about 120 km/h).

5. The relationship between speed and road safety is causal and can be explained in terms of elementary laws of physics and biomechanics. Speed is clearly a very important risk factor with respect to both accident
occurrence and injury severity."
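To make the power model concrete: the relative change in each outcome is the relative change in mean speed raised to the exponent above. A minimal sketch, using a hypothetical 10 per cent rise in mean traffic speed:

# Power model: outcome_after / outcome_before = (v_after / v_before) ** exponent
EXPONENTS = {
    "Fatalities": 4.5,
    "Injury accidents": 2.0,
    "Property-damage-only accidents": 1.0,
}

v_before, v_after = 100.0, 110.0  # hypothetical 10% rise in mean speed (km/h)
for outcome, exponent in EXPONENTS.items():
    ratio = (v_after / v_before) ** exponent
    print(f"{outcome}: x{ratio:.2f} ({(ratio - 1) * 100:+.0f}%)")

A 10 per cent rise in mean speed predicts roughly 54 per cent more fatalities but only 10 per cent more property-damage-only accidents: the faster you go, the more the consequences skew towards death rather than dented metal.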

Anyone with a basic knowledge of physics and some common sense will understand that the kinetic energy transferred at the point of impact, and its effect on the human body, is what causes injuries. The higher the speed, the more kinetic energy there is flying around and the more severe the injuries are likely to be. (I accept there is such a thing as a lucky escape, but people can also die from very minor collisions.)
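The arithmetic behind that is the kinetic energy formula E = ½mv²: energy grows with the square of speed, so doubling your speed quadruples the energy that must be dissipated at the point of impact. A minimal sketch, with the 300 kg bike-plus-rider mass an illustrative assumption:

MASS = 300.0  # kg, bike plus rider (illustrative assumption)

# Kinetic energy E = 1/2 * m * v^2
for mph in (30, 60):
    v = mph * 0.44704  # miles per hour to metres per second
    energy_kj = 0.5 * MASS * v**2 / 1000
    print(f"{mph} mph: about {energy_kj:.0f} kJ of kinetic energy")

Twice the speed, four times the energy flying around at the moment of the sudden stop.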

CONCLUSION

'Speed kills' refers to the effect speed has on the severity of an accident, which in turn determines survivability. The higher the speed, the less survivable an accident will be. Speed also robs the rider of opportunities to avoid the accident as the situation develops. This superb safe-driving commercial explains that in terms even the daftest of bikers should understand...