Strand Rate (SR), tracked by Ron Shandler at Baseball HQ, typically rests in the 65-80% range. It measures the percentage of runners a pitcher puts on base who are left stranded, i.e., who do not come around to score. A few weeks ago on Jeff Erickson's Fantasy Focus on XM Radio, Mr. Shandler spoke about Milwaukee Brewer, and sabre favorite, Dave Bush, and the bafflement his ERA is causing. With a K:BB ratio of 4, he would be expected to post an ERA in the mid-3.00s. As an explanation, Mr. Shandler points to Bush's unusually low 59% SR. This means roughly four runners out of every ten that reach base while Dave Bush pitches come around to score. (Six runners are stranded.)
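To make that arithmetic explicit, here is a minimal sketch (the function name and interface are my own illustration, not anything from Baseball HQ) converting a strand rate into runners scoring per ten baserunners:

```python
def runners_scoring_per_ten(strand_rate):
    """Given a strand rate as a fraction (e.g. 0.59 for 59%),
    return how many of every ten baserunners come around to score."""
    if not 0.0 <= strand_rate <= 1.0:
        raise ValueError("strand_rate must be between 0 and 1")
    return 10 * (1 - strand_rate)

# Bush's 59% strand rate leaves about 4 of every 10 runners scoring;
# a typical 75% rate leaves 2.5.
```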
The Run Expectancy Matrix (REM) at Baseball Prospectus gives the expected number of runs a team will score in the remainder of an inning based on the number of outs and the runners-on-base situation, e.g., man on first and one out, second and third with two outs, man on third with no outs. The data has been remarkably consistent over the years and is the basis of the argument against stealing bases that is popular among the statistically minded set.
| Base state  | 0 outs  | 1 out   | 2 outs  |
|-------------|---------|---------|---------|
| Man on 1st  | 0.90078 | 0.53083 | 0.23338 |
| Man on 2nd  | 1.15399 | 0.70857 | 0.34067 |
| Man on 3rd  | 1.38554 | 0.96150 | 0.42989 |
| 1st and 2nd | 1.46360 | 0.86489 | 0.45553 |
| 1st and 3rd | 1.81674 | 1.18925 | 0.52363 |
| 2nd and 3rd | 2.05521 | 1.48860 | 0.58935 |
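To put the table side by side with Strand Rate, here is a small sketch (my own transcription of the values above, with a made-up helper name) that divides each run expectancy by the number of runners on base. This naive division is only a rough point of comparison: the expectancy also counts runs contributed by the batter and later hitters, not just the runners already aboard.

```python
# Run Expectancy Matrix values transcribed from the table above:
# base state -> expected runs with (0, 1, 2) outs.
REM = {
    "man on 1st":  (0.90078, 0.53083, 0.23338),
    "man on 2nd":  (1.15399, 0.70857, 0.34067),
    "man on 3rd":  (1.38554, 0.96150, 0.42989),
    "1st and 2nd": (1.46360, 0.86489, 0.45553),
    "1st and 3rd": (1.81674, 1.18925, 0.52363),
    "2nd and 3rd": (2.05521, 1.48860, 0.58935),
}

# How many runners occupy the bases in each state.
RUNNERS = {
    "man on 1st": 1, "man on 2nd": 1, "man on 3rd": 1,
    "1st and 2nd": 2, "1st and 3rd": 2, "2nd and 3rd": 2,
}

def naive_runs_per_runner(state, outs):
    """Expected runs divided by runners on base -- a crude stand-in for
    'chance each runner scores', ignoring runs added by the batter and
    subsequent hitters."""
    return REM[state][outs] / RUNNERS[state]
```

For example, a man on first with no outs yields a naive 0.90078 runs per runner, far above the roughly one-in-four scoring rate a 75% strand rate implies, which already hints the two numbers are not measuring quite the same thing.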
What got me thinking is the congruence between the Strand Rate and the Run Expectancy Matrix. If a typical SR of 75% means one in four baserunners will score regardless of the out situation, how does that jibe with the REM showing roughly a quarter run only in the least favorable situation in the table, a lone runner on first with two outs? Can a result as persistently consistent as the REM be wrong while SR is right? Are the two comparable at all?