I alit on an article at Baseball Analysts about the value of the stolen base (SB) in NCAA baseball relative to MLB. The authors found the SB to be more valuable in college than in the major leagues. Within the piece, the authors used Baseball Prospectus' Run Expectancy Matrix to show that a caught stealing costs teams .63 runs. That figure is the conventional wisdom.
The link provided goes to Joe Sheehan's explanation, which uses 2003 data. The Run Expectancy Matrix shows how many runs a team is expected to score over the rest of the inning from each combination of runners on base and number of outs. The conventional wisdom comes from looking at expected runs with a man on 1st and no outs (.9116). To get the "cost" of a caught stealing, one shifts to no one on base with one out (.2783). The difference between the two figures, .6333, is the cost of a caught stealing in expected runs.
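For concreteness, here is a minimal sketch of that subtraction in Python, using only the two 2003 Baseball Prospectus values cited above (the rest of the matrix is omitted):

```python
# Run-expectancy values from the 2003 BP matrix cited above,
# keyed by (runners, outs). Only the two states needed here are included.
run_exp = {
    ("1st", 0):   0.9116,  # man on first, no outs
    ("empty", 1): 0.2783,  # bases empty, one out
}

# Cost of a caught stealing: the drop in expected runs when a
# man-on-first/no-out state becomes a bases-empty/one-out state.
cs_cost = run_exp[("1st", 0)] - run_exp[("empty", 1)]
print(f"Caught-stealing cost: {cs_cost:.4f} expected runs")  # ~0.63
```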
The data is from the 2003 season. However, that does not seem of much consequence; I eyeballed a couple of other years and found the numbers are basically the same. My quibble is with how simply the matrix treats the path from one-on-and-no-out to none-on-and-one-out. The latter state can be reached via the CS, but more typically it comes from the leadoff batter making an out on a strikeout or a batted ball. Should the two pathways be mixed indiscriminately? One rough way to check is sketched below.
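A sketch of that check: compute run expectancy for the bases-empty/one-out state separately by the event that created it, and see whether the averages agree. The layout below is hypothetical and the numbers are placeholders, not real 2003 data; a real test would pull every occurrence of the state from play-by-play data and tag it with the prior event.

```python
import pandas as pd

# Hypothetical extract: each row is an occurrence of the
# bases-empty/one-out state, tagged with the event that produced it
# and the runs scored from that point to the end of the inning.
# Values are purely illustrative placeholders.
state_occurrences = pd.DataFrame({
    "prior_event": ["leadoff_out", "leadoff_out", "caught_stealing",
                    "pickoff", "leadoff_out", "caught_stealing"],
    "runs_rest_of_inning": [0, 1, 0, 0, 2, 0],
})

# Run expectancy conditioned on how the state was reached: if the
# pathways really are interchangeable, these averages should be close.
print(state_occurrences.groupby("prior_event")["runs_rest_of_inning"].mean())
```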
Another thing I noticed: the expected runs with men on first and third are roughly double those with a man on first alone, at every out count. Would teams be better served focusing on going first to third more often than on the adage about not making the last out at third?
Just a thought.