The Bill James Guide to Baseball Managers

Author: Bill James

Durocher, who grew up essentially fatherless, once said that he had spent his life looking for father images. In a sense, all managers in the generation before Durocher (and most managers after) were paternal managers, surrogate fathers for their players. Durocher was more like an older brother, not all that much older, and certainly not much more responsible. Other managers did bed checks. Durocher, in effect, gave his players permission to hit the bars and woo the women until all hours, so long as they were ready to play ball at game time. And if you weren’t ready to play ball at game time, God help you.

And because he was so successful, Durocher opened up the field for a certain number of managers to follow—Billy Martin, most obviously. He changed the image of what a manager could be, took some of the starch out of it.

Before Durocher, managers tended to be stars. After Durocher, they tended to be scrappy middle infielders.

HANDLING THE PITCHING STAFF

Did He Like Power Pitchers, or Did He Prefer to Go with the People Who Put the Ball in Play?
Durocher liked a hard-throwing pitcher who threw strikes and worked inside. His idea was that if you could find a pitcher who had a good arm and convince him to back the hitter off the plate with an inside fastball, then nail the outside corner, you’d have something.

He had tremendous success in turning around pitchers with this philosophy. Whitlow Wyatt was thirty-one years old when he joined the Dodgers, and had a career record of 26 wins, 43 losses. For Durocher, he went 8–3, 15–14, 22–10, 19–7 and 14–5. Kirby Higbe was 27–34 in his career before joining the Dodgers, but went 22–9, 16–11, 13–10, and 17–8 in his four full seasons under Durocher. Sal Maglie was thirty-three years old and had five career wins before he joined Durocher. Durocher made him a household name. Johnny Antonelli was 17–22 in his career before Durocher. In his first season for Durocher, he went 21–7 with a 2.30 ERA.

With the Cubs, Durocher developed Ken Holtzman and Ferguson Jenkins, plus Bill Hands. Holtzman had obvious promise, and his development was widely anticipated, but absolutely no one foresaw that Ferguson Jenkins had Hall of Fame potential.

Durocher also had good luck with knuckleball pitchers. He got a sensational year out of Freddie Fitzsimmons in 1941, when Fitzsimmons went 16–2 as a spot starter. Fitzsimmons taught the knuckleball to Larry French, whose career was almost over, and French went 15–4 with a 1.83 ERA in 1942. Ten years later, Durocher brought Hoyt Wilhelm to the major leagues.

Did He Stay with His Starters, or Go to the Bullpen Quickly?
In the 1940s and early 1950s, Durocher went to his bullpen more readily than any other major league manager. But with the Cubs near the end of his career, he was leading the league in complete games.

In 1946 Durocher used 223 relievers in 157 games; this led the National League. In 1967 he used 244 relievers in 162 games, but the National League average was 254.

Did He Use a Four-Man Rotation?
He did after the war when the schedule permitted it, yes. Before the war he had the usual combination of two starters and five starter/relievers making fifteen starts apiece, but when the schedules became more regular after the war, he went to a four-man rotation.

Did He Use the Entire Staff, or Did He Try to Get Five or Six People to Do Most of the Work?
He worked his best pitchers hard, probably too hard. He had many pitchers who led the league in games, starts, and innings.

How Long Would He Stay with a Starting Pitcher Who Was Struggling?
Not long.

What Was His Strongest Point As a Manager?
That his teams gave such great effort.

To finish the thought from before, in many ways Durocher was a new type of manager, but in many ways he was also an anachronism. The public humiliation of players who failed him, the intimidation of the opposition, the manipulation of players through a volatile combination of friendship and fear—all of this was more characteristic of the John McGraw-era manager than of the modern steward.

I don’t endorse this gamesmanship, I don’t admire it, and I wouldn’t hire a manager who treated people that way. That’s beside the point, because somebody who did the kind of stuff Durocher did couldn’t manage in the 1990s. The players wouldn’t respond to it.

But it did produce a wonderful effect on his ball club. Durocher’s teams came to beat you. They hustled, they fought, they looked for every opening and every edge. Like Casey Stengel, he was a master manipulator. And he was a great manager.

If There Was No Professional Baseball, What Would He Probably Have Done with His Life?
He’d have been a show business agent.

Leo Durocher’s All-Star Team

Billy Southworth’s All-Star Team

Rolling in the Grass

In the March 1948 issue of Sport magazine, Ralph Kiner confessed that he didn’t much like to bunt. Kiner had hit 51 homers the previous season. A Sport reader named Bob Wilson ripped out an angry letter to the editor. “Kiner says he doesn’t like to bunt. Well, isn’t that too bad. It looks like Mr. Ralph Kiner doesn’t care whether or not a bunt will help his team, he just wants a homer or nothing.”

To bunt, in the 1940s, was not merely a strategic option. Like standard grammar, the sacrifice bunt had grown into a moral imperative. Everybody bunted, and everybody was expected to bunt. The 1948 Boston Braves, champions of the National League, laid down 140 sacrifice bunts, almost one per game. Every major league team bunted at least 56 times, with Ralph Kiner’s Pirates having the fewest.

Decades pass, and the sacrifice bunt has fallen into disfavor. George Orwell’s classic 1984 was written in 1948; Orwell just reversed the last two digits of the year. By 1984 the number of sacrifice bunts (per game) had fallen by more than 40% since the end of World War II. Ralph Kiner had only 9 sacrifice bunts in his career. Harmon Killebrew had 9 fewer.

If I had been asked what happened to the sac bunt, without research, I would have pointed to:

a) the reemergence of the stolen base,

b) artificial turf, and

c) the designated hitter rule.

The stolen base and the sac bunt are competing options. You don’t bunt with Rickey Henderson on first base. When speed came back into the game, the number of situations in which the bunt was in order was reduced. Artificial turf is difficult to bunt on, as everybody has been told, because the ball won’t roll dead in the grass, and the DH rule took the bat out of the hands of the guys who bunted most often.

Unfortunately, none of this fits the facts. More specifically, none of it fits the time line. The number of sacrifice bunts per game

  • declined more than 20% between 1948 and 1957,
  • was fairly constant from 1957 to 1981, and
  • dipped by another 20-plus percent in the early 1980s.

The rise in stolen bases, the arrival of artificial turf, and the adoption of the designated hitter rule all came in the middle of that long stretch, when the number of sacrifice bunts per game changed hardly at all.

A better explanation is that the sacrifice bunt was pushed toward oblivion first by the long ball (1948–1957), and second by logical arguments against the bunt (1981–1984).

Between 1948 and 1957 home runs in the major leagues increased by more than 40%. The increase in power worked against the bunt on several levels, which can be summarized in this statement: that not only do you not bunt with Harmon Killebrew, but you also don’t bunt with the guy who bats ahead of Harmon Killebrew. For many years, with offenses based around line drive hitters, managers had thought about scoring position, getting the runner in scoring position. But Harmon Killebrew doesn’t hit that many singles anyway, so “scoring position” is really not a meaningful concept for him. All you’re doing, by bunting ahead of Killebrew, is inviting an intentional walk.

As home run hitters flooded into the game in the 1950s, bunting opportunities went out. So power, not speed, was the first thing that happened to the sacrifice bunt.

The second decline in bunting, twenty-five years later, is attributable to a higher power: the power of ideas. The most successful American League manager of the 1970s was Earl Weaver. About 1980, Weaver grew into something of a prophet, an Old Testament Prophet, perhaps, spitting baseball wisdom as freely as tobacco juice. I’m a great admirer of Earl Weaver’s, and I don’t mean any disrespect.

Anyway, Weaver didn’t like the sacrifice bunt. As he put it in Weaver on Strategy, his fine 1984 book with Terry Pluto, “If you play for one run that’s all you’ll get.” That was the fifth of Weaver’s ten commandments, which he called “laws” because he didn’t want people comparing him to Moses. His sixth law was “Don’t play for one run unless you know that run will win a ballgame,” and his fourth law was “Your most precious possessions on offense are your twenty-seven outs.” All of which means pretty much the same thing: The bunt is for losers.

It was a great day to be a baseball writer, and many writers picked up on Weaver’s ideas. “Baseball is a game of big innings,” wrote Thomas Boswell. Boswell pointed out that in some very large percentage of games (I forget the number), the winning team scores more runs in one inning than the losing team does in all nine. Dan Okrent took the reader inside Earl Weaver’s strategy in Nine Innings.

At the same time, sabermetric research, which had built up unpublished for twenty years, was bursting into the light. The research at that time tended to advance the same notion: that the bunt was a bad play. In fact, that’s a direct quote from The Hidden Game of Baseball, also published in 1984:

The sacrifice bunt … is a bad play, as several modern-day managers—but not enough of them—have concluded.

—Thorn and Palmer

In my own Baseball Abstracts, I was skeptical about the value of the bunt. Earl Weaver clearly had an impact on the thinking of younger managers. Whether the rest of us had any impact, or whether we were just piling on, is an open question. I remember another image from that time, a dugout confrontation between Reggie Jackson and Billy Martin. It happened when Martin signaled for a bunt. Reggie, drawing on his genius for irritating theatrics, attempted unsuccessfully to bunt, and at the same time clearly conveyed to the entire stadium that it was an insult for a hitter of his stature to be asked to lay one down. Words followed; fists, blood, suspensions, threats to resign. The power hitter’s dislike of the bunt, an infant to be scolded in the time of Ralph Kiner, had matured into an ugly adult.

Cynically, we could argue that managers backed away from the bunt because (unlike Billy Martin) they lacked the stones to confront their power hitters. More charitably, there was a battle of ideas, and more often than not the bunt came out of it looking like a bad one.

Time passes; there are other books and other prophets. The number of bunts per game has gradually increased since 1984, reaching as high as 80 per hundred games (1993). And I’ve had second thoughts, and I’ve done some additional research. I am no longer convinced that the sacrifice bunt is a poor percentage play.

Let’s take the Palmer/Thorn argument. Pete Palmer argued that the sacrifice bunt is a bad play because it tends to create a worse situation for the offensive team, rather than a better situation. With a runner on first and no one out, for example, a major league team can usually expect to score .783 runs (that is, if a team is in that situation 1,000 times, they can be expected to score about 783 runs). With a runner on second and one out, on the other hand, the expected runs would be .699. Thus, by bunting 1,000 times, a team could expect to score 84 runs fewer than if they didn’t bunt at all.

For most of the situations in which a bunt may be used, Palmer argued, it will result in a net loss of runs scored.
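As a quick check on the arithmetic, Palmer’s two figures can be compared directly. The run values below are the league averages quoted above, not numbers of my own:

```python
# Palmer's run-expectancy figures, as quoted in the text
re_before = 0.783  # runner on first, none out
re_after = 0.699   # runner on second, one out (after a successful sacrifice)

delta = re_before - re_after
print(round(delta, 3))                          # 0.084 runs given up per bunt
print(round(1000 * delta))                      # 84 runs per 1,000 bunts
print(round(100 * (re_before / re_after - 1)))  # a 12% edge for the "before" state
```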

As I’ve thought about it over the years, however, I’ve become less convinced by this argument. First, runs scored one at a time are obviously somewhat more valuable than runs scored in big innings. How much more valuable? Perhaps as much as 50% more.

Suppose that you take two teams, one of which scores runs only one at a time, and the other of which scores runs only in groups of three. Earl Weaver’s ultimate team: nothing but three-run homers. One team scores in 50% of all innings, one run per inning; that’s 4.50 runs per game. The other team scores in only one-sixth of its innings, but scores three runs at a time; that also is 4.50 runs per game.

When these two teams play against one another, who will win? The team which has the big innings will win some games 15–2, but will be shut out in 19% of its games. The team which scores one run at a time can’t score more than nine in a game, but will be shut out only once in 500 games. Because of this, the team which scores runs one at a time will win 55% of the games—actually, 55.2816% of the games.

This is a significant advantage. In 162 games, the one-run team would win 90. The big-inning team, scoring exactly as many runs, would win 72 games. Depending on how one wishes to phrase it, runs scored one at a time are 11% to 24% more powerful than runs scored in three-run groups. The winning percentage of the one-run team, playing constantly against the big-inning team, is .553, which is 11% above .500. On the other hand, scoring exactly the same number of runs, the one-run team wins 24% more games than the big-inning team.
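The thought experiment is simple enough to reproduce. Here is a minimal Monte Carlo sketch of the two teams; this is my own illustration, not the program described later in the text, and I assume ties are settled by playing simultaneous extra innings:

```python
import random

rng = random.Random(7)

def one_run_inning():
    # the one-run team scores exactly 1 run in half of all innings (4.50 runs/game)
    return 1 if rng.random() < 0.5 else 0

def big_inning():
    # the big-inning team scores exactly 3 runs in one-sixth of innings (also 4.50)
    return 3 if rng.random() < 1 / 6 else 0

def one_run_team_wins():
    a = sum(one_run_inning() for _ in range(9))
    b = sum(big_inning() for _ in range(9))
    while a == b:  # play simultaneous extra innings until the tie breaks
        a += one_run_inning()
        b += big_inning()
    return a > b

games = 200_000
wins = sum(one_run_team_wins() for _ in range(games))
print(f"one-run team wins {wins / games:.3f} of games")
```

Run with enough games, this should land near the .553 figure above.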

But wait a minute. This is based on the assumption that the one-run team always plays for one run, even in situations in which it would make no sense to do so. This is the advantage of a one-run strategy not at a time when a manager would actually use a one-run strategy, but simply at an indiscriminate moment. Trailing 6–1 in the seventh inning, the one-run team still can score only one run at a time. Suppose that we altered the study so that the one-run team didn’t bunt when they were behind by two or more runs.

I wrote a simple computer program to simulate this contest. Team B, the big-inning team, scores runs only in three-run groups, and scores once every six innings. Team A scores runs only one at a time, except when Team A is behind by two or more runs. In that case, Team A performs the same as Team B. Both teams score 4.50 runs per nine innings.

The winning percentage of Team A in this simulation was .595 (.603 in home games, .587 on the road). Scoring exactly the same number of runs, the team which usually played for one run won 47% more games than the team which always played for the big inning. This still is not choosing carefully when to play for the big inning, and when to play for one run; this is just using a little bit of discretion. Choosing more carefully when to play for the big inning, the one-run team might win 50 or 60% more games than the big-inning team.
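That second experiment can be sketched the same way. This is my own reconstruction, not the original program; the ordering of half-innings and the tie-breaking rule are assumptions, so the exact figure will differ a little from the .595 reported above:

```python
import random

rng = random.Random(7)

def big_inning():
    return 3 if rng.random() < 1 / 6 else 0

def team_a_inning(deficit):
    # Team A plays for one run unless trailing by 2+ runs, in which
    # case it adopts the big-inning profile. Either mode averages
    # 0.5 runs per inning, so both teams still score 4.50 per game.
    if deficit >= 2:
        return big_inning()
    return 1 if rng.random() < 0.5 else 0

def team_a_wins(a_bats_first):
    a = b = 0
    inning = 0
    while inning < 9 or a == b:  # nine innings, then extras on a tie
        inning += 1
        if a_bats_first:
            a += team_a_inning(b - a)
            b += big_inning()
        else:
            b += big_inning()
            a += team_a_inning(b - a)
    return a > b

games = 200_000
wins = sum(team_a_wins(a_bats_first=(g % 2 == 0)) for g in range(games))
print(f"Team A wins {wins / games:.3f} of games")
```

Because the mode switch only raises Team A’s variance in innings where it is already behind, Team A’s winning percentage can only improve on the .553 baseline.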

Plug that back into Palmer’s research. Palmer concluded that a team had a run expectation of .783 before a bunt and .699 after a successful bunt—a 12% advantage for the “before” situation. But if one-run innings are as much as 50% more powerful one-for-one, we might still conclude that one should bunt early and bunt often.

This is just one of the problems with the analysis. The .783 “run potential” for a man on first/none out situation is not a fixed value, constant for all occasions; rather, this is the center of a range of values which represents many such situations. With Frank Thomas at the plate, the team’s expected number of runs would be much higher than .783. With Manny Alexander at bat, the number would be much lower.

So the real question is, do the run-potential ranges ever overlap? The .783 figure would often be lower than .783; the .699 figure would often be higher than .699. Would there ever be real-life situations in which the expected runs would be higher after the bunt?

Well, unless the ranges of run potential are awfully narrow, they’d have to overlap, wouldn’t they? .783 isn’t that much higher than .699, considering that

a) the largest variable in a more careful evaluation of the specific situation would be the quality of the hitter, and

b) the variation in run-producing abilities of various hitters is much more than the 12% difference between .783 and .699.

So what this research proves, it seems to me, is not that the bunt is a bad play, but merely that with a runner on first and no one out, there are more situations in which one should not bunt than situations in which one should bunt. Teams should bunt less than 50% of the time. Since teams do bunt less than 50% of the time in that situation, this is hardly a revelation.

In the 1990 edition of the Baseball Scoreboard, from STATS Inc., the editors of this exceptional publication studied the value of sacrifice bunts in an article entitled “Do Sacrifices Sacrifice Too Much?” Working with a database of all major league games played in a three-year period (1987–1989), STATS editors Don Zminda and John Dewan sorted out all innings in which teams had a runner on first and no one out. They then split that large group of innings (about 40,000 innings over the three seasons) into two classes—innings in which there was a bunt attempt, and innings in which there was no bunt attempt. Actually, they split the data a lot more ways than that; I’m simplifying.

The study provided an empirical validation of Pete Palmer’s theoretical results. Teams do, in fact, score more runs in innings when they don’t bunt than in innings in which they do bunt.

This study, however, has the same problem, in a different guise. Suppose that the 1927 New York Yankees have 200 situations in which there is a runner on first and no one out. In 100 of those situations Babe Ruth or Lou Gehrig is up to bat; in the other hundred situations, the batter is Joe Dugan or Cedric Durst or Ray Morehart. Suppose that the team bunts 50 times in those 200 situations. Would we expect the Yankees to be equally likely to bunt with Babe Ruth at bat, or Cedric Durst?

Of course not. What would happen is that the team would not bunt with Gehrig or Ruth, but might bunt very often with Dugan, Durst, or Morehart.

If studied after the fact, as STATS did, all of the at bats in which Ruth or Gehrig was at the plate would be sorted into the “nonbunt” category. Many of the at bats with weak hitters at the plate would go into the “bunt” category. Of course the runs scored would be higher in the “nonbunt” category than they would in the “bunt” group. This doesn’t do anything to show that the bunt is a bad play.

I asked Pete Palmer, who is a friend of mine, whether he had tried to determine values for his model with individual players, rather than overall averages. His reply was that he had studied it based on batting-order positions—that is, with a typical number-three hitter at bat, a typical number-seven hitter, etc. The model still produced more runs without bunts than with them.

This doesn’t do much to solve the problem, however, because the variation between individual players is vastly greater than the variation between typical batting-order positions. In the National League in 1993, for example, a typical number-two hitter hit .279 with 10 homers, 64 RBI, a .388 slugging percentage. A typical number-three hitter wasn’t a whole lot better, hitting .291 with 19 homers, 94 RBI, a .439 slugging percentage. In the typical case, the sacrifice bunt would transfer the RBI opportunity to a player whose batting average was only 12 points higher, and whose slugging percentage was only 51 points higher.
