
http://www.espn.com/mlb/story/_/id/26107902/what-sealed-bryce-harper-record-deal-phillies

I threw up in my mouth a little bit.

While opt-outs were discussed during the negotiations, Harper, in the end, said he didn't want one if it cost him a single guaranteed dollar.

Why not just applaud him once and move on? You guys are so weird.

It's journalistic malpractice not to include the following context: the other, better articles on this have explained that the Phillies were only willing to give him the record-breaking amount if there was no opt-out.

saw this on twitter: Since 2012:
Winning percentage in games in which Harper plays: .558
Winning percentage in games without Harper: .580 (207 games)

We don’t know. Before the Soto explosion, Robles was supposed to be our stud. If he’s half as good as Acuna was last year though, that’s a really good outfield.

He’s getting $70 million over the next two years including the signing bonus, before a strike that seems almost a certainty. And then he gets paid through 2032. Seems like a good deal for Harper, and a reasonable AAV for the Phillies, so they have room to extend Realmuto and Hoskins and add starting pitching at the deadline, which they need.

He’s making $30 million ($20 million of it signing bonus) this year, $26 million a year for years 2-10, and $22 million for years 11-13.

Small sample size as usual, Imref.

207 games is a small sample size?

As compared to over 1100 games he played. It’s basically like comparing stats from a guy who has played one year to someone who has played six.

Except that in actual statistical terms it's not. Variability is fairly high for most stats over seasons, which is why six seasons is more valuable. But you don't actually lose that much statistical validity if you have a sample of 200 games from the same time period as the 1100 - that's the key difference. One season versus six is, in baseball terms, apples to pears because variability is high between seasons. Samples from the same population (the same seasons) are apples to apples. The margin of error on a 95% confidence interval (standard) for a sample of 200 is about 7%. At 1100 it's 3%. That result isn't significant at a p value <.05 but it's not entirely meaningless either.
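The margin-of-error figures above can be checked directly, along with a two-proportion z-test on the winning percentages quoted earlier in the thread. A quick sketch in Python (the ~1100 games with Harper is the thread's rough figure, assumed here as the sample size):

```python
import math

def moe(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion
    (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(moe(200), 3))   # about 0.069 -> the ~7% figure
print(round(moe(1100), 3))  # about 0.030 -> the ~3% figure

# Two-proportion z-test on the thread's numbers: .558 over ~1100 games
# with Harper vs .580 over 207 games without him (game counts assumed).
p1, n1 = 0.558, 1100
p2, n2 = 0.580, 207
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se  # about 0.59, well short of the 1.96 needed for p < .05
```

Which is exactly the conclusion above: the .558 vs .580 gap is real in the data but nowhere near statistically significant.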

So you would have faith that a player who produced stats for one year would have the same odds of doing it again as someone who did it for six years?? Not meaningless as you say. Anyways I am just teasing imref because VARK is apparently out to lunch. It’s his job. I’m just the back up.

So you would have faith that a player who produced stats for one year would have the same odds of doing it again as someone who did it for six years?? Not meaningless as you say.

I just hope you have a good ST so you don't have to go to Fresno.

No. That's not what I mean. What I mean is that there is actually an ocean of difference between projections drawn off a 200-game (or, for ease of baseball use, 162-game) sample taken from across the seven years and projections drawn off one complete season.

Put otherwise: if you want to know how good a player actually is, would you rather know his stats in (1) one complete season out of seven or (2) 162 games chosen entirely at random out of those seven seasons? (Note that this is not an opinion question: there's a right answer and a wrong one.)
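The point can be demonstrated with a small simulation. All the numbers below are made up for illustration: a hypothetical player whose true per-game success rate drifts season to season. The cross-season random sample tracks his true career rate more tightly than any single season does, because a single season also carries the season-to-season variability:

```python
import random
import statistics

random.seed(0)

# Toy model (all rates hypothetical): a player's true per-game success
# rate drifts season to season, roughly like a batting average would.
season_rates = [0.27, 0.27, 0.27, 0.33, 0.24, 0.32, 0.25]
true_mean = statistics.mean(season_rates)

# Simulate 7 seasons x 162 games of binary outcomes.
games = [1 if random.random() < rate else 0
         for rate in season_rates for _ in range(162)]

def one_season_estimate():
    """Estimate the career rate from one complete season."""
    s = random.randrange(7)
    return statistics.mean(games[s * 162:(s + 1) * 162])

def random_sample_estimate():
    """Estimate the career rate from 162 games drawn across all seasons."""
    return statistics.mean(random.sample(games, 162))

def rmse(estimator, trials=2000):
    """Root-mean-square error of an estimator against the true rate."""
    return (sum((estimator() - true_mean) ** 2
                for _ in range(trials)) / trials) ** 0.5

# The one-season estimate has visibly higher error, because it adds the
# between-season variance on top of ordinary sampling noise.
print(round(rmse(one_season_estimate), 3))
print(round(rmse(random_sample_estimate), 3))
```

So the random 162-game sample is the right answer to the question above: it averages over the good and bad seasons instead of betting everything on which season you happened to draw.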

Does this sound like one of the best players in the game after 7 seasons?
Average: .270, .274, .273, .330, .243, .319, .249
Home runs: 22, 20, 13, 42, 24, 29, 34
RBI: 59, 58, 32, 99, 86, 87, 100