Expectations and Realities: Week 4 AP Top 25 College Football Teams

Welcome to the 4th week of breaking down how the top 25 teams performed compared to the expectations set by vegasinsider.com’s lines. Results from previous weeks can be found at the bottom.

This week Louisville and Virginia Tech fell out of the top 25 and are not included in this week’s results. They are replaced by Nebraska and Oklahoma State.

Week 4 Results

In Table 1 and Figure 1 below you will find the AP top 25 teams, each team’s opponent, the predicted spread, the actual result, and the percent error, listed in rank order. Teams with a positive percent error performed better than expected, or covered the spread, while teams with a negative percent error underperformed, or did not cover the spread. Note: this is not a measure of whether a team won or lost a game, but rather of how well the team won or lost relative to the line.

Rank Team Opponent Spread Actual %Error
1 Florida State Clemson -9.5 -8 -16%
2 Oregon Washington State -23.5 -7 -70%
3 Alabama Florida -14 -21 50%
4 Oklahoma West Virginia -8 -12 50%
5 Auburn Kansas State -7.5 -6 -20%
6 Texas A&M Southern Methodist -34.5 -52 51%
8 LSU Mississippi State -7 5 -171%
11 Michigan State Eastern Michigan -43.5 -59 36%
13 Georgia Troy -41.5 -66 59%
14 South Carolina Vanderbilt -22 -14 -36%
18 Missouri Indiana -15 4 -127%
19 Wisconsin Bowling Green -26.5 -51 92%
20 Kansas State Auburn 7.5 6 20%
21 Brigham Young Virginia -15 -8 -47%
22 Clemson Florida State 9.5 8 16%
24 Nebraska Miami-Florida -8 -10 25%

Baylor, Notre Dame, Ole Miss, UCLA, Arizona State, Stanford, USC, and Ohio State each had bye weeks and are not included in Table 1 or Figure 1. Wisconsin beat the spread by the largest margin at 92%. LSU missed the spread by the largest margin at -171% after being upset by Mississippi State. Week 4 was also a notable week for SEC rivals Alabama and Auburn: it was the first week that Alabama beat the spread (50% error) and the first week that Auburn failed to cover (-20% error).

Weekly Summary

Now that we are past the 3rd week of college football, we have enough data for some statistics. Every team has played at least three games, giving each team a minimal sample to compare. A single-factor analysis of variance (ANOVA) on the first four weeks of play reveals no statistically significant difference in mean percent error among the teams (p=0.829). The results of the pairwise Tukey multiple comparisons can be found here. This means that, on average, all teams have performed in a similar manner relative to their expectations. This isn’t unexpected given how variable the data are.
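
For anyone who wants to reproduce this kind of test, here is a minimal sketch in Python using SciPy. It uses three teams’ actual weekly percent errors from the tables in these posts; the full analysis used every qualifying team, so the p-value here will not match 0.829.

```python
# One-way ANOVA: is mean percent error the same across teams?
# Percent errors below are taken from the weekly tables in these posts
# (UCLA had a week 4 bye, hence only three values).
from scipy import stats

percent_errors = {
    "Alabama":  [-56.5, -2, -13, 50],
    "Oklahoma": [-4.5, 84, 14, 50],
    "UCLA":     [-57.9, -69, -65],
}

f_stat, p_value = stats.f_oneway(*percent_errors.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # a large p -> no significant difference
```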

While there may not be any differences now, there might be some practical information to be taken from the data. Below are five boxplots to aid in understanding the data. Each boxplot contains five of the top 25 teams in order to reduce crowding.

Figure 2 reports the percent error for the first five teams. As stated earlier, this is the first week that Alabama has beaten the spread. Will they continue, or revert to the form of their previous three weeks? Will Auburn resume beating the spread after failing to do so for the first time this season? Baylor had a bye week, but expect them to continue their trend of beating the spread this week vs Iowa State. Arizona State and Brigham Young are not consistently beating the spread.

Figure 2 – Boxplots of Percent Error for Alabama, Arizona State, Auburn, Baylor, and Brigham Young.

There are three important details in Figure 3. First, Florida State has yet to beat the spread this season. However, Florida State’s percent error has increased over their three games, suggesting they might beat the spread in the near future. Second, the performance of Clemson, Georgia, and Kansas State has been too variable to draw conclusions. Third, this is the first week that LSU did not beat the spread.

Figure 3 – Boxplots of Percent Error for Clemson, Florida State, Georgia, Kansas State, and LSU.

The data in Figure 4 are more straightforward. This group of teams has a variable percent error. Michigan State is scoring close to the spread each week. Missouri and Notre Dame are mostly beating the spread. Ohio State beat the spread in two of their three games, but the one week they missed, they missed by a large margin (-240%), which skews their plot.

Figure 4 – Boxplots of Percent Error for Michigan State, Missouri, Nebraska, Notre Dame, and Ohio State.

Figure 5 has two groups. Oklahoma, Ole Miss, and Oregon have performed consistently each week, while Oklahoma State and South Carolina have performed inconsistently. Oklahoma has beaten the spread in each of the past three weeks, missing only in week one with a -4.5% error. Ole Miss has beaten the spread by 52% or more every week this season.

Figure 5 – Boxplots of Percent Error for Oklahoma, Oklahoma State, Ole Miss, Oregon, and South Carolina.

Figure 6 is an interesting group. Stanford beat the spread in weeks one and three but fell short by 200% in week two. Texas A&M has beaten the spread by 51% or more in every week but week three. UCLA has not beaten the spread this season and has done no better than the -58% mark. USC beat the spread by more than 100% in weeks one and two but missed it by more than 200% in week three. Wisconsin missed the spread by 14% and 17% in weeks one and two, but after coming off a week three bye, smashed the spread by 92% in week 4.

Figure 6 – Boxplots of Percent Error for Stanford, Texas A&M, UCLA, USC, and Wisconsin.

This week’s takeaways: if Baylor, Oklahoma, and Ole Miss keep the same pace, expect them to beat the spread. UCLA won’t cover the spread if they continue the same pattern. Auburn and Alabama have each broken their streaks. The rest of the top 25 teams are not demonstrating any patterns after four weeks.

 

Important notes:

  1. Spreads can be found at vegasinsider.com.
  2. Percent error is calculated as (Spread-Actual)/Abs(Spread). A short worked sketch follows these notes.
  3. I understand that spreads are typically used for gambling purposes and that the lines move. However, the spreads need to be reasonably accurate in order for the house or bookie to make money, which makes the lines a consistent source of weekly predictions.
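
To make note 2 concrete, here is a minimal Python sketch of the percent error calculation, checked against two lines from these posts:

```python
def percent_error(spread: float, actual: float) -> float:
    """Percent error of the actual margin relative to the predicted spread.
    Negative values mean the ranked team was favored (spread) or won (actual)."""
    return (spread - actual) / abs(spread) * 100

# Oklahoma, week 3: favored by 21 (spread -21), won by 24 (actual -24).
print(f"{percent_error(-21, -24):.0f}%")  # 14% -> beat the spread
# LSU, week 4: favored by 7 (spread -7), lost by 5 (actual +5).
print(f"{percent_error(-7, 5):.0f}%")     # -171% -> missed the spread badly
```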

 

Previous results – week 1 week 2 week 3

Expectations and Realities: Week 3 AP Top 25 College Football Teams

Welcome to the 3rd week of breaking down the top 25 teams’ performance compared to their expectations. Here are the results of week 1 and week 2 if you want to see previous performance comparisons; in those articles you will find vegasinsider.com’s lines for the AP top 25 teams compared to the actual scores for those weeks’ games. Here are the results of week 3.

This week Nebraska and North Carolina fell out of the top 25 and will not be included in this week’s results.

In Table 1 and Figure 1 below you will find the 25 teams ranked in the AP poll, the predicted spread, the actual result, and the percent error. The teams are sorted by how well they performed against the spread for that game. Teams with a positive percent error performed better than expected, or covered the spread, while teams with a negative percent error underperformed, or did not cover the spread. Note: this is not a measure of whether a team won or lost a game, but rather of how well the team won or lost relative to the line. For example, Oklahoma was expected to beat Tennessee by 21 but outscored them by 24 points, performing 14% better than predicted.

Rank Team Opponent Spread Actual %Error
24 South Carolina Georgia 6.5 -9.5 246%
20 Missouri Central Florida -10.5 -28 167%
22 Ohio State Kent State -31 -66 113%
14 Ole Miss LA-Lafayette -27 -41 52%
8 Baylor Buffalo -33.5 -42 25%
15 Stanford Army -30 -35 17%
4 Oklahoma Tennessee -21 -24 14%
10 LSU Louisiana Monroe -31 -31 0%
3 Alabama Southern Miss -46 -40 -13%
7 Texas A&M Rice -32.5 -28 -14%
16 Arizona State Colorado -16.5 -14 -15%
2 Oregon Wyoming -43.5 -34 -22%
21 Louisville East Carolina -14.5 -10 -31%
11 Notre Dame Purdue -30 -16 -47%
25 Brigham Young Houston -17 -8 -53%
12 UCLA Texas -8.5 -3 -65%
9 USC Boston College -17 23 -235%
6 Georgia South Carolina -6.5 9.5 -246%
17 Virginia Tech East Carolina -10 17 -270%

Spreads come from vegasinsider.com.

Florida State, Auburn, Michigan State, Wisconsin, Clemson, and Kansas State each had bye weeks and are not included in the table. LSU is the first team in three weeks to push against the spread. South Carolina exceeded predictions by the largest margin with their upset of Georgia. Virginia Tech had the worst performance after being upset by East Carolina.

Now that we are in the 3rd week of college football we have enough data for some statistics. Teams with byes this week are not included in this analysis due to insufficient data, and Georgia is also excluded since they had a bye last week. A single-factor analysis of variance (ANOVA) reveals no statistically significant difference in mean percent error among teams across the first three weeks of play (p=0.556). The results of the pairwise Tukey multiple comparisons can be found here. This means that, on average, all teams have performed in a similar manner relative to their expectations.

While there may not be any differences now, there might be some practical information to be taken from the data. Below are four boxplots to aid in understanding the data. Figure 2 shows the percent error for top 25 teams that have played at least three games. Typically, I would not show this sort of figure because the graph is crowded. However, notice how BYU’s average percent error dwarfs the other teams’. This is skewed by their week two game vs Texas: they were expected to lose by one point but ended up winning by a convincing 34 points. For now, that game is treated as an outlier and BYU is removed from the analysis. The remaining 17 teams are portrayed in Figures 3 through 5.

Figure 2 – Boxplots of Percent Error for AP Top 25 Teams that Have Played 3 or More Games

There are three important details in Figure 3. First, the percent error is relatively variable for every team except Baylor, which has a median error of 36% and has exceeded expectations in all three weeks. Next, notice that Alabama has so far been underperforming, without a single positive error rate in the first three weeks. Finally, LSU has met or exceeded expectations in all three weeks.

Figure 3 – Boxplots of Percent Error for Alabama, Arizona State, Baylor, Louisville and LSU

The data in Figure 4 are more straightforward. Of the six teams pictured, only Ole Miss is consistently winning against the spread, with percent errors of 120%, 105%, and 52% for weeks one, two, and three respectively. The percent errors for the other teams in Figure 4 are too variable for any discernible patterns.

Figure 4 – Boxplots of Percent Error for Missouri, Notre Dame, Ohio State, Oklahoma, Ole Miss, and Oregon

With the exception of UCLA, the percent error for the teams in Figure 5 is too variable. Each team has had weeks where they beat the spread and others where they did not. UCLA, however, has consistently failed to beat the spread, with a median percent error of -65%.

Figure 5 – Boxplots of Percent Error for South Carolina, Stanford, Texas A&M, UCLA, USC, and Virginia Tech

This week’s takeaways: if Baylor, LSU, and Ole Miss keep the same pace, expect them to cover the spread. Bet that Alabama or UCLA won’t cover if they continue the same pattern.

 

Important notes:

  1. Spreads can be found at vegasinsider.com.
  2. Percent error is calculated as (Spread-Actual)/Abs(Spread)
  3. I understand that spreads are typically used for gambling purposes and that the lines move. However, the spreads need to be reasonably accurate in order for the house or bookie to make money, which makes the lines a consistent source of weekly predictions.

Expectations and Realities: Week 2 AP Top 25 College Football Teams

Last week I reported how the top 25 teams performed relative to their expectations by comparing vegasinsider.com’s lines for the AP top 25 teams to the actual scores for that week’s games. Here is week 2’s comparison.

During the first week of college football, Washington was the only team to drop from the top 25; Louisville replaced them in the 25th spot. Because of this, Washington is not included this week.

In the table below you will find the 25 teams ranked in the AP poll, the predicted spread, the actual result, and the percent error. The teams are sorted by how well they performed against the spread for that game. Teams with a positive percent error performed better than expected while teams with a negative percent error underperformed. Note: this is not a measure of whether a team won or lost a game, but rather of how well the team won or lost relative to the line. For example, Texas A&M was predicted to beat Lamar by 46.5 points but outscored them by 70 points, performing about 51% better than predicted.

Relative Performance of Week 2 AP Top 25 College Football Teams

Rank Team Opponent Spread Actual %Error
16 Notre Dame Michigan -4 -31 675%
24 Missouri Toledo -3.5 -25 614%
14 USC Stanford 3 -3 200%
15 Ole Miss Vanderbilt -18.5 -38 105%
23 Clemson South Carolina State -34 -66 94%
4 Oklahoma Tulsa -24.5 -45 84%
12 LSU Sam Houston State -32 -56 75%
9 Texas A&M Lamar -46.5 -70 51%
17 Arizona State New Mexico -24.5 -35 43%
3 Oregon Michigan State -13.5 -19 41%
10 Baylor Northwestern State -46.5 -64 38%
5 Auburn San Jose State -34 -46 35%
25 Louisville Murray State -35.5 -45 27%
6 Georgia Bye 0 0 0%
2 Alabama Florida Atlantic -42 -41 -2%
18 Wisconsin Western Illinois -41 -34 -17%
21 South Carolina East Carolina -14.5 -10 -31%
7 Michigan State Oregon 13.5 19 -41%
1 Florida State Citadel -56.5 -25 -56%
20 Kansas State Iowa State -12 -4 -67%
11 UCLA Memphis -22.5 -7 -69%
22 North Carolina San Diego State -14.5 -4 -72%
19 Nebraska McNeese State -35.5 -7 -80%
13 Stanford USC -3 3 -200%
8 Ohio State Virginia Tech -10 14 -240%

Use the following chart to help visualize the data in the table.

Many teams played easy opponents during week 2, as you can tell by the number of 30+ point spreads. However, some games were predicted to be close. The most surprising win this week for a top 25 team was Notre Dame’s 31-0 shutout of Michigan. Notre Dame was favored by only 4 points, so a game expected to be close instead turned into a blowout.

The first 13 teams in the table exceeded expectations, Georgia had a bye (0%), and the bottom 11 underperformed. The top 3 overachievers this week were Notre Dame (675%), Missouri (614%), and USC (200%). The top 3 underperformers were Ohio State (-240%), Stanford (-200%), and Nebraska (-80%).

 

Important notes:

1. Spreads can be found at vegasinsider.com.

2. Percent error is calculated as (Spread-Actual)/Abs(Spread)

3. I understand that spreads are typically used for gambling purposes and that the lines move. However, the spreads need to be reasonably accurate in order for the house or bookie to make money, which makes the lines a consistent source of weekly predictions.

Expectations and Realities: Week 1 AP Top 25 College Football Teams

Week 1 of college football has come and gone. While we wait for the polls to update, let us take a moment and see how each of the teams in the AP top 25 poll performed relative to their predictions.

In the table below you will find the 25 teams ranked in the AP preseason poll, the predicted spread, the actual result, and the percent error. The teams are sorted by how well they performed against the spread for that game. Teams with a positive percent error performed better than expected while teams with a negative percent error underperformed. Note: this is not a measure of whether a team won or lost a game, but rather of how well the team won or lost relative to the line. For example, Oregon beat South Dakota 62-13, winning in convincing fashion. However, Oregon was expected to win by 53.5 points but won by 49, underperforming by 8.4%.

Relative Performance of AP Top 25 College Football Teams

Rank Team Opponent Spread Actual %Error
21 Texas A&M South Carolina 10 -24 340.0%
12 Georgia Clemson -9.5 -24 152.6%
22 Nebraska Florida Atlantic -20 -48 140.0%
18 Ole Miss Boise State -10 -22 120.0%
15 USC Fresno State -18.5 -39 110.8%
17 Notre Dame Rice -19.5 -31 59.0%
6 Auburn Arkansas -17 -24 41.2%
10 Baylor Southern Methodist -33 -45 36.4%
5 Ohio State Navy -13.5 -17 25.9%
13 LSU Wisconsin -3.5 -4 14.3%
8 Michigan State Jacksonville State -34.5 -38 10.1%
11 Stanford UC Davis -42.5 -45 5.9%
4 Oklahoma LA Tech -33.5 -32 -4.5%
20 Kansas State Stephen F. Austin -42 -39 -7.1%
3 Oregon South Dakota -53.5 -49 -8.4%
23 North Carolina Liberty -31 -27 -12.9%
14 Wisconsin LSU 3.5 4 -14.3%
24 Missouri South Dakota State -25.5 -20 -21.6%
19 Arizona State Weber State -46 -31 -32.6%
2 Alabama West VA -23 -10 -56.5%
7 UCLA Virginia -19 -8 -57.9%
1 Florida State Oklahoma State -20.5 -6 -70.7%
25 Washington Hawaii -17.5 -1 -94.3%
16 Clemson Georgia 9.5 24 -152.6%
9 South Carolina Texas A&M -10 24 -340.0%

I’ve added the following chart to help you visualize the data.

The first 12 teams in the table exceeded expectations while the bottom 13 underperformed. Texas A&M, ranked 21, tops the table, exceeding predictions by 340% while outscoring number 9 South Carolina 52-28. Correspondingly, South Carolina underperformed by 340%. Defending national champion Florida State struggled in their opener, underperforming by 70.7% against Oklahoma State. The Georgia/Clemson and LSU/Wisconsin games were also notable, as Clemson and Wisconsin were the only other ranked teams to lose their openers. The Georgia/Clemson game had an error of 152.6% while the LSU/Wisconsin game had an error of 14.3%. While Georgia trumped Clemson with a second-half shutout, LSU rallied in the second half to edge past their spread by half a point against Wisconsin.

On average, the ACC underperformed by 78.8% while the SEC exceeded expectations by an average of 31.3%. Below are the conference averages.

ACC      -78.8%
PAC 12   -12.8%
Big 10    18.0%
Big 12    30.1%
SEC       31.3%
IND       59.0%

Important notes:

1. Spreads can be found at vegasinsider.com.

2. Percent error is calculated as (Spread-Actual)/Abs(Spread)

3. If you like what you see here (or don’t) let me know below. You can also point out mistakes or criticize the article.

The Truth About Your Ice Bucket Challenge Donations

If you are reading this, you have probably heard of the ice bucket challenge. In short, you get nominated to take the challenge. Once nominated, you have two options: either donate $100 to the Amyotrophic Lateral Sclerosis Association (ALSA), or pour a bucket of ice water over your head, donate $10 to the ALSA, and nominate three more people to take the ice bucket challenge. “Amyotrophic lateral sclerosis (ALS), often referred to as Lou Gehrig’s Disease, is a progressive neurodegenerative disease that affects nerve cells in the brain and the spinal cord.” It is a terrible disease that ultimately results in death. Donating money to this charity sounds like a good cause.

There are some critics, with articles and videos claiming that the ALSA does not spend the money correctly. After coming across this dissent, I became curious and decided to investigate. How does the ALSA spend its money, and is that spending appropriate? Let us find out.

In this video the author says that less than 8% of the 2012 ALSA expenses went to research. The 2012 ALSA annual report (see page 12) appears to confirm this claim: in the consolidated financial summary, 7.71% of expenses went toward research. I found it interesting that the summary is accompanied by the comment “The consolidated summary has not been audited or reviewed by the auditors and is not part of their financial reports.” and decided to dig deeper. I found a discrepancy: the consolidated financial summary reports a “total combined revenue” of $55,446,772, but total expenses for 2012 are reported as $15,435,227. I could not reconcile the numbers in this report. Feel free to comment if you can.

Using the expenses for 2012, we see an entirely different situation. ALSA spent $3,904,240, or 25.3% of their 2012 expenses, on research. In addition, ALSA spent $4,629,111 (30.0%) on patient and community services, $1,859,100 (12.0%) on public and professional education, and $3,269,624 (21.2%) on fundraising. In total, ALSA spent $13,662,075, or 88.5% of their 2012 expenses, on research, fundraising, or ALS awareness, leaving 11.5% for overhead. Put another way, in 2012 roughly 88 cents of every dollar spent by the ALSA went toward ALS research, awareness, or fundraising.

We find a similar trend for 2013. In 2013 the ALSA had total expenses of $25,737,701, 66.7% more than in 2012. Of that, ALSA spent $6,616,367 (25.7%) on research. So while the ALSA spent a similar proportion on research, the dollar amount spent on research increased in 2013. Additionally, 91.5% of ALSA spending in 2013 went toward research, fundraising, or ALS awareness, leaving only 8.5% for overhead.

The trend continues for the year ending in 2014. In 2014 the ALSA had total expenses of $26,204,122, of which $7,170,481 (27.4%) went to research, 1.7 percentage points more than in 2013. Additionally, 92.7% of ALSA spending in 2014 went toward research, fundraising, or ALS awareness, leaving only 7.3% for overhead.
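
If you want to check these percentages yourself, here is a quick Python sketch using the expense figures quoted above:

```python
# Recompute the research share of ALSA expenses from the annual-report figures.
expenses = {  # year: (total expenses, research spending), in USD
    2012: (15_435_227, 3_904_240),
    2013: (25_737_701, 6_616_367),
    2014: (26_204_122, 7_170_481),
}

for year, (total, research) in expenses.items():
    print(f"{year}: {research / total:.1%} of expenses went to research")
# Output: 25.3% (2012), 25.7% (2013), 27.4% (2014)
```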

Of course, this doesn’t even begin to address the money and awareness raised by the ice bucket challenge: the ALSA has raised $79.7 million as of August 25th. You can rest assured knowing that, for the most part, your donations are being put to good use. But don’t just take my word for it. The ALSA meets all 20 of the Better Business Bureau’s standards for charity accountability, and Charity Navigator gives them a 4-star rating.


Why AT&T Next is a Win/Win for Both Consumers and AT&T

Edit: I made an arithmetic mistake in the original article that inflated the savings of the new AT&T Next plans.

AT&T rolled out its new no-contract payment plan called AT&T Next. In this model AT&T no longer subsidizes the cost of the phone in exchange for a 2-year contract; consumers now pay the full price of the phone. I am going to tell you why that is a good thing.

AT&T will let you pay for your new phone in monthly installments added directly to your bill. There is no down payment, activation fee, upgrade fee, or financing fee (interest). The total cost of the phone divided by the number of months financed is added to your bill.

There are two AT&T Next options—Next 12 and Next 18. With Next 12 the cost of the phone is split over 20 months, with the option to upgrade again at 12 months. After 12 months you have two ways to upgrade: trade in your current working phone for the next phone you want, or pay off the balance on your current phone and upgrade. Then the process starts again and you’re paying on a new phone. With Next 18, the cost of the phone is split over 24 months with the option to upgrade at 18 months.

Of course, you don’t have to upgrade. After your phone is paid in full, you can go month to month on your payments while maintaining full ownership of your phone.

I talked to an AT&T representative about this, and his response indicated that it is up to the individual AT&T representative whether your phone is in good enough condition to be traded in. However, he did mention that if it powers on and is not cracked, it should be accepted. Dents, scuffs, and general wear and tear are acceptable.

At this point you must be thinking, “Why would I want to pay full price for a phone I was getting at a subsidized price?” AT&T thought the same thing and now offers a $15 monthly discount for data plans under 10GB and a $25 monthly discount for data plans of 10GB and above. This monthly discount, effectively a subsidy, is why the new AT&T Next is good for you and your wallet. Let me explain. (Assumption: you like having a relatively new phone as often as you can afford it. If you still have a Motorola Razr or a Nokia brick, then you’re probably not reading this article anyway.)

With the old 2-year contract plan you could buy a phone at a subsidized price, pay an activation fee, and be on your way. After 20 months you are eligible for a phone upgrade if you sign another 2-year contract. You get a subsidized phone price, but you can only get that price once every 20 months. In the table below, you can see the associated cost comparisons for the old 2-year contract model and the new Next 12 and Next 18 models.

First, let us look at the first month’s costs for each plan. With the old 2-year contract model you would pay the subsidized cost of the phone plus an activation fee. For new flagship phones (e.g., a Samsung Galaxy or iPhone) the subsidized cost was typically $200, and the activation fee is approximately $45, so the first month’s costs for a 2-year contract are approximately $245 plus any other fees and taxes. The first month’s (net) costs for AT&T Next are $17.50 for Next 12 and $12.08 for Next 18, plus taxes. For the first month of your new Next plan you will pay approximately $230 less than under your old 2-year contract.

Now let us look at the monthly costs after the first month. With Next 12 you pay $17.50 more each month, and with Next 18 you pay $12.08 more each month, than under the 2-year contract plan. If you keep your phone until it is paid off, you will pay $105 more with Next 12 and $45 more with Next 18. However, if you take advantage of the 12- and 18-month trade-ins, you come out $35 and $27.56 ahead with Next 12 and Next 18 respectively.
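
Here is a small Python sketch of that arithmetic. It assumes a $650 flagship phone (the $200 subsidized price plus the roughly $450 subsidy mentioned below) and the $15 monthly Next discount; the article’s $27.56 figure reflects the rounded $12.08 monthly payment.

```python
# Compare the cost of AT&T Next 12/18 against the old 2-year contract model.
PHONE_PRICE = 650.00                # assumed full retail price of a flagship phone
CONTRACT_UPFRONT = 200.00 + 45.00   # subsidized phone price + activation fee
NEXT_DISCOUNT = 15.00               # monthly bill credit on a Next plan

def next_extra_cost(months_financed: int, months_paid: int) -> float:
    """Extra paid on Next vs. the 2-year contract after `months_paid` installments."""
    monthly_net = PHONE_PRICE / months_financed - NEXT_DISCOUNT
    return monthly_net * months_paid - CONTRACT_UPFRONT

print(next_extra_cost(20, 20))  # Next 12, kept to payoff:  +105.00
print(next_extra_cost(24, 24))  # Next 18, kept to payoff:  +45.00
print(next_extra_cost(20, 12))  # Next 12, trade in at 12:  -35.00 (ahead)
print(next_extra_cost(24, 18))  # Next 18, trade in at 18:  -27.50 (ahead)
```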

This isn’t a lot of savings. In fact, if you wait too long to trade in your phone, you will end up paying more than you would have under the old 2-year contract model.

There are some perks whose value will vary from person to person. For starters, there is no large down payment: you can get your new phone today without a down payment and without any financing fees or interest. When money is tight, people tend to avoid spending a lot of money at once, and it is pleasant not to be charged more because you can’t pay in full up front. Finally, there is the value of being able to upgrade every 12 months. If you are savvy at selling things online, you could sell last year’s model and turn a profit.

The new AT&T Next plans will save you a couple of bucks. But how does AT&T benefit from this model? Simple: AT&T is no longer subsidizing the other $450 of your new iPhone or Samsung, which is more than the monthly discount it gives back. And if you trade in your phone on your next upgrade, AT&T can sell it as a refurbished phone.

In summary, the new AT&T Next plans can save you up to $35 a year if you upgrade to a new phone every 12 months. It can also cost you as much as $105 over 20 months if you choose not to upgrade. It isn’t a lot, but every dollar counts. With T-Mobile’s Jump and AT&T’s Next installment plans, we might be seeing the start of a shift in how we buy phones.

Is 2014 Job Creation Truthfully Faster in States that Raised the Minimum Wage?

I came across the Huffington Post article New Analysis Debunks Claim That A Higher Minimum Wage Kills Job Growth and was intrigued. I am a huge fan of statistics and data, so I was expecting a complex statistical analysis of employment data comparing states with minimum wage increases to those without. I was sadly disappointed.

While the Huffington Post article is what I stumbled upon, the data originally come from 2014 Job Creation Faster in States that Raised the Minimum Wage on the Center for Economic and Policy Research (CEPR) blog. CEPR compared the employment growth (data collected from the Bureau of Labor Statistics) of all 50 states and Washington D.C. over the last five months of 2013 (August through December) versus the first five months of 2014 (January through May). On January 1, 2014, 13 states raised the minimum wage. Of those 13 states, four (Connecticut, New Jersey, New York, and Rhode Island) passed legislation to raise the minimum wage, while the other nine (Arizona, Colorado, Florida, Missouri, Montana, Ohio, Oregon, Vermont, and Washington state) raised it automatically at the beginning of the year due to inflation indexing.

The chart below was compiled by CEPR and shows the percent change in employment by state. This graph is the only semblance of data analysis. Three major observations stood out to me. First, when states are ranked in order of decreasing employment growth, states with minimum wage increases are mixed in with states without; there is no clear separation between the two groups just from looking at the graph. Second, almost all states had positive employment growth (43 of 51). Finally, two states with no minimum wage increase had the greatest employment growth, while one state with a minimum wage increase had the greatest employment decline. These are my subjective observations, and others might see things differently.

What did the author say? You might have a good guess if you read the titles of the aforementioned articles. Huffington Post said that this data “debunks” the idea that higher minimum wage kills jobs. CEPR said that job creation is faster in states with minimum wage increases. How did they come to this conclusion?

The author concluded that states that increased the minimum wage had faster job growth by comparing the mean job growth of states with and without minimum wage increases. The author reports:

The average change in employment for the 13 states that increased their minimum wage is +0.99% while the remaining states have an average employment change of +0.68%.

This must be the case, right? Basic arithmetic tells us that 0.99% employment growth is greater than 0.68% employment growth. Unfortunately, basic arithmetic is not the correct tool for determining whether two means are different; statistics provides a whole host of tools for that. In this case, with two groups (states that increased the minimum wage and states that did not), a t-test is the correct tool to compare the mean employment growth.
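
For the curious, here is a minimal sketch of how such a test can be run in Python with SciPy. The growth numbers are illustrative placeholders, not the actual BLS figures, and Welch’s t-test (which does not assume equal variances) is one reasonable choice:

```python
# Two-sample t-test on mean employment growth between the two groups.
# Placeholder data; the real comparison covers 13 and 38 states.
from scipy import stats

raised_mw = [1.2, 0.8, 1.1, 0.5, 1.4, 0.9]   # % growth, minimum wage raised
no_raise  = [0.9, 0.3, 1.0, -0.2, 0.8, 0.6]  # % growth, no increase

t_stat, p_value = stats.ttest_ind(raised_mw, no_raise, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # the article reports p = 0.2135
```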

What does the t-test tell us? Not much. The t-test reveals that the mean employment growth for states with a minimum wage increase is not statistically different from the mean for states without one (p=0.2135). The figure below shows a boxplot for both groups.

I was curious and broke the minimum wage group down further into two groups: one that passed minimum-wage-specific legislation and one whose increases were due to inflation indexing. The correct statistical test here is an analysis of variance (ANOVA), since we now have three groups of employment growth. The ANOVA finds no statistically significant effect of minimum wage laws on mean employment growth (p=0.0543), though the p-value sits just above the conventional 0.05 threshold. The figure below shows a boxplot for each group.
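
Extending the same sketch to three groups simply swaps the t-test for a one-way ANOVA (placeholder data again):

```python
# One-way ANOVA across legislated, inflation-indexed, and no-increase groups.
from scipy import stats

legislated = [1.3, 0.7, 1.5, 0.9]             # 4 states in reality
indexed    = [1.0, 0.8, 1.2, 0.4, 1.1, 0.9]   # 9 states in reality
no_raise   = [0.9, 0.3, 1.0, -0.2, 0.8, 0.6]  # 38 states in reality

f_stat, p_value = stats.f_oneway(legislated, indexed, no_raise)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")  # the article reports p = 0.0543
```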

The t-test and ANOVA results reveal no statistical difference in employment growth between states with minimum wage increases and those without. The boxplots help visualize the variability in employment growth, and the heavy overlap between the groups visually reinforces the lack of a detectable difference.

Practically, what does this mean? Not a whole lot. The data do not support CEPR’s claim that increasing the minimum wage causes faster job creation; they do reveal that job growth is not solely dependent on minimum wage laws. This makes sense to me. In 2012, only 1.1% of workers made minimum wage, meaning more than 98% of jobs pay more than minimum wage. Each state also has many other economic policies that strongly affect its local economy. To see the effect of minimum wage on employment growth, historical data would need to be analyzed for each state before and after a change in minimum wage legislation.
