
Operation Warp Speed demonstrated that vaccine development need not occur on a decadal time scale. Americans got vaccines in less than 12 months because the Food and Drug Administration (FDA) was pushed out of the way. The FDA has returned to business as usual, privileging bureaucratic rules over the well-being of Americans.

Consider some recent FDA headlines. In February, the agency shut down an Abbott Laboratories factory making baby formula over bacterial contamination. A job well done.

Yet the shutdown predictably led to a formula shortage and the FDA failed to communicate this to the Biden administration until panicked parents were desperately scouring stores. We could have relaxed tariffs and FDA labeling requirements on European formula, which the House and Senate have now done, before the shortage became acute.

The FDA has thwarted containment of the monkeypox outbreak. An effective vaccine exists, and one million doses sat in a warehouse in Denmark awaiting shipment, but unfortunately the warehouse was new. The FDA insists on inspecting every warehouse from which drugs are shipped. The European Medicines Agency had inspected the warehouse and judged that it met European and American safety standards. Better Americans get sick than we trust Europeans.

The FDA approval process is slowing development of vaccines against multiple COVID-19 variants. Instead, we are using vaccines tailored to the original 2020 strain. According to Stripe CEO Patrick Collison, “In our view it is probably true that, with competent execution, we could roll out pan-variant COVID vaccines before the end of 2022. … Not having pan-variant vaccines in 2022 is best thought of as a choice.”

The FDA is finally considering over-the-counter sales for at least one birth control pill. Women would no longer have to visit a doctor, greatly reducing cost and improving access.

In addition to Operation Warp Speed, the Trump administration also championed Right to Try legislation for drugs. The FDA is actively subverting this law. Here is pharmaceutical CEO Vivek Ramaswamy’s description: “The agency absolutely hates the Right to Try law. … [N]o rational company wants to alienate the FDA, even if that means giving a cold shoulder to Right to Try.”

Such “accomplishments” are not new. The regulation of pharmaceuticals has been a disaster. Congress delegated this power to the agency in two steps: authority to determine whether drugs were safe in 1938, and the power to regulate efficacy in 1962. The FDA in 1962 also gained authority over the testing protocols for new drugs. Today only drugs the FDA declares safe and effective can be legally sold in the U.S.

Demonstrating efficacy drives up the cost of and lengthens the approval time for new drugs. The delays have been deadly. Justification for this strong statement comes from drugs approved for use first in Europe and eventually by the FDA. The illnesses and deaths while Americans waited on the FDA are attributable to the delay.

Fears of drug companies pushing modern-day snake oil on an unsuspecting public motivate efficacy regulation. This fear is not unreasonable. Drug companies might also co-opt doctors through financial incentives for prescribing or boastful advertising.

Yet beginning with Sam Peltzman’s seminal study, economists have found these fears unfounded. Hospitals and insurers provide a check on exaggerated claims about drugs. But our government-structured market, based on an abiding distrust of markets and profits, reduces life-saving innovation and makes insulin so expensive in the U.S.

FDA regulation is bad economics but also immoral. The use of whatever pharmaceuticals we wish to buy for ourselves should be a human right. Right to Try correctly describes medical choice as a right. All drugs proven safe should be available over the counter; prescriptions demonstrating medical necessity would trigger insurance coverage. Drug companies would price sleep aids and pain meds to be affordable for the self-medicating.

Operation Warp Speed gave us a glimpse of what biomedical research freed from governmental control can accomplish. The FDA is reverting to its normal, bureaucratic, foot-dragging ways. Americans should not have to die or suffer from bureaucratic delays.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

Major League Baseball’s moving the 2021 All-Star Game from Atlanta over Georgia’s new voting law symbolizes businesses’ new willingness to take sides on political issues, typically the progressive side. Businesses previously avoided offending potential customers or employees. Selling to both Republicans and Democrats maximizes profit!

Vivek Ramaswamy explores the causes and consequences of “woke” business in Woke Inc: Inside Corporate America’s Social Justice Scam. The book offers numerous valuable insights and creative analyses. Mr. Ramaswamy is the child of immigrants from India who grew up in Ohio. He attended Harvard for undergrad and Yale Law School and worked in pharmaceuticals including as CEO of Roivant Sciences before stepping down in 2021.

Progressive business leaders emphasize “stakeholders” (workers, suppliers, communities, etc.) over stockholders, the owners. Mr. Ramaswamy sees stakeholderism as a ploy. A CEO serving many masters need not follow orders from any: “By becoming accountable to literally everyone, they become accountable to no one.” Managers with a fiduciary duty to the stockholders can be held accountable.

Many companies cultivate glowing reputations to cover their misdemeanors. Mr. Ramaswamy highlights Volkswagen, hailed as the world’s most sustainable automaker. “Clean diesel” cars briefly made VW the world’s top automaker. Except the company was using “defeat devices” to cheat on emissions tests.

The author views finance’s wokeness as an arranged marriage. The financial crisis bailouts sparked Occupy Wall Street and big fines from the federal government. Goldman Sachs and others led on wokeness to deflect attention from their misdeeds.

Employees often push wokeness. Mr. Ramaswamy notes that Roivant’s Ivy League grads arrived woke. The New York Times’ newsroom staff have similarly staged numerous woke revolts. Leftists view everything as political, so making all institutions advance progressive social goals fits the game plan.

Mr. Ramaswamy believes that corporate politics seriously threatens our democracy. Politics – based on one person, one vote – should decide questions like inequality or climate change. CEOs have excessive influence when corporations do politics. Making businesses maximize profit was “about protecting the rest of society from a Frankensteinian corporate monster.”

Woke business involves firings for politically incorrect views. Google fired engineer James Damore in 2017 for questioning its gender equity hiring policy. Just this month software company Outreach fired Griffin Green for his TikTok videos. Sexist and racist behaviors can disrupt workplaces; as a free market economist, I grant managers significant discretion on company business. But managers seem to be placating the social media mob while performing scant due diligence.

What to do about this? Mr. Ramaswamy suggests using existing laws against religious discrimination: “[S]ince wokeness is a religion, employers can’t impose it upon their employees.” Does “wokeness” qualify as religion? Columbia University’s John McWhorter provides a strong affirmative argument in Woke Racism.

Social media censorship is another element of woke business. Let’s grant social media bias against conservatives. Again, the question is what to do. I believe that only governments can censor. A media company only denies a speaker the use of its platform and cannot prevent the speaker from using other platforms.

Mr. Ramaswamy offers another intriguing suggestion. Criminal law holds that the police cannot have a private party conduct an otherwise illegal search on their behalf. If Twitter and Facebook act as government agents in censoring political speech, existing law can again address the problem.

The problem of woke business, I think, goes beyond an opportunistic scam. Corporate America’s use of wokeness as a cover for making profit likely stems from an unwillingness or inability to defend the morality of business. Notre Dame’s James Otteson observes how many people believe that business should “give back.” But only those who have done wrong must give back. Professor Otteson argues that “Honorable business is neither morally suspicious nor even morally neutral: it is a positive creator of both material and moral value.”

Wokeness will not protect business for long because progressives, as Mr. Ramaswamy notes, are hostile to business. This arranged marriage will not produce lasting bliss for corporate America.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

In one of the final decisions of a momentous term, the Supreme Court halted the replacement of coal-fired power plants in West Virginia v. EPA. The decision constitutes a major victory for representative government.

Several states and power companies challenged the EPA’s 2015 Clean Power Plan (CPP), which was forcing the early retirement of coal-fired plants. The story starts in 2009 with President Obama’s “cap-and-trade” legislation for the electric power industry to fight climate change. After the legislation failed in the Senate, President Obama directed the EPA to enact cap-and-trade via regulation.

To do so, the EPA changed the meaning of pollution control technology in the Clean Air Act. Previously, pollution control meant plants could still operate with emissions-reducing measures; the EPA sought to replace plants with facilities emitting less carbon dioxide. The court ruled that the Clean Air Act did not give the EPA authority to restructure the electricity industry. Such action would be a “major question.” Here is the Congressional Research Service’s description of the major questions doctrine: “The Supreme Court has declared that if an agency seeks to decide an issue of major national significance, its action must be supported by clear statutory authorization.”

I completely agree with this philosophy. Legitimate government rests on the consent of the governed. Meaningful consent must be tangible and closely tied to the government power in question. A dictator can always claim popular consent, with the secret police generating displays of support.

In America, consent comes from our elected representatives passing legislation. Reinterpreting existing laws to grant new authority is dictatorial. The same government philosophy justified the Centers for Disease Control claiming control over rental housing for its eviction moratorium.

Even considering the big picture – climate change – I see the Supreme Court as correct. The Clean Air Act dealt with pollutants directly causing harm: smokestacks and auto tailpipes putting out chemicals that produce smog. The link between carbon dioxide emissions today and climate changes decades from now is indirect and almost entirely (and necessarily) based on computer models. Climate change differs enough from smog to require separate consent from the people.

Blue state liberals do not see it this way. Massachusetts Senator Elizabeth Warren said, “Our planet is on fire, and this extremist Supreme Court has destroyed the federal government’s ability to fight back. This radical Supreme Court is increasingly facing a legitimacy crisis, and we can’t let them have the last word.”

This is nonsense. The court did not say Washington could not reduce greenhouse gases or close coal-fired power plants, only that Congress must authorize this. If most Americans truly believe that climate change is an existential threat, the House and Senate should be able to pass legislation, even over a Senate filibuster.

Discovering authorization for major policies through decades-old laws undermines self-governance and rejects the moral equality of the citizens of a free nation. Citizen participation in regulatory rulemaking is negligible. Proposed regulations have a public comment period and regulatory agencies must “respond” to comments. But agencies can proceed despite negative comments.

Citizens of a free nation should respect each other. A citizen unable to convince her fellow citizens through reasoned argument of the propriety of government action is obligated to respect this disagreement. Genuine consent must also be voluntary. Verbally abusing or firing dissenters from their jobs or censoring information violates genuine consent.

Enacting climate change policies in quasi-authoritarian fashion will almost certainly prove self-defeating. Reducing the future costs of global warming will require consistent application of policies for decades. Sustaining such policies over time requires the considered consent of most Americans.

Politicians who feel justified in imposing policies absent the genuine consent of the governed must, it seems to me, view themselves as superior to others. Their opinions count more than those of others and are never the ones in error. Elitist politicians should not govern a free country.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

We celebrate America on the Fourth of July. But do Americans today still respect each other enough to constitute a functioning country? Responses to the Supreme Court’s decision to overturn Roe v. Wade illustrate this animosity and the challenge we face maintaining liberal democracy.

The Supreme Court did not outlaw abortion in Dobbs v. Jackson, but rather returned it to the states and their legislatures. States can protect abortion in their constitutions. America will now address abortion through federalism.

America faces bitter partisan division. Reactions to Dobbs reflect sentiments revealed in polls. A Pew Research poll found that 70% of Democrats and 62% of Republicans highly engaged in politics said that the other party makes them “afraid.” A Yahoo News/YouGov poll found that 25% of Republicans and 23% of Democrats chose “a threat to America” as the phrase that “best describes people on the other side of the political aisle from you.”

A nation is a cooperative venture. Defending its citizens and territory is any nation’s most fundamental task. National defense involves shared sacrifice for a common goal. Who will sacrifice for the freedom of persons they despise? Contributions to the common good must flow from mutual respect and goodwill. Yoram Hazony nicely describes how citizens should feel toward each other in The Virtue of Nationalism: we suffer others’ hardships as our own and share in their joy and happiness.

Contempt toward other citizens threatens America as a liberal democracy. Liberalism here refers to the classical sense and not leftist politics; liberalism’s core values are the moral equality of all and respect and tolerance for others. Democracy involves shared governance and one person, one vote.

Our discourse today displays neither tolerance nor respect. When Blue and Red America see each other as immoral, they deny moral equality. Neither side will likely let persons they fear as threats govern after an election.

Political liberalism and the rule of law enabled peaceful coexistence and cooperation. These institutions established ordered liberty and domestic peace, unleashing the Great Enrichment and a roughly three thousand percent increase in standards of living.

Peaceful coexistence requires compromise and mutual accommodation. Abortion’s two seemingly irreconcilable views — a woman’s right to choose and the defense of life – provide a formidable challenge. And a nontrivial number of Americans seemingly view violence as justified to either end mass murder or defend bodily integrity.

Is our country destined to be torn apart? Not necessarily. Decades of religious warfare across Europe weighed heavily on America’s founders. The First Amendment’s separation of church and state was their compromise, enacted not because religion was unimportant, but because people would fight over religion.

The founders’ compromise let everyone worship as they chose in exchange for not forcing their views on others. It worked because Americans decided that religious freedom was more important than dictating to others.

Middle ground almost always exists once we look carefully. Some arises from the costs of enforcing laws. If some states severely restrict abortion, “medical tourism” to states where it remains legal will be possible; Justice Kavanaugh’s concurring opinion in Dobbs warned that state travel restrictions would be unconstitutional. Just how far will opponents go to enforce restrictions?

Potential for compromise also emerges when people carefully scrutinize their beliefs. Do opponents view abortion as a surgical procedure? What exactly does a meaningful right to choose for women entail? Might making birth control and “abortion” pills widely available offer a compromise?

Returning abortion to legislatures means that we, through our elected representatives, must grapple with these questions. Federalism also enables compromise. Nationally Americans are evenly split on abortion, but enormous variation exists across states. When states decide policy, Alabama and New York can set policies reflecting the differing views in each state.

Demanding total submission to your views basically initiates civil war when opponents prefer fighting to submission. Recognizing our moral equality is crucial here. If Blue and Red America insist on the right to force their views on others, we may choose conflict over cooperation.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

The reborn United States Football League has completed its first regular season. USFL players made $4,500 per game, or $45,000 for the season.

By contrast, top NFL players make one thousand times more than this. How can such salary differentials exist across the same profession?

Pro athletes earn what economists call rent, a concept first detailed by 19th Century British economist David Ricardo. A rent is a payment exceeding the minimum a person requires to work at a job. Any factor of production in the economy can earn rent.

The factors affecting the minimum someone requires to play football, the reservation salary (or wage), illustrate how the USFL can pay so much less. The most important consideration for the USFL is whether a player can play in the NFL. With an NFL minimum salary of $610,000, no current NFL player should choose to play in the USFL instead.

Top college players earn more than $45,000 from Name, Image, and Likeness (NIL) deals and should not leave early for the USFL. Enormous skill differences exist across potential pros. Players better than the “replacement” level will command higher salaries.

Some USFL players may get NFL tryouts based on their play, much as minor league baseball players accept low pay for a shot at the majors. The opportunity to advance to the NFL is part of the USFL’s compensation. The original USFL “discovered” players who later played in the NFL, like undersized linebacker and 2022 Hall of Famer Sam Mills.

The attractiveness of jobs also matters. People will work for less in desirable jobs or desirable locations. Fame and popularity make being a pro athlete very desirable before considering the money.

Past pay and expectations also affect reservation salaries. Green Bay’s Aaron Rodgers has made hundreds of millions of dollars in his career and likely would not play another year for $1 million. Ratcheting salaries down once players expect to be paid millions would be extremely difficult.

The NFL reservation salary for talented young players, if we could erase all expectations of playing for millions, is close to zero. Why then do NFL teams pay an average salary of $2.8 million? Two forces explain this. The first is revenue; the NFL generates more than $10 billion annually. Playing at a level that produces this fan interest and revenue requires talented players, who generate tens of millions of dollars for their teams.

So, teams can afford eight-figure salaries. But why do owners, who like money, pay more than the reservation salary? Competition for talent, as in all labor markets, drives salaries up to the level justified by revenues. Successful teams generate more revenue and owners often value winning; winning requires top players.

Sports salaries are almost entirely rent. Owners and players will maneuver to capture these rents for themselves. Competition is greatest when players can negotiate and sign with any team, that is, under free agency, which owners oppose.

For nearly 100 years, baseball’s reserve clause prevented free agency. The reserve clause let teams automatically renew a player’s contract for the next season for the same or higher salary. Teams always renewed good players’ contracts.

Sports illustrate how advancing one’s personal interest sometimes requires collective action. Absent free agency, a player’s only bargaining leverage is refusing to play.

While the loss of a star player can cripple a team, when players jointly withhold their services – go on strike – they can shut a league down. Unions help organize collective action by workers. Owners must also cooperate to prevent bidding for players.

The reserve clause worked only because no owner tried bidding players away from other teams with higher salaries. After baseball established free agency, the players periodically accused owners of collusion, or agreeing not to bid aggressively for free agents.

Like all activity in a market economy, everyone involved in pro sports must voluntarily participate. Since many kids dream of being pro athletes and sports generate billions in revenue, voluntary participation is not a problem.

But sports generate enormous rents and competition to capture these rents ensures enduring strife between labor and management.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

The February shutdown of an Abbott Laboratories plant in Michigan due to contamination precipitated the nationwide baby formula shortage. The plant finally resumed production this month. Whether these events reflect corporate greed or bureaucratic bungling illustrates why we so often disagree about policy.

Let’s start with some facts. Abbott is one of the four largest formula producers and makes the Similac and Elecare brands. The Food and Drug Administration (FDA) closed the plant after investigation of a whistle-blower’s report confirmed poor sanitation and Abbott recalled potentially tainted formula. Contaminated formula has been linked to two deaths. The resulting shortage has left parents scrambling madly.

What do these facts show? In one view, Abbott put profits ahead of babies. As the left-wing Guardian observes, “The embattled baby formula producer Abbott used windfall profits to enrich investors instead of replacing failing equipment that was likely injecting dangerous bacteria into its infant nutritional products.” The “prioritization of shareholder wealth” shows the “rot in the nation’s economic system.”

Abbott’s actions demonstrate the need for consumer protection. But policing corporate greed is hard. The regulations voters enact must be enforced. Companies lobby to keep regulators’ budgets small, resulting in too few and poorly paid inspectors, who then curry favor with companies in hopes of getting hired for better salaries. Companies “capture” the protectors.

An alternative view starts with bureaucratic bungling. The FDA mailroom took four months to get the whistleblower’s report to the proper office. After ordering the shutdown, the FDA did not tell the Biden Administration about the impending shortage.

Bad policy also contributes. A complicated tariff-quota system protects U.S. formula makers from European competition. FDA labeling requirements, not safety concerns, keep this formula from being legal in the U.S. These policies could have been waived when Abbott’s plant closed, but were not. President Biden eventually sent military planes to retrieve formula from Europe.

Government regulation of food safety is poor policy rooted in Upton Sinclair’s portrayal of the industry in The Jungle. Yet all purchases in a market economy are voluntary, so how does sickening or poisoning customers yield long-term profits? Food processors would still face litigation without the FDA, and insurance companies would cover these losses. Insurers (and consumers) would demand assurance of quality, perhaps from a third party like Underwriters Laboratories.

Quality assurance requires accountability. Who at the FDA will lose their job for misplacing the whistleblower’s report or not alerting the rest of Washington? Politicians pass laws and regulations appearing to protect Americans without truly delivering.

Which side is correct? Difficulty determining this arises largely from how world views shape our thinking. Economist Thomas Sowell calls such views “visions” and sees most policy disagreements as arising from two conflicting visions, the constrained and the unconstrained.

Visions are simple but often form the basis for theories, labeled paradigms by Thomas Kuhn. Paradigms shape inquiry in all fields, including economics. I would distinguish economics’ two main world views or paradigms as markets are incredibly complex and work amazingly well, versus markets frequently fail and enlightened economists can improve outcomes. Although sounding like cover for laissez-faire or government activism, most economists see these as describing how the world works, with policy advice following.

Paradigms are often difficult to test against each other. Economics cannot conduct controlled, society-wide experiments. Consequently, evidence like the events at Abbott never conclusively demonstrates markets failing or working. No simple test will ever confirm which worldview is more accurate.

When paradigm shifts occur, they do not typically involve one side conceding. As Professor Kuhn demonstrated, revolutions occur as young scholars recognize the new paradigm’s superiority and the old paradigm’s proponents retire. This reflects another reality: once we embrace a worldview, abandoning it is very difficult.

Getting the facts correct is always important. But disagreements arise from interpretation through different world views. Listen to economists with different world views describe events and decide for yourself which view makes more sense.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

The killing of 19 students and two teachers at Robb Elementary School in Uvalde, Texas, has outraged Americans. The malfeasance of law enforcement during the tragedy is highly disturbing and demands reforms.

Police officers reportedly waited outside the classroom for over an hour. The commanding officer evaluated the situation as a “barricaded shooter,” not an “active shooter,” which would have called for immediate entry.

A Border Patrol SWAT team finally entered and killed the assailant. Quicker action might have saved some victims. Economics counsels that there are no solutions in this world, only tradeoffs.

Ideally no one would ever try to kill children at a school. Unfortunately, evil exists. We can only manage, not eliminate, school shooting risk.

Let’s start with the frequency of school shootings. One frequently cited database tracks all school gun violence, like students getting in an argument and shots being fired. Such events differ enormously from Columbine, Newtown or Uvalde.

Bradley Thompson of Clemson University has compiled a list I will use. While school shootings seemingly happen all the time, Professor Thompson counts 14 events and 109 deaths since 1997.

Can we reduce this further, perhaps with better anger management? Over the past 25 years, over 100 million people have gone through high school (not all graduated). Sixteen individuals perpetrated the 14 shootings, or about one in every 6 million students. The overwhelming majority of young people learn to control their anger.

Hardening schools is another possibility. America has 100,000 public and 30,000 private schools, so only one in 10,000 schools has experienced a mass shooting in 25 years. Many hardened schools will never face an armed intrusion. A teacher reportedly propped open the door the Robb Elementary shooter entered.

Locked doors will inconvenience teachers and students thousands of times for every intruder stopped. The infrequency of shootings challenges the human capacity for diligence.

The Uvalde assailant, like many school shooters, had no criminal record and no reported mental health incidents. Most shooters are not juvenile delinquents. I doubt psychologists can identify the one in six million in advance.

America has over 400 million guns, far more per capita than any other nation. Given all these guns, other countries’ gun control laws will work differently here. Even if we repeal the Second Amendment, Americans wishing to do evil will likely obtain guns. We likely must react to these rare events.

Experts stress the need for an immediate response by the first officers on the scene. Unfortunately, the dawdling at Uvalde was not unprecedented. The delay was 47 minutes at Columbine in 1999 and 58 minutes at Marjory Stoneman Douglas High School in Florida in 2018.

After earlier shootings, experts recommended putting police officers in schools. An officer offers a chance to stop an incident before it starts. Not perfect protection: an officer will not always prevail against a well-armed assailant. Yet delaying a perpetrator might enable locking school doors and arrival of other officers.

Taxpayers paid for officers for schools. But at Douglas High the officer stayed safely in the school parking lot and Robb Elementary’s officer failed to engage the assailant. Taxpayers, I think, expected police officers to try to stop school shooters.

Can we expect better? Writing at Reason.com, J.D. Tuccille thinks not because, “officers are regular people working a unionized public-sector job” and have “no stake in the situation and families waiting at home.”

I think most police officers take their responsibility to “protect and serve” very seriously. Ours is a government of the people. Police officers ultimately work for us.

Detailed rules of engagement should be crafted by experts and not voters, but we set the broad parameters. If we want school shooters engaged immediately, we can and should insist on this.

We need a timely armed response to school shooters. Security guards at banks routinely engage bank robbers. If America’s police forces will not step up, we could cut police budgets and hire private security for our schools.

Daniel Sutter is the Charles G. Koch Professor of Economics with the Manuel H. Johnson Center for Political Economy at Troy University and host of Econversations on TrojanVision. The opinions expressed in this column are the author’s and do not necessarily reflect the views of Troy University.

Elon Musk’s Twitter takeover has stirred many emotions. The deal also illustrates the unprofitability of media bias, a challenge that claims of media bias must confront.

The drama began with Mr. Musk announcing on April 4 a 9% stake in the company. After turning down a seat on the Board of Directors, Mr. Musk proposed on April 9 buying Twitter at $54.20 per share, or about $44 billion, and taking it private. The Board accepted the deal on April 25; the deal is currently paused over concerns about bot accounts.

The stock was under $40 a share at the end of March, so Mr. Musk offered 35% above market value. Stock markets are forward-looking, with prices reflecting investors’ projections of future profitability. Mr. Musk likely believes he can make Twitter worth more than $44 billion. We do not know all the changes he intends. In addition to committing to free speech, Mr. Musk has mentioned the pricing of premium service, the elimination of advertising, and open-source algorithms.

Plausibly some of the expected increase in the value of Twitter is from eliminating political bias. This illustrates the general point: media bias should reduce profits. Allegations of liberal bias must address this challenge, as I argued in a 2001 Cato Journal paper.
Liberal bias reduces profits by alienating conservative or moderate users, readers, or viewers. In Twitter’s case, this included banning one of its most-followed users, Donald Trump.

The economics of bias in the era of the Big Three TV networks were even more problematic. If CBS, NBC and ABC were as liberal as alleged, they would lose conservative viewers and divide the liberal viewers three ways. A network could increase its audience and profits by leaving the liberal news cartel.

Economists assume businesses maximize profit. In part, this helps in building models to study the economy. But it also reflects self-interest. People start businesses to make money, so most entrepreneurs should choose more profit if possible.

The separation of ownership and control in corporations creates one challenge to profit maximization. Owners get the profits while the managers doing the work may receive little of the extra profit. I will not consider this important topic in economics and finance today.
One acceptable deviation from profit maximization is highly relevant for media bias.

Imagine a TV station owner who cares about politics, say election of the next mayor. We might expect the owner to slant news coverage and only run ads for the favored candidate.
Why do economists accept this? Normally the best way to get things we want is buying them. Make money running your business and then buy yachts or ski villas. But sometimes you can “buy” what you want more readily through your business. A $10,000 reduction in profit from biasing coverage of the mayor’s race may affect the outcome more than a $10,000 campaign contribution.

Persistent media bias involves owners choosing politics over profit. David Halberstam in “The Powers That Be” detailed the influence of major news company owners in the 1950s, yet more owners leaned right than left. One owner is far more likely to trade profit for politics than many; some stockholders will prefer profit, and not all who choose politics will be liberal.

Twitter, Facebook and Google (Alphabet) are all publicly traded corporations.
Extreme social media bias should also be very costly. Thousands of corporate investors must choose liberal politics over profit. Bias also invites entry by “fair and balanced” or conservative rivals, either as with Rupert Murdoch and Fox News or through a corporate takeover.

Precise definition and measurement of bias remains elusive. Psychology warns that our impressions of bias may be biased. The economic difficulty of sustaining liberal bias suggests carefully scrutinizing the evidence. And if a social media company committed to free speech is more valuable, some good capitalists should provide one for us.

It’s a seemingly simple request. Surely Washington can lower the price of gasoline to provide Americans relief at the pump. Democrats in the House recently passed the Consumer Fuel Price Gouging Protection Act to this effect. Yet as 4,000 years of experience shows, governments have little ability to lower market prices through command.

Markets are a system of voluntary social cooperation. No person can compel another to do anything in a market. Each sale of a good or service must involve a willing buyer and a willing seller, with willing defined relative to each party’s other options. I’ll return to some concerns about the voluntariness of market transactions shortly.

Many critics believe big oil companies dominate the world economy. But ExxonMobil, Texaco, Shell and others cannot force anyone to buy gas. Nor can we force them to sell; we must pay the station’s price to buy gas.

Production of most goods involves contributions from many persons. Thus, “willing seller” means compensating each contributor (employees, suppliers, etc.) sufficiently to voluntarily participate. A market economy gives people the freedom to choose, which would be great even if markets left everyone poor. But free markets also produce unprecedented prosperity.

The Consumer Fuel Act prohibits price gouging for gasoline during emergencies. Presumably, President Biden would declare an emergency if the bill becomes law, so the Act would function as a price ceiling (a legal maximum price) for gas.

Government uses force. Price controls insert force into the voluntary market economy. If President Biden sets a maximum price of say $3 per gallon, anyone selling gas for more than this could be fined or arrested.

Price ceilings represent limited intervention into markets. No seller is forced to sell gas at the set price. Suppliers normally sell more at higher prices, which is the Law of Supply. Consequently, less is available under price ceilings than at the market price (the price where the quantity of gas drivers demand equals the quantity offered for sale).

Before the pandemic, in 2019, Americans purchased 140 billion gallons of gas. The Act implicitly promises Americans this much gas for a price below $4.60/gallon. This is a false promise. Exactly how much less gas will be available is debatable; the decline should be modest initially and increase over time.

The gas provided under a price ceiling goes for less, so the drivers who buy this gas benefit. But price ceilings produce shortages; if you are my age, you might remember 1979’s gas lines. At high market-clearing prices, goods are still available for purchase. Shortages are very painful, as the ongoing baby formula shortage demonstrates. Is paying $5 a gallon worse than being unable to drive to work?

Let’s return now to the voluntariness of market transactions. You might be reading this and thinking, “But I have to buy gas.” Or perhaps, “People have to work to eat.” Let’s examine these points.

Life involves many decisions taken at different times. You choose to take a job, where to live, and to drive a car to work. This commits you to buying gas each week. It may feel like compulsion but is just the last part of the plan. Failing to buy gas massively disrupts your plan: you could lose your job and then your house. You buy gas today almost regardless of the price but can adjust if high prices persist.

What about working? Yes, we need a job but not any specific job. Consequently, no employer – no other person – can command you. You can also work for yourself. And Nature, not other people, drives this necessity: people need food, shelter, and clothes to survive.

Governments seem powerful but our market economy consists of 330 million Americans and billions of others across the globe. The price of gas reflects oil market participants’ actions worldwide. Politicians cannot lower the cost of producing gasoline or other goods, although policies can artificially increase prices. Rising inflation prompted President Richard Nixon to impose wage and price controls in 1971, and the House wants to try again.

San Francisco is one of the most beautiful and affluent cities in the world. And yet San Francisco, Los Angeles, and Seattle have significant and growing homeless populations.

Anyone interested in the ills of the West Coast should read Michael Shellenberger’s book “San Fransicko: Why Progressives Ruin Cities.” His answer: “much of what I and other progressives had believed about cities, crime, and homelessness was all wrong.”

First some background on the author. Mr. Shellenberger has mostly written on environmental issues. He was a Time magazine “Hero of the Environment,” and founded the Breakthrough Institute and Environmental Progress. His most recent book, “Apocalypse Never,” argued against climate change as an existential threat.

How specifically do progressives destroy cities? For one, by tolerating lawbreaking and reducing or eliminating penalties for minor criminal offenses. Beyond this, progressives “prefer homelessness and incarceration to involuntary hospitalization for the mentally ill and addicted” and ignore the failures of “harm reduction” for drug addiction. They further object to temporary shelters or placing conditions on access to housing. And finally, they accuse anyone disagreeing with them of hating the poor and minorities.

The closure of military bases after the Cold War ended also factors in. The exit of military families ended party competition; as Mr. Shellenberger notes, California voted for a Democrat for president just once between 1948 and 1988. Liberal dominance in Sacramento precludes any state check on cities.

Perhaps the most fascinating element is the role of non-profit organizations. Proponents of voluntary society and minimal government, like myself, envision a major role for non-profits. Charities can provide virtually all welfare state services, probably more effectively than government. How then are non-profits part of the problem?

Many influential progressives work for non-profits as opposed to holding elected office. Some non-profits will have bad ideas, just as some businesses do. But businesses with bad ideas lose money. A deli that tried selling mud sandwiches would quickly have to adjust or go out of business.

But as Mr. Shellenberger details, the non-profits never have to recognize their mistakes or bear any consequences. They receive funding from liberal foundations and government contracts to administer homelessness programs. The perverse logic of the public sector explains some of this; government failure often produces budget increases. The progressive non-profits also skillfully portray all dissenters as hating the homeless.

The book also challenges libertarians on drug legalization. Libertarians believe that the harm from drug prohibition outweighs any potential benefits. Yet de facto legalization of hard drugs in San Francisco has not gone well, with staggering numbers of overdoses and poisonings.

Let’s unpack this starting with the mentally ill, who comprise a sizable portion of San Francisco’s homeless. I agree that “no sane psychiatrist believes that enabling and subsidizing people with schizophrenia, depression and anxiety disorders to use fentanyl and meth is good medicine.” Beyond the mentally ill, I view the opioid epidemic as a tragedy: America offers historically unparalleled wealth and opportunities, and yet too many Americans find life not worth living. That is a topic for another day.

Would jailing drug abusers improve their lives? I agree with Mr. Shellenberger that people rarely address drug addiction until ready to change. But he argues that the prospect of prison often provides such motivation.

Mr. Shellenberger ironically notes how progressives so badly fail the poor and powerless, whom they care so much about. I would offer a different take. Progressive planners intent on solving all the world’s problems do not truly care about real people. They care about their grand plans, which may or may not help living, breathing humans. I see this as behind the frequently documented friction between people helping the homeless and the progressive non-profits.

California leads the nation in many trends, for better or worse. Let’s hope homeless encampments are not one of them. Liberals have long celebrated good intentions over results. “San Fransicko” details the consequences of decades of well-intentioned progressives’ reckless disregard for results.

We have been told repeatedly to “follow the science,” which usually means submit to expert rule. But experts, no matter how smart, cannot run society efficiently.

Recognizing the limits of experts is not anti-science. The Federal Reserve’s inflation forecasts provide more evidence on this.

Writing in 1767, Adam Ferguson was one of the first to recognize that institutions are often products of “human action but not human design.” They arise and evolve spontaneously as people discover things which improve their lives. Economists Ludwig von Mises and Friedrich Hayek detailed the limits of expertise during debates over socialism in the 1930s and 1940s.

Once you know what spontaneous order is, you recognize its prevalence, and not just in economics. Consider language. No one person or group formulated English and new words continue to emerge. People used “ain’t” before it was in a dictionary.

The absence of a planner allows the use of far more information than any person or board could possess. Spontaneous orders thus achieve far greater complexity.

Leonard Read’s classic essay, “I, Pencil,” illustrates this truth: no one knows how to produce a simple pencil. Money is another spontaneous institution. People wanting to exchange peacefully with each other instead of taking or stealing discovered that using gold or silver made trading easier.

But governments took control of money before the rise of political liberalism, which held that government was legitimate only when serving the people. Before liberalism, powerful kings and emperors forced their subjects to serve them.

The Federal Reserve System is quasi-public, with a board of governors appointed by the president and 12 regional banks controlled by member commercial banks. The Fed possesses considerable independence from politicians. The governors serve staggered 14-year terms. Member banks select the regional bank presidents.

Economists have found central bank independence to generally be beneficial. Nations with more independent central banks have significantly lower inflation rates without sacrificing growth. Economist Robert Barro likens central bank independence to the elusive free lunch.

Yet independence makes it difficult to ensure that the Fed truly serves citizens, a central argument of Alex Salter, Daniel Smith and Peter Boettke in “Money and the Rule of Law.” Just as taxation resembles theft (armed persons taking money that belongs to others), creating new currency resembles counterfeiting.

The consent of the governed distinguishes taxation from theft and monetary policy from counterfeiting. The Fed conducts discretionary monetary policy, increasing or decreasing the money supply to try to stabilize the economy. Economists do not agree whether monetary policy can accomplish this.

The Keynesian models of the 1960s, which promised fine-tuning of the economy, were badly flawed. Most economists likely agree that the Fed cannot improve economic performance without accurate forecasts.

We are experiencing the worst inflation in forty years. The Federal Open Market Committee (FOMC) conducts monetary policy and makes projections of key economic variables, including inflation. The Fed prefers the personal consumption expenditures (PCE) price index to the Consumer Price Index (CPI), and the PCE index currently shows lower inflation than the CPI, 6.3% versus 8.5%.

The differences, however, are not important: both have risen sharply over the past year. The Fed did not see this coming. As tallied by Florida Atlantic economist William Luther, in December 2020 the FOMC projected 1.9% inflation for 2022; as late as December 2021 the forecast was only 2.2%. If price increases slow in the second half of 2022, inflation will be less than the current 6.3% but still well above the forecast.

The failure is not for want of experts. The Fed employs hundreds of PhD economists and pays top salaries. Many top professors in macroeconomics and money have Fed appointments.

Experts assure us they will improve society if we obey their commands. Yet when we pay attention, we see the limits of expertise. The Fed’s inability to see the onrushing freight train of inflation that has smacked America shows again that experts know much less than they think.

Rapidly rising house prices are contributing to inflation but may also signal a housing market bubble. Overpriced real estate contributed to the financial crisis of 2008 and the Great Recession. Do we need to fear another housing crash in addition to inflation?

How much have prices gone up? The Case-Shiller index has jumped 35% since January 2020; between 1996 and 2006, this index rose 125%. Some markets have seen increases of nearly 45%. Prices in Alabama have increased at about the national rate.

What exactly is a bubble? A bubble refers to excessively high and rising prices for assets. What is too high? Consider how you could decide how much to pay for a house. One approach considers the value you would get from living in the house. This would depend on your income, other expenses, and preferences.

You could also answer based on what others would pay. You might only be willing to pay $100,000 to use a Florida beach condo. But if you could sell it for $500,000, you would gladly pay more than $100,000.

A bubble occurs when prices exceed the value in use, referred to as the fundamental value. High and rising prices can be self-sustaining. Someone might pay $500,000 for the condo to sell it for $700,000, and someone might pay $700,000 hoping it will go to $1 million. Eventually, the speculative bubble must burst because prices cannot rise to infinity.

Investors can lose lots of money in a speculative bubble. But investors spend their own money and should be aware of the potential for loss. Bubbles can, though, have detrimental effects on the economy. Prices serve as signals, and builders will build more homes in response to high prices. But these homes will not be needed after the bubble bursts. Temporarily high prices might also lead homeowners to make decisions they will later regret, like saving less because they think their house will fund their retirement, or borrowing against equity which then disappears.

Economists have developed new statistical tests to try to detect bubbles. One is produced by the Federal Reserve Bank of Dallas. Any test for a bubble must estimate the fundamental value, or the price based exclusively on use. The main indicator the Dallas Fed uses is the ratio of home prices to rents. Purchase prices and rents should be related, since potential homebuyers could always rent instead.

This method then considers economic factors affecting the price-to-rent ratio. If economic factors drive up the price-to-rent ratio, this is not a bubble. Finally, they consider whether an observed increase in the ratio might occur by chance. When the price-to-rent ratio sufficiently exceeds that based on economic forces, we are in bubble territory.

The price-to-rent ratio has now been indicating a bubble for six consecutive quarters. When calculated retrospectively, this ratio indicated a bubble for a full decade before the Great Recession. It is not 2006 yet. And an index using the home price to income ratio has a weaker bubble signature.

No one statistical test will ever conclusively identify an asset bubble. Part of the reason is the same reason bubbles start: people disagree over future fundamental values. Many Americans have decided to move due to the COVID-19 pandemic. Remote work is allowing people to move out of cities. Others are fleeing blue states where governments enacted draconian COVID restrictions. Lumber shortages and price spikes related to COVID disruptions slowed home building. House prices may reflect economic forces after all.

The prevalence of bubbles is one reason economists disagree over the desirability of free markets. Market prices are supposed to act as signals, as described, but bubbles produce unreliable signals. With frequent bubbles, markets will not effectively direct production. Free market economists see bubbles as relatively infrequent, unavoidable, and often caused by government.

What does a potential bubble mean for you or me? Would-be real estate moguls should exercise caution. Homeowners should be wary of capital gains they have not yet realized. If the Federal Reserve gets serious about controlling inflation, interest rate hikes may well burst a housing bubble.

In 2020, the federal government started sending checks to many Americans in response to COVID-19. The presumption was that Uncle Sam’s checks would make us better off. But government transfers cannot make a nation wealthier and have contributed to inflation.

To understand this, we must distinguish money and currency. Money is a medium of exchange, or a way to make purchases. People accept money when they sell things because they expect to exchange it for things they want. Many different items have served as money in different places and times, from tobacco to gold, silver, paper, and soon possibly Bitcoin.

You or I have money because we produced something, worked for someone, or sold something of value, like a car. Or because someone who earned money gave it to us as a gift or inheritance. Money represents unconsumed production.

Currency is the item serving as money. In the U.S. today, it is the dollar of course, green pieces of paper or entries in bank accounts.

Once an economy moves beyond using commodity money, currency must be produced. Whoever produces the currency can acquire goods and services without producing anything. This explains the necessity of controlling counterfeiting. A good currency must be hard or impossible to duplicate.

Governments long ago took over supplying currency because the supplier can usually make some extra to spend themselves. When metal coins served as currency, a type of counterfeiting called “scraping” was possible. Someone could scrape a little off several coins and mint an extra one. Kings protected against scraping by placing their royal seal on the coin; a defaced seal would reveal scraping. Of course, kings charged for minting coins.

Beginning with the CARES Act, Uncle Sam authorized $4.6 trillion and spent $3.6 trillion, much of it transferred to households and businesses through stimulus checks, the Paycheck Protection Program, and assistance for landlords. Yet the government can only make currency, not money. If you earn an extra $3,000 from part-time work, you can spend more than before, but there are also more goods and services available. A $3,000 stimulus check did not make more goods and services available.

One of Adam Smith’s great insights in “The Wealth of Nations” was recognizing that wealth depends on our ability to produce and consume. Previously people associated wealth with possessing large quantities of gold and silver. Smith recognized that the true value of gold was its ability to be exchanged for consumption goods.

More currency does not increase our ability to produce goods and services. Currency creation can make the persons who get the new dollars to spend first better off. Suppose the Federal Reserve doubled the number of dollars and gave them all to you. Prices would likely double, but you could still buy far more than before.

Government can also tax money from some Americans and transfer it to others. But taxes and transfers cannot make us all better off. If Uncle Sam taxed everybody $10,000 and gave us the cash back, we would merely be where we were before.

COVID spending arguably reduced our ability to produce goods and services. Government checks caused some to work less or stop working altogether. Labor shortages have worsened supply chain problems, making us at least temporarily poorer.

Inflation costs Americans as well. Most Americans will receive raises to offset higher prices (this is part of the inflation), but prices and our incomes do not rise simultaneously. And we have no guarantee that a raise, when it comes, will completely offset inflation. The cost of inflation includes fear and anxiety.

All of this began with an impossibility: government redistribution cannot make all Americans more prosperous. Confusing currency and money sustains an impression that larger bank balances increase prosperity. Today, some politicians promote government checks as relief from inflation. They apparently hope to pull the same ruse on us again.

Cities across America own golf courses, but analysis from the Reason Foundation shows that many lose money on them.  Is subsidizing golf a proper function of government?

The Reason study examined cities reporting finances for their golf courses.  Seventy percent of those cities lost money in 2020, a year which may have been unrepresentative due to state COVID restrictions.  All four Alabama cities in the report (Fort Payne, Gadsden, Millbrook and Pelham) lost money on golf.

In the market, losses indicate that businesses are not producing value equal to the value of resources used.  Producing low valued products or services with scarce resources reduces our standard of living.  Are cities wasting money on golf?

Answering this question is a little tricky.  Golf courses create value in two ways.  One is through play (although golfers sometimes wonder why they pay for such frustration).  Golf courses also boost the value of adjacent real estate.  Developers may build golf courses to sell houses, not to earn profits from course operation.  Privately owned courses may create value even if losing money.

Cities generally do not generate auxiliary revenue from golf, so operating deficits are likely being covered out of the budget.  Tax dollars have alternative uses, like repairing roads.  Should the courses be closed, or is the problem poor management?

Interestingly, firms in the golf industry manage municipal courses.  Companies offering this service include Troon, Hampton, ClubCorp, and Arnold Palmer.  Peter Hill helped build one of the first management companies.  A Golfweek story summarized his observations: “Often, municipalities allowed courses to fall into disrepair, didn’t manage the books well, or had trouble finding the proper price point.  Or sometimes they realized they simply didn’t know how to run a golf course efficiently.”

Abraham Lincoln said that government should only do things that people “need to have done, but can not do at all, or can not so well do, for themselves.”  The existence of private golf courses creates a strong presumption against government golf.  Yet local governments provide many services also provided by businesses.  Government offers unfair competition for business.  A city facility charging prices too low to cover costs rarely goes out of business and the manager (a government bureaucrat) may not be fired.  Businesses can go broke competing with tax subsidized prices.

The political influence of avid golfers is a bad reason for government golf.  Although municipal golf courses are open to all, only 24 million Americans played a round in 2019.  Avid golfers are a portion of this total and on average have incomes well above the national median.  Taxpayers should not fund anyone’s hobby.

Two better arguments exist.  The first is expanding opportunities.  Golf is an expensive sport which many children never get to try.  It offers many networking opportunities for professionals, so some familiarity could help promote upward income mobility.

Private efforts like the World Golf Foundation’s First Tee program, however, may better achieve this goal.  Cities could use tax dollars for vouchers for play by low-income residents or golf field trips for middle and high school students instead of owning golf courses.

The second argument is golf as an amenity.  Restaurants, museums, recreation, and golf affect the “livability” of a region and help businesses attract and retain good workers; economists call these local public goods.  Individual businesses do not supply such amenities on their own and may have to pay higher salaries to employees in areas with few amenities.

The conditions for legitimate investment in amenities, however, are strict.  There must be no privately owned, open-to-the-public golf courses within reasonable driving distance.  (This also alleviates unfair competition concerns.)  Many amenities are not available in rural areas.  Evidence should be presented to show the value of golf versus other amenities (e.g., bowling or laser tag).

Local governments are probably better served focusing on roads, schools, and trash collection.  Enterprising politicians are sometimes called political entrepreneurs.  Politicians who want to go into business should use their own money, not tax dollars.

Russia’s invasion of Ukraine sent already rising oil prices even higher. Record gas prices are fueling the highest inflation rate in 40 years. President Biden blames high gas prices on Mr. Putin, but administration policies are hampering U.S. oil production.

Markets are forward-looking and incorporate new information almost instantaneously. Anticipated events will affect commodity and stock prices before they occur. Experts were surprised by the full-scale invasion, which likely explains the price rise from $90 to $120 per barrel over the following two weeks. But the increase from $40 in October 2020 to $90 in February seems hard to blame on Mr. Putin.

The Institute for Energy Research (IER) maintains a scorecard on Biden energy policies. Mr. Biden canceled the Keystone XL pipeline on Inauguration Day. The XL segment was not going to be completed until 2023, so White House press secretary Jen Psaki is correct that this is not reducing oil supplies today. But by foreshadowing administration policies, it could easily have driven up prices.

The Biden administration has stopped development in the Arctic National Wildlife Refuge and the Alaska National Petroleum Reserve and halted new leases on Federal lands and waters. A court ruling blocking a large Gulf of Mexico lease has not been appealed.

Ms. Psaki repeatedly cites 9,000 unused Federal leases as demonstrating industry culpability for high prices. As IER explains, oil production involves two steps: leases and drilling permits. Companies first sign leases for exploration and then apply for drilling permits where oil is found. A near doubling of the permit approval time under President Biden has produced a backlog of 4,000 applications.

President Biden has reversed President Trump’s reforms of the National Environmental Policy Act and the Clean Water Act. The policy process previously allowed environmental groups to endlessly litigate required environmental reviews, tying up production and pipelines for years. Wise policy should balance environmental costs and economic benefits and proceed when we decide that the benefits outweigh the costs. Prior to the Trump reforms, environmental groups nearly possessed veto power.

Mr. Biden is simply, in IER’s view, delivering on his 2020 election pledge: “No ability for the oil industry to continue to drill period. It ends.” And now the president is asking Iran, Venezuela and Saudi Arabia to pump more oil. Everyone, it seems, except America.

Anyone believing that climate change poses an existential threat to humanity must advocate such policies. Meeting the new goal of limiting temperature rise to 1.5 degrees Celsius will require an end to the use of fossil fuels within 10 or 20 years, not the distant future.

Prices and quantities are related. At a sufficiently high price, the quantity consumers are willing and able to purchase (the textbook definition of demand) will be zero. Banning gasoline pushes the quantity to zero but can also be interpreted as driving the price to infinity. High and rising gas prices are not a flaw of fighting global warming, they are the plan.

The only glitch is perhaps that the Ukraine invasion gave us 2023’s price of gas in March 2022, resulting in more pain sooner than intended. California Governor Gavin Newsom, who wants to ban the sale of gas-powered cars by 2030, now generously proposes rebates to Californians as relief from $6 a gallon gas.

We may be approaching a point of no return for domestic oil and natural gas production. Developing oil and gas involves enormous capital investment in wells, storage, transportation (pipelines or railroads), and refining or processing. These investments require years of use to recoup.

I do not support ending fossil fuel use to fight global warming, and you may wish to discount my investment insight. But how can drilling oil or natural gas wells to be used for only 20 (or perhaps now 15 or 10) years be profitable? A four-year reprieve from a Republican president may soon be irrelevant. A credible commitment not to ban fossil fuels may soon be necessary to significantly increase production.

I support free markets and economic freedom. But do all markets make society better off? The college cheating industry offers a challenge. An internet search quickly reveals the abundant assistance available.

Companies and freelancers will write papers, even giving money-back guarantees. Uploading pictures of exam questions on a phone can get answers delivered. Entire classes and degree programs can be taken.

As a professor, I could easily moralize about cheating. But let’s consider the economics. A market for cheating exists because some college students are willing to pay for help. Specifically, they will pay enough to induce individuals capable of, for example, writing good term papers, to do so. The compensation must also offset any guilt about participating in misconduct.

Cheating clearly predates the internet, but the internet now greatly enables this market. Students can easily connect with providers. Services can pay for ads to appear in internet searches. And paper writers use the internet to research topics quickly. Students demand custom-written papers because plagiarism detection software can now easily identify content lifted from the internet. Paper writing services routinely include plagiarism reports to assure customers of original content.

Further exploration of the supply and demand sides of the market raises concerns about higher education. On the supply side, many writers (seemingly) are graduates of colleges in the United States, Canada or Britain. (Poorly written papers are apparently common with the cheapest services.) Unemployed honors English grads provide many of the testimonials from cheating industry workers.

Some higher education critics argue that we have too many college graduates. Their evidence is often ambiguous. That the cheating industry can hire persons capable of researching and writing good papers on tight deadlines for about $10 per page speaks volumes about the job opportunities of at least some college grads.

On the demand side, the major question is why cheating works. Teachers warn cheaters that they only cheat themselves. This statement contains some truth. Cheating lets students complete an assignment or class without learning the material.

Does this truly help? Suppose someone cheats their way through truck driving school. How will they get and hold a job if they cannot put a truck into gear and drive it? Given this, why pay truck driving school tuition and then pay to cheat? The demands of jobs should limit the demand for cheating.

Cheating is more likely in classes unrelated to the jobs students will seek. College curricula feature required courses of little direct relevance to a major, like chemistry for future bankers. Shortening the bachelor’s degree by eliminating unrelated required courses should mitigate cheating.

College and graduate degree requirements are imposed by occupational licensing to reduce the number of practitioners. Occupational licensing is government permission to legally work in a field and has grown enormously in the United States.

Such degree requirements will be particularly susceptible to cheating; employers will not care if applicants lack irrelevant knowledge. What are the cheating industry’s consequences? The willingness of some to cheat requires professors and universities to incur costs to control and deter cheating.

The costs parallel the costs to businesses of shoplifting and employee theft. We could enjoy a higher standard of living if no one were willing to cheat (or steal). Cheating also diminishes the value of grades and degrees.

This is often described as unfair to students who study and earn their grades. But for the economy, cheating makes grades and degrees less effective in identifying strong students for employers. This is particularly costly when employers cannot quickly identify unqualified applicants, unlike in the truck driving case.

Cheating seemingly resembles other “victimless” crimes like illegal drug use. But this is not correct. The contract students have with colleges prohibits academic misconduct. Cheating involves contract violation, not merely consuming an unpopular product.

People will supply what others are willing to buy. But contracts are a foundation of economic freedom and enforcing contracts is a fundamental task of government. Protecting economic freedom does not require tolerating the cheating industry.

French economist and journalist Frederic Bastiat wrote, “When goods don’t cross borders, soldiers will.” That international trade contributes to peace is a tenet of classical liberal and contemporary libertarian thought. How might trade accomplish this, and what does the evidence show?

Trade reduces conflict in two ways. First, war disrupts trade; goods are unlikely to be shipped across a battlefield. Both exports and imports benefit a nation. The larger the volume of trade, the greater the pain from disruption.

Second, doing business can change attitudes and perceptions. Nations do not trade; individuals and businesses do. The individuals engaged will get to know citizens of the other nation. They will see each other as real people. Peaceful interactions break down hatred and create friendships.

Other social and cultural exchanges can also do this. Exchange students, scientific collaborations, sporting events and art exhibitions help erase nationalistic attitudes which politicians exploit for personal benefit. Cultural exchanges help inoculate populations against opportunistic, power-hungry leaders.

The impact of trade disruption is real but only deters fighting if national leaders carefully weigh benefits and costs. Emotions can cloud rational thought. Overconfidence is often prevalent; both warring nations often expect a quick and easy victory. Overconfidence may also lead to underestimation of the costs from trade disruption.

International trade generates enormous economic benefits independent of its impact on war and peace. Economists since David Ricardo have recognized comparative advantage as the basis of beneficial trade. We benefit from buying the best and most affordable food, furniture, clothes, minerals, and computers from around the globe. And we need not trade only with our close friends; businesses might trade with foreigners we consider untrustworthy.

What does history show? World War I violates the trade and peace thesis. The global economy would not attain 1914’s level of trade again for almost 75 years. England, France and Germany had significant scientific and cultural exchanges, and as Barbara Tuchman observed in The Guns of August, their royal families were even blood relatives. Commerce and cultural ties did not prevent the Great War.

International trade fell considerably by World War II. The Great Depression and the ensuing trade war left little trade in place. Few goods crossed European borders in 1939.

NATO and Soviet bloc nations engaged in little commerce during the Cold War. Extensive trade occurred on each side of the Iron Curtain, but few goods crossed. International trade does not explain the U.S. and Soviet Union avoiding World War III.

Formal tests of the trade and peace thesis must include countries that never fought. Statistical tests examine “dyads” or pairs of countries to identify factors affecting the probability of conflict. A 2008 study confirmed that higher trade volumes reduced conflict. The worldwide increase in trade between 1970 and 2000 reduced the likelihood of conflict between a pair of nations by 20 percent.

We must be careful about causality when two peaceful nations trade. The potential for disruption matters for establishing trade relations. For instance, McDonald’s has suspended operations in Russia over the Ukraine invasion. McDonald’s will not open restaurants it expects to close soon. Businesses do not want supply chain disruption and so will not source parts from a nation with which conflict is likely.

The 2008 study employed several statistical methods to address these alternatives, increasing confidence in its results. Still, it may be impossible to “grow” trade between two hostile nations to make war less likely.

Russia’s invasion of Ukraine may provoke a new Cold War. Is international trade crucial to maintaining future peace? In the near term with ongoing hostilities in Ukraine, a suspension of trade is justified, if merely to signal our moral outrage. In the longer term, trade increases the cost of war. The libertarian argument is technically correct. Unfortunately, the economic benefits of trade appear minor in the political calculus leading to war.

The COVID-19 pandemic disrupted life in many ways. Governments across America assumed new powers without explicit authorization. But laws restricting businesses were also suspended. As the pandemic ebbs, we should evaluate this deregulation experiment and consider permanent changes.

Americans for Tax Reform counted 846 suspended federal and state rules. Some were narrow matters, like allowing ambulances to transport patients to urgent care facilities. Others were more substantial, such as enabling ventilator production and no longer requiring the CDC to perform all COVID tests.

Chicago Mayor Rahm Emanuel famously opined, “Never let a crisis go to waste.” Many politicians have taken this to heart during the pandemic. As the research of economic historian Robert Higgs shows, “crises” are frequently used to permanently expand government.

I totally oppose exploiting crises for political gain. Temporarily adjusting rules when changed circumstances alter the benefits and costs is prudent and wise. But using deceptive wording to make a temporary suspension permanent undermines social trust. We should be helping each other during a crisis, not guarding against dirty tricks, and politicians who play them deserve our scorn.

The hundreds of waived rules provide natural experiments, and we should evaluate the evidence. Many proponents of government rules fear that unregulated markets would produce disaster. What happened without government supervision during pandemic deregulation?

Economists are undertaking such research and the results will emerge in published studies. Sober deliberation might lead us to amend or abolish some of these laws. This is how a crisis should change policy.

What already seems clear? Allowing restaurants to sell cocktails to go provided important relief. Thirty-nine states and the District of Columbia permitted this during the pandemic, with more than a dozen making it permanent. Alcohol can account for over one third of a restaurant’s sales and carries high markups over cost; take-away food sales alone could not replace this revenue.

The value of cocktail freedom going forward is unclear. Customers probably valued their favorite cocktails when forced to dine at home. To-go drinks will need to be distinctive to remain attractive to take-home customers given the steep markups.

Health care has featured some significant rule waivers. Telehealth has received an enormous boost. Like remote work, the required technology has existed for some time. Legal restrictions were holding telehealth back. The pandemic forced experimentation for patients fearful of catching COVID at a doctor’s office.

Telehealth, though, offers enormous benefit going forward, particularly for residents of underserved rural areas. Safety is also a factor: individuals with health conditions can avoid potentially dangerous drives to doctors’ offices. Patients with rare illnesses or difficult cases can consult more specialists.

State licensure creates barriers for virtual consultation across state lines. State medical boards claim to uphold quality in licensing, but this is only true if other states license unqualified quacks. I read about a Pennsylvania patient again facing a two-hour drive to Johns Hopkins in Maryland with the end of the pandemic exemption. Does the Pennsylvania medical board truly think that doctors at Johns Hopkins – one of the nation’s leading medical schools – are not qualified to treat Pennsylvanians?

Pandemic deregulation waived limits on medical professionals known as scope of practice regulation. For example, physician assistants were allowed to practice to the full extent of their training. Scope of practice limits are driven by profits, not safe medicine, and simply keep professionals from fully employing their expertise. Researchers will determine whether these exemptions increased misdiagnoses; if not, that would demonstrate the limits’ lack of medical purpose.

Liquor stores have opposed takeaway cocktails and offered a safety rationale: customers might imbibe while driving home. But so could thirsty liquor store customers. Pandemic deregulation’s most enduring benefit may prove to be exposing bogus rationales for rules benefitting one group of businesses over another at the expense or inconvenience of consumers.

The United States and Europe imposed economic sanctions on Russia for its unprovoked invasion of Ukraine. I will let others debate the sufficiency of this response and consider the economics and effectiveness of sanctions.

Economists have analyzed sanctions both theoretically and empirically. Theory helps us identify differences between observed outcomes and the unobserved alternative without sanctions. One immediate implication: a proper evaluation must include cases where the threat changed policy. In any model of negotiations and conflict – including wars, labor strikes, and sanctions – the costs of conflict push the parties to negotiate.

Indeed, wars and strikes should not occur with perfect information. If Ukraine and Russia both knew the outcome of the invasion, they could negotiate a settlement based on that outcome and avoid the death and destruction. Sanctions tend to be imposed when negotiations break down.

Sanctions temporarily block trade between parties. If we currently trade very little with a nation, a halt is not very impactful. And the potential for political or military conflict makes businesses less likely to establish trading relations. Choosing a supplier in a country likely to be sanctioned will only produce supply chain disruption.

This produces a paradox. Sanctions would be most effective against allies yet are imposed against enemies. Although this is sensible, sanctions get employed when least likely to be effective.

Sanctions can take a long time to work. South Africa’s racist Apartheid regime faced sanctions for 30 years before collapsing. If sanctions take years or decades to work, is this success?

South Africa faced diplomatic, travel, and cultural sanctions in addition to economic sanctions. Separating the impact of economic and other sanctions is extremely difficult. Yet we need to assess the benefits of economic measures.

Gary Hufbauer, Jeffrey Schott and Kimberly Elliott have compiled the most extensive sanctions database through three editions of Economic Sanctions Reconsidered. For each case they estimate effectiveness in achieving political goals (on a 16-point scale), the cost imposed on the target country’s economy, and the cost to the initiating country. “Success” to varying degrees occurs in about one-third of cases. Some sanctions have been notoriously ineffective, like the League of Nations’ sanctions on Italy for invading Ethiopia in 1935.

The researchers define success relative to political goals. Alternatively, we might ask whether sanctions imposed significant costs on the offending nation. Will sanctions make Putin pay for invading Ukraine?

Perhaps. Sanctions reduced the GDP of white-ruled Rhodesia (now Zimbabwe) by over 10 percent in the 1970s and Iraq’s GDP by nearly 50 percent after its 1990 invasion of Kuwait. International cooperation is crucial for effectiveness because most trade must be shut off for the greatest possible impact.

Sanctions have been called the economic equivalent of carpet-bombing cities. They inflict pain on “noncombatants.” The Apartheid sanctions hurt oppressed black South Africans; incidental harm must factor into our evaluation.

Many people see nations through a collectivist lens, justifying harm to any Russians since Russia invaded Ukraine. As a proponent of personal freedom, I reject all forms of collectivism. Thousands of Russians have reportedly been arrested for protesting the invasion. Again, incidental harm must be taken very seriously.

Fortunately, sanctions are increasingly targeted. The Obama administration began targeting banks doing business with rogue regimes and selected Russian banks have been banned from the SWIFT international payments system. Improved surveillance reduces the ability of banks to violate sanctions without penalty.

I have focused on economics, but sanctions also have a moral dimension. Halting trade offers a way to denounce the invasion: we will not trade with barbarians. And Russian oil seems irredeemably stained with Ukraine’s blood.

Sanctions and halting energy imports could impose nontrivial costs on Russia. Unfortunately, invasions are rarely launched on strict cost-benefit grounds, limiting their impact. Yet this should not diminish the economic and moral significance of sanctions.

The Johnson Center and the American Institute for Economic Research recently held a conference on “The Future of Higher Education.”  Today I will share some of the insights from this event.

Higher ed faces a crossroads if not a crisis.  As Purdue University president and former Indiana governor Mitch Daniels noted, only college has experienced steeper price increases than healthcare.  Student loans now total $1.9 trillion, and politicians call for forgiveness.  Radical, woke professors alienate many Americans.

Two societal factors increase the challenge.  The first is a declining pool of traditional college age students.  The second is an extremely tight labor market encouraging employers to hire applicants without college degrees.  And the COVID pandemic has created uncertainty regarding students’ continued interest in the traditional college experience.

Speakers addressed two questions:  Will higher education survive?  And should it survive?  Having spent my entire adult life in higher ed, I find it depressing to hear the desirability of universities questioned.  But increasing numbers of “woke and broke” graduates legitimize the question.

President Daniels spoke about Purdue’s efforts to control costs, including a partnership with Amazon for textbooks.  Purdue has increased enrollment, in part because parents and students do not fear tuition hikes.  The university has also experimented with Income Share Agreements (ISAs), under which students pay a percentage of post-graduation income instead of tuition.  Currently colleges face no penalty if grads do not earn enough to repay loans.  ISAs should better align the incentives of students and colleges.

The conference featured panels on higher ed culture and financing.  Phil Magness, co-author of Cracks in the Ivory Tower, showed that a plurality of university faculty have always self-identified as liberal.  But the imbalance has grown dramatically since 2000, with 70 percent now liberal and extreme liberals outnumbering conservatives.  Many disciplines feature few dissenters from liberal orthodoxy.

Imbalance imperils freedom of expression as demonstrated by the “canceling” of speakers or faculty.  But as Dr. Magness explained, imbalance also fuels bogus research.  Arming America: The Origins of a National Gun Culture won a prestigious Bancroft Prize for history but was revealed to be falsified.

A tolerant campus culture must start at the top.  Under President Daniels, Purdue adopted the University of Chicago’s “Principles” for free expression.  Universities adopting the Chicago Principles signal opposition to cancel culture.

Administrative growth is a major driver of rising costs.  Administrators now outnumber full-time faculty; Yale allegedly has more administrators than undergrads.  The drivers of administrative growth, however, are less clear.  An amenities arms race involving luxurious dorms, celebrity chefs, and fancy recreation centers also escalates costs.

Pano Kanelos shared insights from his presidential tenure at St. John’s College and the newly founded University of Austin.  One surprising observation: inflation-adjusted net tuition has risen little over the past decade at most colleges.  Net tuition is what students pay after subtracting a college’s assistance.  Tuition continues rising but schools desperate to fill each year’s class offer larger scholarships.  Colleges bid aggressively for the dwindling pool of college age students.

Dr. Kanelos noted how college rankings like U.S. News reward spending per student.  Top universities spend prodigiously to boost their status (recall Yale’s numerous administrators).  Schools outside of the top fifty try to keep up because prospective students see amazing amenities when touring other universities.

There is some encouraging news.  Colleges must be accredited to receive Federal funding; most are accredited by one of six regional associations.  Until recently, colleges had to use their region’s accreditor; the Trump Administration ended the regional monopolies.  Competition should eliminate accreditation waste.

Predictions of doom for higher ed are not new.  The late Clayton Christensen predicted that half of colleges would close in a decade.  The University of Austin received inquiries from over 4,000 professors and 10,000 prospective students before setting up a website.  I think many Americans still hope higher education can contribute to a prosperous and virtuous future.

The New York Times’ 1619 Project examines the impact of slavery on America. One essay contends that our economic system was built on slavery. Was America’s ascension as an economic powerhouse due to slavery?

Slavery was a repugnant and evil institution. Its abolition is a sign of humanity’s moral progress. Slavery taints America’s founding and was incompatible with “all men are created equal.” Yet Emancipation took 90 years and a terrible civil war, plus another 100 years to extend the promise of freedom to all Americans.

Journalist Nikole Hannah-Jones curated the series of essays. Princeton University’s Matthew Desmond authored the essay linking American capitalism to slavery, drawing heavily on the scholarly research of the “New History of Capitalism” (NHC). I will focus on Desmond’s claim that, “Slavery was undeniably a font of incredible wealth.”

Economic historians have extensively studied slavery, including Nobel Prize winner Robert Fogel. Economic theory and econometrics have illuminated the institution’s operation. Economic historian Phillip Magness draws on this plus his own research in his The 1619 Project: A Critique, which I rely on here.

A plausible connection exists between slavery and prosperity during the first half of the 1800s. Textiles produced using cotton largely from the southern U.S. were an enormous part of the Industrial Revolution.

Before examining economic details, let’s consider the big picture. Slavery existed throughout human history. If slavery really could produce “incredible wealth,” presumably humanity would have prospered before 1776 or 1619.

How important was cotton to the American economy? NHC historian Edward Baptist claims that cotton accounted for nearly half of U.S. economic activity in the 1830s, a statistic cited in support of reparations for slavery. As Dr. Magness describes, Baptist erred; he “proceeded to double and triple count intermediate transactions involved in cotton production,” namely things like tools, land, and transportation. Yet GDP includes only final goods and services, precisely to avoid such double counting. As Magness continues, “Baptist’s numbers are not only wrong – they represent a basic unfamiliarity with the meaning and definition of GDP.”

Professor Desmond, again following NHC historians, attributes the 400% increase in cotton productivity between 1800 and 1860 to American slavery’s brutal efficiency. But economists Alan Olmstead and Paul Rhode, who established the productivity increase, attributed it to improvements in cotton seeds.

Another Desmond claim was the necessity of slavery to produce cotton in sufficient quantities. This turns on a technical question: returns to scale in cotton production. If economies of scale existed, only large plantations using slaves might have been able to capture them. Yet economic historians found no important returns to scale; Stanford’s Gavin Wright concludes that cotton “could be cultivated efficiently at any scale.”
Furthermore, if slavery was necessary to produce cotton, supply should have collapsed following Emancipation. Yet the price of cotton returned to pre-war levels by the end of the 1870s.

Economists have documented an enormous increase in standards of living beginning around 1800. A plot of world per capita GDP over time shows a true hockey stick-shaped takeoff then. Economic historian Deirdre McCloskey calls this the Great Enrichment. Humanity prospered as slavery was abolished.

People long believed that commanding and enslaving others was the key to prosperity. This was incorrect. Professor McCloskey argues that thousands and thousands of improvements, discovered through the imagination and intelligence of ordinary people, produced the Great Enrichment. Prosperity results when freedom unleashes everyone’s full potential.

A society organized on commands and whips requires obedience, which in turn requires keeping the underlings from fully recognizing their humanity. Once Harriet Tubman recognized her value as a human being, she was never going to be anyone’s slave. Slaves repeat the tasks they are commanded to perform as opposed to discovering how to perform these tasks more efficiently.

The political philosophy of liberalism, founded on the moral equality of all people, gave birth to capitalism. Liberalism implies both that government must serve the citizens and that no human being should be a slave. That capitalism produced modern prosperity and ended slavery is no accident.

Inflation topped 7% in December, the highest level in forty years. The Biden administration has tried blaming rising prices on corporate greed with antitrust enforcement as a remedy. Does this make economic sense?

We must first consider what inflation is. Economists define inflation as an increase in the general price level, typically measured by the rate of change in the Consumer Price Index (CPI). Increases in the prices of some goods while others remain unchanged raise the CPI but are changes in relative prices. Relative price changes result from changed economic conditions, as with lumber in 2020.

A “pure” inflation is an equal percentage increase in all prices, including wages and salaries. Inflation also involves an expectation of continued price increases. Pandemic-related production disruptions might cause price increases but not continued increases; prices should stabilize once production resumes and back orders are filled.
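The distinction can be illustrated with a toy index calculation. The three-good basket, prices, and weights below are invented purely for illustration; a single relative price jump and a uniform price rise both lift the measured index.

```python
# Simplified CPI sketch with a hypothetical three-good basket.
# Weights and prices are invented for illustration only.

def price_index(base_prices, new_prices, weights):
    """Weighted average of price relatives, scaled so the base year = 100."""
    return 100 * sum(w * new / old
                     for old, new, w in zip(base_prices, new_prices, weights))

base = [10.0, 20.0, 5.0]    # base-year prices of three goods
weights = [0.5, 0.3, 0.2]   # expenditure shares, summing to 1

# Relative price change: only the first good's price rises, by 40%.
relative = [14.0, 20.0, 5.0]
print(round(price_index(base, relative, weights)))  # 120

# Pure inflation: every price rises by the same 5%.
pure = [p * 1.05 for p in base]
print(round(price_index(base, pure, weights)))      # 105
```

Both cases raise the measured index, which is why separating relative price shocks from true inflation requires asking whether most prices are rising, not just the index itself.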

Is the past year’s CPI increase due to relative price changes or true inflation? We have clearly had some relative price increases, for things like lumber and new and used cars (a 37% price increase over the past 12 months). But many CPI components have increased by 5% or 6%. Most prices are rising.

Interest rates provide the best gauge of future inflation. They are based on the decisions of thousands of persons, each investing her own money, and are superior to any expert’s forecast. Florida Atlantic University economist Will Luther calculates that the bond market currently forecasts annual inflation of 2.6 percent over the next five years and 2.2 percent over the next ten. Markets expect inflation to moderate but not disappear.
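One common way to extract such a market forecast (not necessarily Luther's exact method) is the "breakeven" rate: the nominal Treasury yield minus the yield on inflation-protected securities (TIPS) of the same maturity. The yields below are hypothetical, chosen only to match the five-year figure cited above.

```python
# Breakeven inflation sketch. Yields are hypothetical, picked to roughly
# match the five-year market forecast cited in the column.

def breakeven_inflation(nominal_yield_pct, tips_yield_pct):
    """Expected annual inflation (%) implied by a nominal/TIPS yield pair."""
    return nominal_yield_pct - tips_yield_pct

# Hypothetical 5-year yields: 2.8% nominal, 0.2% real (TIPS).
print(round(breakeven_inflation(2.8, 0.2), 1))  # 2.6
```

Because investors holding nominal bonds bear inflation risk while TIPS holders do not, the gap between the two yields reflects the inflation rate at which the two investments break even.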

Now we can turn to greed and antitrust. I will not distinguish between greed and self-interest here. Economists assume everyone acts in their self-interest; for businesses this means selling for the highest prices possible. But consumers must voluntarily purchase what businesses want to sell and competition between sellers limits prices.

Greed only explains rising prices if competition has been reduced. State business closure orders during COVID helped bankrupt thousands of small businesses. Yet the impact of these failures on the overall level of competition is likely modest.

Furthermore, reduced competition would likely generate a one-time price increase; with less competitive pressure, a business might raise prices by 5%. Since greed is not causing inflation, more aggressive antitrust enforcement will not stop inflation.

Economists across the political spectrum recognize this. Larry Summers, formerly secretary of the Treasury under President Clinton, said on Twitter: “The emerging claim that antitrust can combat inflation represents ‘science denial.’”

Precedent exists for using inflation fears to justify unrelated policies. Until the 1970s, Washington regulated railroads, trucking and airlines. This was not just safety regulation but control of the number of firms, routes of operation, and prices. Economic research documented the harms of this regulation: higher prices, reduced productivity, and poorer transportation options.

The principle of concentrated benefits and dispersed costs from public choice economics explained the persistence of such regulations. The companies and their unions, including the powerful Teamsters, benefited from regulation. Consumers faced an enormous total cost but small individual costs. Regulation was crucial to the industry but a minor issue for consumers.

Then something amazing happened. America faced high inflation and Senator Edward Kennedy sought an issue to boost his presidential hopes. Future Supreme Court Justice Stephen Breyer was on the senator’s staff and knew about the economic research. Senator Kennedy held widely publicized hearings touting deregulation to offset the pain of inflation. President Carter got on board and by 1980, all these industries were deregulated.

Attributing causality is virtually impossible in public policy. But most histories of deregulation cite Senator Kennedy’s hearings as highly important in the process. Deregulation as a cure for inflation is economic silliness. Yet confusion over inflation may have enabled beneficial policy change.

Policymakers, I suspect, remember this lesson. Expect politicians to try selling their pet projects as fighting inflation. But as economist Milton Friedman famously said, “Inflation is always and everywhere a monetary phenomenon.” Alleged inflation remedies should be evaluated on their own merits.


Barry Bonds and Roger Clemens were recently not elected to the Baseball Hall of Fame in their final year of eligibility, reportedly over their use of performance enhancing drugs (PEDs). The case illustrates some of the economics of rules and the nature of “positional goods.”

The case for both Bonds and Clemens based on performance is overwhelming. Bonds is the career and single-season leader in home runs and won seven Most Valuable Player Awards. Clemens won 354 games, ninth all-time, and seven Cy Young Awards. Each was compiling a Hall of Fame (HOF) career before ever using PEDs.

Baseball’s all-time hits leader, Pete Rose, is also not in the HOF: Commissioner Bart Giamatti banned him for life, and thus from the Hall, for betting on games while managing the Cincinnati Reds. Precedent exists to exclude greats from Cooperstown. But Pete Rose knowingly broke a rule for which a lifetime ban was a plausible penalty. By contrast, MLB never punished Bonds or Clemens for PED use. Indeed, MLB promoted the steroid-fueled home run chase between Mark McGwire and Sammy Sosa in 1998.

PEDs are one of many ways players can improve their performance. We celebrate players doing everything they can to get better and gain advantage. We find the contests compelling because the players take them so seriously and perform at such a high level. Why ban some efforts to improve performance?

Adverse health consequences provide an immediate answer. But on closer examination, other ways of gaining competitive advantage harm health and are not banned. Most NFL offensive linemen weigh over 300 pounds. Every extra pound increases stress on the heart, and many players never lose this weight when finished playing. Other routes to advantage adversely impact work-life balance. Aspiring tennis stars, for example, have long sacrificed any semblance of normal teen years.

Why do leagues need rules to keep players from potentially damaging their health? Economics provides insight. Individually, players gain an advantage by taking PEDs. Yet only the top 750 players make the big leagues, and only stars sign $100 million-plus contracts. Two thousand aspiring big leaguers could take PEDs and hit baseballs farther, but the number of available roster spots would not increase.

If most players take PEDs, or if all football linemen bulk up, the competitive advantage cancels out. Roster spots are what economist Robert Frank labeled “positional goods”: cases where one’s position relative to others is what matters. With widespread steroid use, players incur health risks without gaining a competitive advantage, yet cannot stop without risking their roster spots.

League-wide rules are necessary to regulate PEDs and other undesirable efforts to achieve an edge in the quest for positional goods. And with a rule in place, penalties are justified because everybody knows the rules.
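The incentive trap described above can be sketched as a simple prisoner's-dilemma-style game between two rivals competing for one roster spot. The payoff numbers below are invented purely for illustration.

```python
# PED use as a two-player game for one roster spot. EDGE and HEALTH_COST
# are hypothetical utility units, chosen only to illustrate the incentive.

EDGE, HEALTH_COST = 3, 1

def payoff(me_dopes, rival_dopes):
    base = 5                     # value of competing for the roster spot
    if me_dopes and not rival_dopes:
        base += EDGE             # edge over a clean rival
    if rival_dopes and not me_dopes:
        base -= EDGE             # disadvantage against a doping rival
    if me_dopes:
        base -= HEALTH_COST      # health cost is paid no matter what
    return base

# Doping is the best reply whether or not the rival dopes...
print(payoff(True, False), payoff(False, False))   # 7 5
print(payoff(True, True), payoff(False, True))     # 4 2
# ...so both dope, and each ends up worse off (4) than if both stayed clean (5).
```

Since each player's individually rational choice leaves everyone worse off, only a league-wide rule with penalties can hold the clean outcome in place.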

What types of efforts at improvement should be banned? Given the numerous ways to improve performance, no alternative exists to letting leagues decide. Owners, management and players have their interests (both physical and financial) at stake and can best balance benefits and costs. When steroid use began to diminish fan interest, MLB implemented penalties for positive tests.

Because MLB did not implement penalties for steroid use until 2004, Mr. Bonds and Mr. Clemens were arguably not breaking the rules. Yet steroids were already technically banned in baseball in the 1990s. I say technically because Congress banned anabolic steroids in 1990, bringing them under MLB’s rules against possession and use of illegal drugs.

Some see PEDs in baseball as demonstrating the futility of externally imposed prohibitions. Prohibitions can always be evaded. Psychologists know that behavior problems go unchecked until a person recognizes the problem and the need to change. Only a commitment from players and owners will produce prohibitions with teeth.

I frequently extol freedom, but individuals will sometimes want to sacrifice some freedom to regulate competition for positional goods. Only clear rules demarcate vigorous competition from impermissible advantage. Baseball never punished Mr. Bonds or Mr. Clemens for PED use, so I disagree with HOF voters doing so.


Over 100,000 Americans await organ transplants and over 6,000 die annually while waiting. From an economic perspective the decades-long organ shortage has a simple cause: paying organ donors is illegal. Price controls predictably produce shortages.

Payment for organs has been outlawed since at least 1948. The 1984 National Organ Transplant Act established the Organ Procurement and Transplantation Network to allocate donations by uniform criteria. Around 40,000 transplants occur annually, with over 80 percent of organs coming from deceased donors.

A price ceiling is a legal maximum price imposed by government on a market. Selling for more than the set price becomes illegal. If the ceiling is below the market price (called the equilibrium price) and the law is enforced (price ceilings often spawn black markets), a shortage ensues. Rent control is an example of a price ceiling.

Prohibiting sales sets the price at zero, which is necessarily below the equilibrium price. The waiting lists and deaths are real-life results of the shortage. Lifting the ban on payment should increase the number of organs available for transplant.
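The arithmetic of the shortage can be sketched with stylized linear supply and demand curves. The numbers below are invented, loosely scaled to the waiting-list and transplant figures cited in this column.

```python
# Shortage from a price of zero, with made-up linear curves loosely scaled
# to the column's figures (about 100,000 waiting, about 40,000 transplants).

def quantity_demanded(price):
    return max(0, 100_000 - 500 * price)   # demand falls as price rises

def quantity_supplied(price):
    # 40,000 altruistic donations, plus more as payment rises (hypothetical)
    return 40_000 + 200 * price

banned_price = 0   # a prohibition on payment acts as a price ceiling at zero
shortage = quantity_demanded(banned_price) - quantity_supplied(banned_price)
print(shortage)    # 60000: demand of 100,000 minus supply of 40,000
```

Any legal price below the point where the two curves cross leaves demand exceeding supply; fixing the price at zero simply makes the gap as large as it can be.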

Buying and selling organs has been illegal and deviating from customary practice can make people cringe. This discomfort, though, is temporary; we should require a better reason to continue a deadly prohibition.

Harvard’s Michael Sandel in What Money Can’t Buy notes two objections to “commodification,” or converting something into a good to be bought and sold. The first is fairness, or concern over being forced to transact due to necessity: “A peasant may agree to sell his kidney or cornea to feed his starving family, but … he may be unfairly coerced … by the necessities of his situation.” Fairness also involves whether only the rich could afford organs.

The second objection is corruption, which holds that, “certain moral and civic goods are diminished if bought and sold.” Donation is no longer a noble sacrifice but a cash transaction. And monetary payment could crowd out voluntary donations.

Whether desperate acts are voluntary is debatable. We sometimes do things because we “have” to, even if we acknowledge that no one pointed a gun at us. True voluntariness may require more than an absence of coercion.

Yet money often improves desperate circumstances. Imagine a 35-year-old man, the sole supporter of his family, dying suddenly and unexpectedly without life insurance. In addition to the emotional toll, the family likely faces financial hardship. Payment for the man’s organs could help support the family. Is making the family rely on charity better?

Would only the rich get transplants if payments were allowed? Not necessarily. The Organ Network could make payments and still use the current criteria for allocating organs. And any wealthy persons buying organs for themselves would come off the waiting list.

The corruption argument is based, I think, on a view of money as inherently corrupt. But money is just a medium of exchange, letting people buy whatever they choose. Money is a tool of voluntary market exchange, and exchange respects the moral value of all persons.

I could try to obtain a kidney for transplant in three different ways. First, I could say, “I need a kidney, you have one to spare, let me have one.” I could cry, beg, and guilt the person into donating. Second, I could try to take one by force. Third, I could offer to give or do something in exchange.

Beg, steal, or trade. Personally, I think that trade is the best way to proceed. Money just helps people reach mutually agreeable exchanges.

If you oppose payment for organs, remember that simply prohibiting payment saves no lives. Suppose a person’s next of kin will not allow organ donation but would agree if offered payment. Without payment the organs go to the grave and persons awaiting transplant remain desperately ill.

Opponents should offer a solution to the shortage, not merely block one they dislike. Marketing campaigns for organ donation have not worked. Should the government compel the harvesting of organs from the deceased? I think that paying donors provides a great solution to the shortage.
