Citizens’ misunderstanding of inflation is not their fault. An army of so-called experts aligned with governments is trying to convince them that inflation is caused by anything and everything except the only thing that can make aggregate prices rise in unison: the devaluation of the purchasing power of the currency.
Sound money is as important as independent institutions. It protects the citizen from the perverse incentives of governments to pass their imbalances to the population, and it is essential to guarantee the essence of liberty, which is economic freedom.
The middle class in all developed economies is disappearing through a constant process of erosion of its capacity to climb the social ladder. This is happening in the middle of massive so-called stimulus plans, large entitlement programs, endless deficit spending, and “social” programs.
The reality is that those who blame capitalism and free markets for the constant erosion of the middle class should think again. Massive money printing and the constant financing of ever-larger governments with newly created currency have nothing to do with capitalism or the free market; they are the imposition of a radical form of statism disguised as an open economy. Citizens who hail the latest government stimulus plan fail to understand that the government cannot give you anything it has not taken from you first. You get a $1,000 check, and you pay for it three times over in inflation and real wage destruction. That is why a group of economists and experts has launched the Honest Money Initiative: to stop the destruction of the fabric of the economy, the middle class, and businesses through the constant debasement of the currency that governments monopolize.
Citizens rarely understand inflation. Many believe that inflation is equivalent to rising prices and therefore blame those who place the price tag on a product for the loss of purchasing power of the currency. However, inflation is caused by more units of currency chasing the same quantity of goods and services. Printing money in excess of demand is the only thing that makes prices rise in unison. If a price rises for an exogenous reason while the quantity of currency remains the same, all other prices do not rise with it.
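The mechanism can be summarized with the textbook equation of exchange; the notation below is the standard one and is offered only as an illustrative sketch, not as something drawn from this article:

\[ M \cdot V = P \cdot Q \]

where M is the quantity of money, V its velocity of circulation, P the aggregate price level, and Q real output. Holding V and Q roughly constant, an increase in M must show up as an increase in P; conversely, if M is held constant, a rise in one price leaves less money available for everything else, so all prices cannot rise in unison.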
New documents obtained by Rep. Jim Jordan show that an undercover FBI employee reported on a subject who attended an ‘SSPX-affiliated’ church in California. The documents also show that FBI offices in Los Angeles and Portland were involved in the creation of the FBI’s memo that described Traditional Catholics as potential domestic terrorists.
Can’t Blame the FIB. I know some Catholics and occasionally associate if I don’t see them first. They are a shifty bunch. Always talking about God and heaven being “up there” and stuff. Everyone knows government is the real god. The real god has his/her/zer own disciples. FIB, CIA, DEA, TSA, FCC, IRS, CDC, FDA, EPA…Fauci. Non-believers also have their own special place waiting for them. FEMA.
WASHINGTON, D.C. — FBI Director Christopher Wray has seemingly been caught lying to Congress about the extent of his agency’s surveillance of Traditional Catholics in the United States, particularly the Priestly Society of St. Pius X (SSPX).
House Judiciary Committee Chairman Jim Jordan (R-Ohio) released a letter addressed to Wray earlier today announcing that documents he obtained from the FBI last month indicate that its field office in Richmond, Virginia, coordinated with two other offices across the country to spy on Traditional Catholics.
The finding stands in contrast to Wray’s previous testimony that an FBI memo describing Traditional Catholics as potential domestic terrorists was only utilized at the one location in Richmond.
“On July 25, 2023, the FBI produced a version of the Richmond document with fewer redactions than the two previous versions it had produced,” Jordan’s letter begins. “This new version shows that the FBI’s actions were not just limited to ‘a single field office,’ as you testified to the Committee. The document cited reporting from an ‘FBI Portland liaison contact with indirect access’ who informed on a ‘deceased [Racially or Ethnically Motivated Violent Extremist (RMVE)] subject’ who had ‘sought out a mainline Roman Catholic community’ and then ‘gravitated to [Society of Saint Pius X (SSPX)].’”
Jordan continued by noting that the newly obtained document also states that “an FBI undercover employee with ‘direct access’ reported on a subject who ‘attended the SSPX-affiliated [redacted] Church in [redacted] California, for over a year prior to his relocation.’”
It is “most concerning,” Jordan exclaimed, that “it appears that both FBI Portland and FBI Los Angeles field offices were involved in or contributed to the creation of FBI’s assessment of traditional Catholics as potential domestic terrorists.”
The original eight-page document Jordan is referencing was first leaked to the public by FBI whistleblower Kyle Seraphin in February. The document prompted a near-universal outcry among Catholics across the country when it was released. On July 17, Jordan requested that the FBI hand over a “less redacted” version of the memo for the committee before 12 p.m. on July 25 or else Wray would be held in contempt of Congress. The details of that memo are what Jordan has presented today.
Victoria Nuland has gone to Africa. Gone to Africa to talk some sense into the Nigeriens and convince them to return to the shackles of Paris. Gone to Africa to harvest blood diamonds and cobalt. Gone to Africa to masturbate on Gaddafi’s grave. Gone to Africa to trade glass beads for slaves.
Victoria Nuland has gone to Africa to help the bank boys keep their dicks in the mother continent, to help keep the siphon tubes stuck into the mother continent, to help keep the Russians and Chinese out of the mother continent. Traveling around the mother continent in the mask of a medieval plague doctor, collecting the fat leeches and replacing them with new ones.
The AFRICOM emblem looks like a vagina, and Victoria Nuland looks like an involuntary pelvic exam. She makes me feel like a lost kid in a cornfield at dusk. She has mushroom clouds in her eyes.
Soon Victoria will leave Africa and go home, back to the land where corporations are people and flags are gods, where the presidents have dementia and the poor have college degrees, where alienation flows like water and bullet casings fall like rain, where people wear airpods to mute the screams of their hearts and the homeless, where the middle class talk only to their Uber drivers and strangers they’ve mistaken for their Uber drivers, where soldiers march for fascism while flying rainbow flags, where war is a lucrative industry and journalism is a crime.
She’ll come home to a house that no millennial will ever be able to afford, into the loving embrace of her blood-spattered husband. They will make freakish, horrifying love that night, and she will fall asleep and dream of passing out cookies while the world turns to fire.
I had a dream, too. One of the strange ones that always come true. A pentagon was smashed to pieces by a giant black fist. I don’t know what it means or what future it portends, but I do know Victoria Nuland wasn’t passing out any damn cookies.
Robbie Robertson, member of the rock group “The Band,” attending the Rock and Roll Hall of Fame in New York City, 2000.
Jaime Royal “Robbie” Robertson OC[1] (July 5, 1943 – August 9, 2023) was a Canadian musician.[2] He is recognized for his work as lead guitarist for Bob Dylan in the mid-late 1960s and early-mid 1970s, as guitarist and songwriter with the Band from their inception until 1978, and for his career as a solo recording artist.
Robertson was born Jaime Royal Robertson[5] on July 5, 1943. He was an only child. His mother was Rosemarie Dolly Chrysler, born February 6, 1922.[6] She was Cayuga and Mohawk,[7] raised on the Six Nations Reserve southwest of Toronto, Ontario. Chrysler lived with an aunt in the Cabbagetown neighbourhood of Toronto and worked at the Coro jewellery plating factory. She met James Patrick Robertson at the factory and they married in 1942.[8]
Rosemarie and James Robertson continued to work at the factory where they met. The family lived in several homes in different Toronto neighbourhoods when Robbie was a child.[9]: 55 [10]: 65 He often travelled with his mother to the Six Nations Reserve to visit her family. It was here that Robertson was mentored in playing guitar by family members, in particular his older cousin Herb Myke. He became a fan of rock ‘n’ roll and R&B through the radio, listening to disc jockey George “Hound Dog” Lorenz play rock ‘n’ roll on WKBW in Buffalo, New York, and staying up at night to listen to disc jockey John R.’s all-night blues show on WLAC, a clear-channel station in Nashville, Tennessee.[11]: 56 [12]: 65–66
When Robertson was in his early teens, his parents separated. His mother revealed to Robertson that his biological father was not James but Alexander David Klegerman, a Jewish man she had met while working at the Coro factory; she had been with Klegerman while James Robertson was stationed in Newfoundland with the Canadian Army, before the marriage.[13] Klegerman became a professional gambler and was killed in a hit-and-run accident on the Queen Elizabeth Way. After telling Robertson, his mother arranged for the youth to meet his paternal uncles, Morris (Morrie) and Nathan (Natie) Klegerman.[14][15][16]
When Robertson was fourteen, he worked two brief summer jobs in the travelling carnival circuit, first for a few days in a suburb of Toronto, and later as an assistant at a freak show for three weeks during the Canadian National Exhibition. He later drew from this for his song “Life is a Carnival” (with the Band) and the movie Carny (1980), which he both produced and starred in.[17]
The first band Robertson joined was Little Caesar and the Consuls, formed in 1956 by pianist/vocalist Bruce Morshead and guitarist Gene MacLellan. He stayed with the group for almost a year, playing popular songs of the day at local teen dances. In 1957 he formed Robbie and the Rhythm Chords with his friend Pete “Thumper” Traynor (who would later found Traynor Amplifiers). They changed the name to Robbie and the Robots after they watched the film Forbidden Planet and took a liking to the film’s character Robby the Robot. Traynor customized Robertson’s guitar for the Robots, fitting it with antennae and wires to give it a space age look. Traynor and Robertson joined with pianist Scott Cushnie and became The Suedes. At a Suedes show on October 5, 1959, when they played CHUM Radio’s Hi-Fi Club on Toronto’s Merton Street, Ronnie Hawkins first became aware of them and was impressed enough to join them for a few numbers.[10]: 66 [11]: 56–57 [18][19]
Sports, even collegiate sports, is a business. A business with history, pageantry, and tradition, but a business nonetheless. Institutions capable of winning on and off the field will inspire new generations of tradition and rivalries. Those that can’t will end up like Sewanee, which once had the most powerful football program in the South. This isn’t new to the modern era; it’s a reality inherent to competition.
For college football fans, it’s already been a wild August week before the first kickoff.
Reminiscent of the Europe of old, and, hopefully, the America of the future, the collegiate athletic landscape in the last several years has witnessed a massive redrawing of conference kingdom borders. The most powerful empires are the SEC and the Big Ten, with the former adding the Universities of Texas and Oklahoma and the latter pursuing manifest destiny in the West with the addition of Southern California and UCLA in 2022, and Oregon and Washington this past week.
This shift in borders coincided with a negotiation of television rights. Disney (which owns ESPN and ABC) secured a monopoly on the SEC by adding full broadcast rights for its games to a preexisting ESPN arrangement that already included streaming rights, a deal worth $3 billion over ten years. The Big Ten was able to package both broadcast and streaming rights with Fox, CBS, and NBC for $7 billion over seven years. Factoring in other revenue sources, industry analysts expect Big Ten and SEC schools to pull in $70 million per school starting in 2024.
The Big 12 responded with its own additions after losing two of its founding members, adding Colorado, Arizona, Arizona State, and Utah this past week. These acquisitions were the result of a successful television deal of their own with ESPN and Fox that is expected to net the Big 12’s programs over $30 million a year.
The loser of this zero-sum game for territory is the Pac-12, the self-named “Conference of Champions,” which now finds itself with four remaining members. It now seems inevitable that the Pac-12 will go the way of the once-proud Southwest Conference, and the less proud Western Athletic Conference, into the dustbin of pigskin history.
Understandably, these major disruptions to a sport fueled by the dynamics of hate-filled rivalries and proud tradition have resulted in a cascade of digital denunciations about the damaging costs of blind greed, the reliable boogeyman for anyone unhappy with particular economic outcomes.
But what would an alternative universe where greed does not exist look like for the future of sports? The boom in sports television-programming rights is a market response to a radical transformation in entertainment consumption. The rise of streaming services has turned sports programming into premium real estate for advertising, as it is one of the few entertainment options still watched live. Sports leagues have leverage while Hollywood actors and writers are on strike. And while sports channels are cutting back on analyst-driven content in an age of podcasts and other independent digital content, networks like Fox are investing in the creation of new sports leagues to help fill their time slots.
In The Anti-Capitalistic Mentality, Ludwig von Mises wrote at length about the extent to which capitalism and the pursuit of profit rile the prejudices of various interest groups upset at the ways changing consumer tastes create new challenges. As Mises explains, conservative anticapitalists lament that legacy powers can be ruined if they fail to meet the changing demands of consumers, while progressives condemn the riches that are awarded to those that triumph.
The Pac-12 is a perfect illustration of this dynamic at work.
While it is easy to portray Big Ten and SEC officials as the villains of conference realignment, a look at history paints a picture in which the death of the Conference of Champions resulted from its own entrepreneurial failure.
As Stewart Mandel documented for The Athletic, the seeds for the now Pac-4 were planted over a decade ago.
Consider how the IRS recently pried open PayPal, Venmo, and Cash App accounts with transactions over $600. Consider also that the Supreme Court just ruled that the IRS can investigate your bank accounts without notification in some circumstances, including if you are a friend, family member, or associate of someone who owes the IRS.
“Experts” at the Federal Reserve and other central banks proudly broadcast the potential “financial inclusion” that could be achieved with a central bank digital currency (CBDC). In the Fed’s main CBDC paper, “Money and Payments: The U.S. Dollar in the Age of Digital Transformation,” they make it clear: “Promoting financial inclusion—particularly for economically vulnerable households and communities—is a high priority for the Federal Reserve . . . a CBDC could reduce common barriers to financial inclusion.”
The term has a ring to it that signals support for progressive goals. “Inclusion” is part of the Orwellian trio of terms “diversity, inclusion, and equity,” which, as Dr. Michael Rectenwald writes, means “surveillance, punishment of the ‘privileged,’ sacrifice of national citizens to global interests, and the labeling as ‘dangerous’ and marking for (virtual) elimination those supposed members or leaders of ‘hate groups’ who oppose such measures.” The central banks’ use of “financial inclusion” involves the same reversal of meanings.
Financial Inclusion and Unbanked Households
Consider that a retail CBDC would be like having a bank account with the Federal Reserve, even if it is intermediated by another bank. There is a lot of guesswork about how a CBDC will be implemented, but some say that it will not just be like having a bank account with the Fed, but that it could be exactly that.
Either way, if a CBDC were genuinely aimed at financial inclusion, it would offer something to those who have chosen to forgo a bank account entirely. This “unbanked” population constitutes about 5.4 percent of US households according to a 2021 Federal Deposit Insurance Corporation (FDIC) survey. The survey asked each household why they do not have a bank account, and the responses indicate that minimum balance requirements, privacy, trust, and fees are the most significant factors.
Figure 1: Unbanked households’ reasons for not having a bank account, 2021 (percent)
The critical question, then, is this: what does a CBDC offer these households that physical cash and other nonbank financial services (e.g., check cashing, money orders, prepaid cards) do not?
Privacy (or Lack Thereof)
A CBDC undermines privacy. Whatever a central bank might say about privacy protection with a CBDC can be safely dismissed. The Fed paper, for example, says, “Protecting consumer privacy is critical. Any CBDC would need to strike an appropriate balance, however, between safeguarding the privacy rights of consumers and affording the transparency necessary to deter criminal activity.” We should not conflate the characteristics of a CBDC with those of cryptocurrencies in general, which offer anonymity and pseudonymity to their users.
John Kiriakou looked up from his desk at CIA headquarters and was stunned to see The Washington Post investigative reporter Bob Woodward walking through the secure area without an agency escort. On another occasion, Kiriakou, who rose at the CIA to become executive assistant to the deputy in charge of operations, the spy agency’s dark activities, saw CNN host Wolf Blitzer wandering unattended through the same area, despite the CIA’s ban on communicating with the media.
“We like to think there’s a Chinese wall between the CIA, especially senior CIA officials, and the American media,” Kiriakou recently told the London Real podcast. “In fact, they’re in bed together.”
Kiriakou later became a well-known whistleblower. He was the only CIA employee who went to prison for the agency’s torture program, sentenced in 2013 to 30 months behind bars—not because he himself tortured anyone, but because he told an ABC News reporter about the waterboarding to which the agency subjected a war on terror captive.
These days, Kiriakou is outraged for a different reason: the tight connection between the CIA and the media elite. All too often, he says, the national security journalists who are granted access by Langley can be trusted to see world affairs—and the U.S. empire’s dominant role in them—the way the CIA wants them to. Whether it’s the war in Ukraine, tensions with Russia and China, or U.S. military exploits in the Middle East and Africa, coverage in The New York Times, The Washington Post and on television reflects the slanted view of the national security establishment.
When Kiriakou was a CIA official, he says, the agency leaked regularly to The Washington Post correspondents Woodward, David Ignatius and Joby Warrick—as well as “a half-dozen reporters” at The New York Times—because Langley spymasters knew they “will carry your water.”
Washington journalists who contradict the U.S. national security line—even legendary ones like Seymour Hersh, who enjoyed CIA access for many years—soon find themselves in the cold, according to Kiriakou. Hersh once worked for The New York Times and The New Yorker, but was forced to publish his exposé on the lethal U.S. raid on Osama bin Laden’s compound, which tied the alleged 9/11 mastermind’s execution more to clandestine collaboration with Pakistani intelligence than to American heroics, in the low-circulation London Review of Books. Last year, Hersh was relegated to Substack to publish his investigative report on the explosion of Russia’s Nord Stream pipeline, which blamed the act of war on U.S. Navy divers in a secret CIA operation ordered by President Joe Biden. (The New York Times still finds the sabotage a “mystery.”)
Hersh forced to self-publish? “That’s how bad it’s gotten in the United States,” Kiriakou says.
“Back in the good old days, when things were more innocent and simple, the psychopathic Central Intelligence Agency had to covertly infiltrate the news media to manipulate the information Americans were consuming about their nation and the world,” observed Caitlin A. Johnstone in MR (Monthly Review) Online. Now the CIA is the media, she ruefully concluded.
In 1977, Johnstone reminded her readers, Carl Bernstein of Watergate fame exposed the fact that the CIA supervised 400 reporters as agency “assets.” (Bernstein conveniently overlooked The Washington Post, which has a long history of coziness with intelligence. The newspaper’s current owner, Amazon founder Jeff Bezos, is a major CIA contractor.) When Bernstein’s article ran in Rolling Stone, it caused a tempest. Nowadays, nobody blinks an eye when “liberal” TV channels like CNN and MSNBC openly employ veterans of the CIA, FBI, NSA and other security agencies, such as commentators John Brennan, Jeremy Bash, Michael Hayden, James Clapper and Malcolm Nance.
Even Rolling Stone, once the voice of 1960s counterculture, which published radical and progressive writers like Tom Hayden, David Harris, Dick Goodwin and Robert F. Kennedy Jr., can no longer be trusted by free-thinking readers. RS is one of the publications vacuumed up by the upstart empire Penske Media, which also purchased Variety, Hollywood Reporter and most of the entertainment industry media, as well as New York Magazine.
Under Penske Media—run by Jay Penske, the 44-year-old known for his floppy hair, model-like looks and not much else save the fact that his father is trucking mogul Roger Penske—Rolling Stone has taken a sharp turn to the right. When not attacking Kennedy as an “anti-vaxxer” and “conspiracy”-obsessed lunatic, RS touts the bloody stalemate in Ukraine and the presidency of “boring” Biden.
Japan could never have won the war against the U.S. It could not produce enough war goods to defeat the U.S., let alone overcome the combined economic strength of the Allied powers (Hanson 2017, 303). Wartime production statistics strongly suggest U.S. war planners understood victory was inevitable long before the atomic bombing.
But Halsey maintained, “The first atomic bomb was an unnecessary experiment…. It was a mistake to ever drop it. Why reveal a weapon like that to the world when it wasn’t necessary? … [The scientists] had this toy and they wanted to try it out, so they dropped it…. It killed a lot of Japs, but the Japs had put out a lot of peace feelers through Russia long before” (qtd. in Alperovitz 1995, 331).
There were many great battles during the Second World War; however, the production battle was far and away the most important. A close examination of wartime production statistics strongly suggests that U.S. war planners understood victory over Japan was inevitable. Was the atomic bombing of Japan even necessary to win the war?
On August 6, 1945, the government of the United States dropped an atomic bomb on Hiroshima, Japan. The bomb killed sixty-five thousand Japanese instantly. Another sixty-five thousand inhabitants of Hiroshima perished in the following months. On August 9, the U.S. government dropped an atomic bomb on Nagasaki, killing thirty-five thousand instantly and another thirty-five thousand before the end of the year. Over the following decades, thousands more died from medical complications caused by the atomic bombing. In short, the U.S. government killed over two hundred thousand Japanese with atomic weapons. Fully 96.5 percent were civilians (Dower 2010, 199; Overy 2022, 790).
The atomic bombing was a watershed in history. Since the bombing, the specter of nuclear war has haunted humanity. For this reason, a survey of prominent journalists ranked the bombing as the most important event of the twentieth century (Walker 2005, 311). The controversy over the event is commensurate with its significance. Debates over the atomic bombing are waged with more ferocity and contempt than debates over almost any other historical topic.[1] Although many questions are involved, the debate almost inevitably comes down to this question: was it necessary?
Arguments over the bombing often appeal to statements from U.S. government officials. For example, President Harry S. Truman claimed the atomic bombing was “the greatest thing in history” and “saved millions of lives” (qtd. in Alperovitz 1995, 513, 517). By contrast, Admiral William D. Leahy—the highest-ranking U.S. military officer throughout the Second World War—thought the atomic bombing was unnecessary:
It is my opinion that the use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan. The Japanese were already defeated and ready to surrender because of the effective sea blockade and the successful bombing with conventional weapons…. My own feeling was that in being the first to use it, we had adopted an ethical standard common to the barbarians of the Dark Ages. I was not taught to make war in that fashion, and wars cannot be won by destroying women and children. (1950, 513–14)[2]
Statements from government officials cannot establish whether the atomic bombing was unnecessary. Any argument that the bombing was unnecessary must be based on the facts of the war. To be sure, statements from government officials can be crucial in the search for essential facts. Still, the facts must be independently verified and interpreted.
Unfortunately, the vast literature on the atomic bombing overlooks the most important facts of the war—namely, the economic facts. At its core, the Second World War was an economic war. Economic conflict caused the war, and the economic battle was by far the most important battle.[3] It is impossible to fully understand the war in general and the atomic bombing in particular without understanding the economics of the war. This paper introduces vital economic facts about the Second World War into the literature on the atomic bombing.
The central thesis of this paper is that the atomic bombing of Japan was unnecessary. Basic wartime economic statistics show that the United States had an overwhelming economic advantage over Japan during the Second World War. The U.S. used its commanding economic position to wage a debilitating economic war against Japan. Production statistics show the U.S. economic war caused the Japanese economy to collapse. Additionally, production statistics strongly suggest that U.S. political and military leadership did not view Japan as an existential threat after 1943. Invading Japan was unnecessary for the same economic reasons that the atomic bombing was unnecessary.
The Big Economic Picture
An economic analysis of the Second World War must begin by comparing the sizes of the combatants’ territories, populations, and armed forces. All else equal, a combatant with more territory has an advantage over a combatant with less territory. A larger territory is more difficult to conquer and occupy, and it has more natural resources needed for war. As table 1 shows, the Allied powers’ home territory was 23.9 times larger than the Axis powers’ home territory. The U.S. alone was 6.3 times larger than the combined home territories of the Axis powers. The Japanese homeland was only 4.9 percent of the size of the continental United States. Even when Japanese colonial territory is considered, U.S. home territory was 4 times larger than total Japanese territory. Clearly, the U.S. had a massive territorial advantage over Japan.
The relative size of the combatants’ populations is another relevant factor in any war. All else equal, the combatant with the larger population can devote more manpower to the war effort. The total population of the major Allied powers (412.6 million) far exceeded the total population of the Axis powers (194 million). Moreover, China’s and India’s populations were 450 million and 360 million, respectively (Ellis 1993, 253). Hence, total population of all the Allies was approximately six times larger than the Axis population. As for the Pacific war in particular, the population of the U.S. (129 million) was almost twice the population of Japan (72.2 million).
As table 2 shows, nearly 56.9 million served in the Allied armed forces, while 30.4 million served for the Axis powers. And 16.4 million Americans served in the armed forces, compared to 9.1 million Japanese. As table 3 indicates, 13.8 Japanese servicemen died for every 1 American in the US-Japanese Theater. It is true that two-thirds of Japanese military deaths were due to starvation or illness (Dower 1986, 298). Still, the American kill ratio averaged five to one for the war. And it skyrocketed to twenty-two to one between March 1944 and May 1945 (Miles 1985, 134). In short, the American armed forces were much larger than their Japanese counterparts, and the Americans were far more deadly.
The U.S. thus had a significant advantage over Japan in terms of territory, population size, and servicemen. However, greater numbers do not guarantee victory. History is full of examples in which a smaller force was able to easily defeat a much larger force. Two obvious examples are the Spanish conquest of the New World and the Opium Wars. How can a smaller force prevail? The answer is superiority in war goods.
War goods are required to fight and win wars. All else equal, a force better equipped with war goods has an advantage over a force poorly equipped. To take an extreme example, a force with machine guns has an advantage over a force with wooden spears. But war goods do not fall from the sky; they must be produced. The combatant with the superior economic ability to produce war goods has an important advantage in war. Indeed, the production advantage is the decisive advantage in modern war.
Gross domestic product (GDP) is the most common measure of a country’s capacity to produce. In 1945 total Allied GDP was 5.1 times greater than total Axis GDP. U.S. GDP alone was 3.2 times greater than total Axis GDP. The combined GDP of all the other major powers—the UK, the USSR, France, Germany, Italy, and Japan—was only 94 percent of U.S. GDP. Amazingly, U.S. GDP was 10.2 times greater than Japanese GDP in 1945. Put differently, Japanese GDP was only 9.8 percent of U.S. GDP when the atomic bombs were dropped.
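As a quick arithmetic check, the two GDP comparisons above are the same fact stated two ways (this restatement is purely illustrative and adds no data beyond the figures already cited):

\[ \frac{\text{Japanese GDP}}{\text{U.S. GDP}} = \frac{1}{10.2} \approx 0.098 \approx 9.8\% \]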
Military spending is a good indicator of a combatant’s economic capacity to wage war. Table 5 shows the military spending of each major power during the war. By 1945 Allied military spending was 3 times greater than total Axis military spending. U.S. military spending alone was 1.3 times greater than total Axis GDP in 1945 and 5.7 times greater than Japan’s military spending. Shockingly, Japanese GDP was only 23.3 percent of U.S. military spending in 1945. Military-spending statistics show that the U.S. had an enormous economic advantage over Japan and all the other major powers.
Artistic rendering of political philosopher Gene Sharp in his office
This year marks the fiftieth anniversary of the publication of Gene Sharp’s The Politics of Nonviolent Action. Economist Kenneth Boulding likened Sharp’s book to The Wealth of Nations, saying, “There is a single theme of immense importance to society played in innumerable variations.” The papers that constitute this symposium explore different aspects of Sharp’s book, emphasizing its contemporary relevance.
This year marks the fiftieth anniversary of Gene Sharp’s insightful and important book, The Politics of Nonviolent Action, originally published in 1973. The papers that constitute this symposium explore different aspects of Sharp’s book, with an emphasis on its contemporary relevance. In this introduction, I provide an overview of the key themes of the book and the papers that follow.
The Politics of Nonviolent Action was a revised version of Sharp’s dissertation that he wrote while pursuing his doctorate at Oxford University.[1] His work on nonviolence was heavily influenced by Mahatma Gandhi, the focus of Sharp’s first two books (Sharp 1960, 1961). Nonviolent action refers to a range of activities aimed at bringing about change without resorting to physical and aggressive force. The theory of nonviolent action is, in reality, a theory of collective action: it seeks to understand how ordinary people can come together to use the power they possess against an opposition that possesses an advantage in physical violence. It is focused on strategic means and makes no normative judgment about whether the ends pursued are “good” or “bad.”
Sharp’s book consists of three parts, which together offer a comprehensive treatment of nonviolent action. The foundations of nonviolent action are presented in part 1, “Power and Struggle.” Sharp begins by distinguishing between two views of political power. The monolith theory treats political power as given, durable, and concentrated in the hands of the political elite. In this view, the populace is dependent on government rulers, which limits the ability of people to exercise power in controlling the state.
The alternative to the monolith theory is the pluralistic-dependency theory of political power.
The value of money shouldn’t be stabilized, because there is nothing stable about a healthy, progressing, dynamic economy. Consumer preferences, technology, natural resources, and a million other variables are constantly changing.
He said that this fallacy stems from the god complex of some economists and the desire to make economics more like physics.
Why have central banks settled on a 2 percent price inflation target? Project Syndicate asked four economists about this target and whether it is still appropriate. I’ll summarize their answers and then consider Mises’s position on “stabilization policy.”
Four Economists’ Answers to “Is 2 Percent Really the Right Inflation Target for Central Banks?”
Michael Boskin, Stanford University professor, Hoover Institution senior fellow, and former chair of the Council of Economic Advisers under George H.W. Bush, concludes that 2 percent is probably about right, mainly due to the negative consequences of a higher target. He questions whether a higher target could be maintained in a stable way, since it would come with more variation in the returns to capital, less credibility regarding the price-stability component of the dual mandate, and less restraint on government spending.
John Cochrane, who is also a Hoover Institution senior fellow, suggests that the central bank and the government should not target a price inflation rate but the price level instead. The resulting stability would give confidence to firms, investors, and government bond buyers. For Cochrane, the most important thing is maintaining stable expectations so that inflation is one less thing for people to worry about as they make their economic decisions.
Brigitte Granville, professor at Queen Mary University of London and the author of Remembering Inflation, thinks that 5 percent is a better target. She cites empirical research that shows no effect on real economic growth when price inflation is in the 5 percent range. She does caution that stability is key, however. Her reasoning for a 5 percent target doesn’t make sense and is contradictory: “[Falling to a 2 percent target] would mean further compression of real household incomes,” but she also says “a recovery in real average wages, alongside higher-than-2% inflation, would provide a much-needed boost to productivity, as it would motivate workers . . . and create incentives for more labor-substituting investment.” Your guess is as good as mine.
Finally, Kenneth Rogoff, professor at Harvard University and former chief economist of the International Monetary Fund, says that a higher target should be adopted due to nominal wage rigidities and the zero lower bound for nominal interest rates. More inflation means that employers can more easily pay workers less in real terms without having to decrease nominal wages. Also, a higher long-term price inflation rate would give the central bank more room to cut interest rates in a crisis.
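The arithmetic behind Rogoff’s nominal-rigidity point is simple; the percentages below are hypothetical and chosen only to illustrate it:

\[ \text{real wage change} \approx \text{nominal wage change} - \text{inflation rate} \]

With nominal wages frozen and inflation at 5 percent, real wages fall by roughly 5 percent; delivering the same real cut at 2 percent inflation would require a 3 percent nominal pay cut, which workers and employers strongly resist.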
Rogoff says that the ability to impose negative interest rates would allow central banks to continue to target 2 percent. He gives some radical ideas on how to do that, like “phasing out large-denomination currency notes” and “relaxing the one-to-one exchange rate between the digital- and paper-currency dollar.” (!)