Two of the study co-authors, Carolin Seele and Stefan Meldau, collect buds for the subsequent analysis of plant hormones and tannins. (Bettina Ohse)
Picture the woods in early spring — the misty, pale green appearance of trees just starting to sprout, the young deer munching their way across the sun-dappled forest floor.
But behind that placid exterior, a battle of wits is raging. And despite not having a brain or any central nervous system to speak of, the trees are pretty wily.
In a study published in the most recent edition of the journal Functional Ecology, scientists in Leipzig, Germany, describe the brilliant way that wild maple and beech trees figure out when roe deer are eating them — and enact a strategy to make sure the critters don’t return for another snack.
The stakes are high for young trees. If they can head off the threat from deer early, the saplings have a chance of catching up with their unbitten peers and making it to adulthood. But too much nibbling will make the trees stunted, or condemn them to death.
So the plants have evolved a complex set of responses specifically to deal with herbivore threats. Whenever a branch is snipped — by a deer, insect or human — trees release “wound hormones” called jasmonates. These chemicals help with the recovery process. They also play a role in interplant communication; when one plant releases jasmonates, its neighbors start to ramp up their defenses against disease and insect attacks as well. It’s like a forest-wide alarm system.
But the trees studied in Leipzig seemed able to recognize specific threats, and tailor their response accordingly. When a roe deer was eating their branches, the trees released a second set of chemicals: first the hormone salicylic acid, then bitter chemicals called tannins. The salicylic acid boosts protein production, allowing the trees to regrow what was lost, and the tannins make the trees distasteful to deer, which prevents further snacking.
“On the other hand, if a leaf or a bud snaps off without a roe deer being involved, the tree stimulates neither its production of the salicylic acid signal hormone nor the tannic substances. Instead, it predominantly produces wound hormones,” Bettina Ohse, lead author of the study, explained in a statement.
To test whether this was truly a tailored response to a roe deer threat, the scientists in Leipzig attempted to do some outsmarting of their own. They simulated roe deer snacking by clipping the trees, then trickling deer saliva onto the broken branches with a tiny pipette. The fake deer attack sparked the same defensive response as an actual bite from a deer, suggesting to the scientists that trees are able to recognize deer saliva and respond to it. Now, Ohse and her team are performing the experiment on other tree species, to see whether some have better defensive mechanisms than others.
Keep that in mind the next time you consider matching wits with a tree.
Sarah Kaplan is a reporter for Speaking of Science.
ONE OF THE most shocking things about Thursday’s announcement of the Equifax data breach is the sheer scale of the numbers involved. Particularly the Social Security numbers. Yes, there have been plenty of large data breaches before—5 million SSNs revealed in a Kansas Department of Commerce leak in July, 80 million in the notorious 2015 Anthem health insurance breach—but with Equifax’s revelation that 143 million Americans may have had their SSNs stolen (along with other sensitive personal information), security experts are pressing for a fundamental reassessment in how, and why, we identify ourselves.
Considered along with the data stolen from various other breaches, hacks, and leaks, “it’s a safe assumption that everyone’s Social Security number has been compromised and their identity data has been stolen,” says Jeremiah Grossman, the chief of security strategy at the defense and threat monitoring firm SentinelOne. “While it may not be explicitly true, we have to operate under that assumption now.”
SSNs, which have been around since the 1930s, have only one intended purpose: to track US citizens’ earnings and contributions to the Social Security program. (In an uncanny twist, the Social Security Administration itself sometimes uses Equifax services to help verify a person’s identity during the process of setting up a “My Social Security” account, an SSA spokesperson told WIRED on Friday. But the Administration doesn’t share Social Security numbers with Equifax.) Collecting SSNs for other purposes is generally legal, but the Social Security Administration has no involvement in the wider use of the numbers. “The card was never intended to serve as a personal identification document,” the Administration says on its website. “The universality of SSN ownership has in turn led to the SSN’s adoption by private industry as a unique identifier. Unfortunately, this universality has led to abuse.”
Problems stem from a number of places. Your Social Security number is supposed to be kept secret, which is an increasing challenge in the digital era. And unlike other, similar secrets (like credit card numbers and passwords), SSNs are extremely difficult to change. The Social Security Administration can issue you a new one in extreme cases of identity theft or abuse. Even if you are able to alter your SSN, though, so many institutions already have your original number on file that criminals can often successfully leverage the stolen information for years. On top of all of that, the new number you receive remains tied to the old one.
“The SSN is used for purposes entirely unrelated to its original purpose. That almost always leads to problems,” says Marc Rotenberg, president of the Electronic Privacy Information Center, which has been advocating for SSN usage reform for more than two decades. “Congress needs to step up and hold hearings. We need laws that limit the collection and use of SSNs. And we need to penalize companies that collect SSNs but can’t protect [them].”
The conventional wisdom about SSN security, which is actually pretty wise, is to limit how often you give your information out. Some organizations that ask for your SSN can still interact with you without it, and sometimes there are ways around providing it. (For example, some utilities like internet providers won’t require your SSN if you pay them a deposit to secure your account.) But too often these measures are inconvenient or impractical, and there are still numerous situations in which it is impossible to avoid submitting your SSN, like on tax forms or background checks. Some regulatory initiatives have had success curtailing SSN distribution, like this week’s Center for Medicare Services announcement that SSNs will be removed from Medicare benefits cards. But that step alone took years to implement.
Experts across numerous privacy and security fields agree that the solution to the over-collection and over-use of SSNs isn’t one particular replacement, but a diverse array of authentications like individual codes (similar to passwords), biometrics, and even physical tokens to create more variation in the ID process. Some also argue that the government likely won’t be the driving force behind the shift. “We have a government that works at a glacial pace in the best of times,” says Brenda Sharton, who chairs the Privacy & Cybersecurity practice at the Goodwin law firm, which has worked on data privacy breach investigations since the early 2000s. “There will reach a point where SSN [exposure] becomes untenable. And it may push us in the direction of having companies require multi-factor authentication. Change may come from enterprise and private companies responding to the threat by requiring additional identifiers.”
Health care companies, for instance, could use a different system from education, which could use a different approach than financial institutions. If credit reporting agencies like Equifax use different identifiers than SSNs, your electric company and your wireless provider could ask for those identifiers to run background checks. And if this new identifier were easy enough to change (unlike SSNs), breaches, leaks, and other unintended exposures would be less consequential.
“The whole SSN as identifier regime needs to be scrapped,” says Eduard Goodman, global privacy officer at the identity theft protection firm CyberScout. “As we see more and more issues with the centralization of data, different schemes for different uses—biometrics for in-person interactions/transactions, some form of advanced encryption or blockchain technologies online. The solutions are already in front of our eyes.”
Having a few identifiers to keep track of would be more complicated for consumers than the current system, but it would have numerous benefits and would still be less to manage than the tangle of usernames and passwords that exist online. “Can I imagine a world where we don’t have this one identifier floating around?” SentinelOne’s Grossman asks. “We have it on the web. We don’t log into Facebook or LinkedIn or Google with our social security number, so I can imagine that world. We actually live it online.”
The personal security situation on the internet as it stands now is certainly fraught, but it would be possible for organizations to implement strong and diverse authentication factors that cut down on the dramatic exposure that currently exists with SSNs. Even simply mailing customers an additional PIN has allowed the Internal Revenue Service to reduce identity theft-related tax fraud. “It’s kind of a watershed moment, but whether and what kind of changes companies implement will depend on their business model and how much of a hit they take” after data breaches, Goodwin’s Sharton says. “These incidents can be reputationally devastating.”
The impacts of Equifax’s breach could push the company to advocate for new identifiers and authenticators in the credit reporting and financial sectors. Everyone with a Social Security number — which is to say, hundreds of millions of people in the US — is counting on a change.
What happens when white supremacists take genetic ancestry tests (GAT)? One might think that they would avoid GAT, since genetics has decisively shown the common African ancestry of all people. Furthermore, emerging research on consumers of GAT suggests that white individuals seek them out to search for some kind of exotic, but unknown ethno-racial heritage to spice up their personal histories. However, Jo Phelan and colleagues (Phelan et al. 2014) have shown that GAT encourage people to understand race and racial differences as genetically determined. Thus, perhaps white nationalists would be interested in using GAT to police the boundaries of their community. As John Law, a member of the National Vanguard, said in a 2006 podcast: “Who’s White? . . . Non-Jewish people of wholly European descent. No exceptions.”
Established in 1996, Stormfront has been an online forum for users to ruminate about racial differences, welfare leeches, immigrant invasions, and reviving eugenic social policy. Stormfront posters often assert they are not racists, but are rather fighting to preserve a White Western culture being undermined by a globalized, multicultural society. Fine, but posts overflow with outrageously racist slurs. White nationalists use the site to seek community and counsel, complain about people of color, and even seek dating advice:
I’ve started dating a woman who’s really smart, pretty, funny, and cool . . . except that she mentioned that some great-great relative . . . was Native American. . . . It is bugging me . . . that there is that Native American DNA stuck in her gene pool now and that it may re-emerge from time to time. Am I being overly critical?
So how are GAT interpreted by people with a strong belief in race as a biological essence and the defining characteristic of individual worth and social relations? What kinds of insight could come from people who respond to queries about having a great-great nonwhite ancestor with adages like “if there was a turd in the punch bowl that had been strained out, would you take a drink?”
A team of researchers and I have followed Stormfront discussions where over six hundred individuals have posted their GAT results. Many of these come as good news for white nationalists, either confirming their white purity priors or offering a pleasant surprise: “I was surprised there wasn’t more German. Evidently, the Y DNA said ‘Nordic’ and traces back to the Cimbri tribe, which settled in Denmark.”
But some posters report news that they consider troubling: some fraction of ancestry from nonwhite or non-European populations, or a mitochondrial or Y-DNA haplogroup most commonly found outside Europe. For instance, one wrote: “Hello, got my DNA results and I learned today I am 61 percent European. I am very proud of my white race and my european roots.” Another poster replied: “I’ve prepared you a drink. It’s 61 percent pure water. The rest is potassium cyanide. I assume you have no objections to drinking it. . . . Cyanide isn’t water, and YOU are not White.” Yet such exchanges are actually rare, and are mostly reserved for posters who are taken to be trolls provoking regular Stormfronters.
A more common response to what posters see as “bad news” is identity repair work. One woman discussed the GAT taken by her adopted sister, which “‘predicted’ my sister’s [mtDNA] haplogroup to be L3, which means she’d have to be of African origin. . . . Is it possible for a White person to have this haplogroup? She is tall, has straight blonde hair, dark green eyes, pale skin and NO traits of african ancestry whatsoever.”
Supportive posters warned about possible contamination of the sample and advised a corroborating test. Others talked about Caucasian ties to some of the subbranches of the L3 haplogroup. Still others comforted that even if the result was accurate, after generations of European mixture with that original ancestor, “her percentage of possible foreign ancestry/genetic makeup would then literally be nonexistent, at this point, yet that same mtDNA marker will have still remained.” Responses in this vein often denigrate GAT as a (Jewish) conspiracy “out to prove that race doesn’t exist and [that] we are never ‘full white,’ just because a half evolved ancestor of ours resided in a non-european area.”
One irony is that despite a genetic-determinist ideology of race, Stormfronters are willing to negotiate the meaning of GAT through the same sort of affiliative self-fashioning that Alondra Nelson (2016) identified in African American GAT users. They accept the information that fits with prior identities and deny what doesn’t. Some go even further, using GAT to build theories of race. For example, one poster argued that
non-European ancestry in one’s autosomal DNA isn’t good, but that non-White autosomal DNA can be cut in half every generation. . . . I am more strict with Y and mtDNA haplogroups because these haplogroups are passed from father to son, mother to daughter, and remain virtually unchanged indefinitely. . . . The biracial female with a White Mother or the biracial male with a White father are the lesser of two evils when it comes to potential assimilation.
This update to the one-drop rule leverages the distinction GAT draws between nonrecombining Y and mtDNA, which mark out unbroken lineages, and autosomal DNA, which discloses overall population contributions to one’s genetic inheritance.
White nationalist interpretations of GAT are important because they reflect more than simple ignorance or misunderstanding of the science. Most population geneticists would be appalled at the use of their variation-based research to build typological theories of human classification. But these scientists have produced tools open to such interpretations. GAT rests on an infrastructure presumed to be good and evil in conventional ways: that is, good for citizens to learn about themselves, bad because of privacy threats and undisclosed, open-ended data mining. But what GAT also does is set up a whole new infrastructure for racists to endow their groundless theories with a high-tech scientific imprimatur and to convince each other of the myths that mobilize them as a social group in the first place.
Nelson, Alondra. 2016. The Social Life of DNA: Race, Reparations, and Reconciliation After the Genome. Boston: Beacon Press.
The first time President Obama met Donald Trump shortly after his election, he warned him about the critical nature of the threat North Korea posed to our country. Yet, months later, Trump has intentionally kept vacant every diplomatic position that is relevant to solving this crisis.
The truth is that there is no obvious solution to this problem, but if a path does exist, it’s hard to find it with gutted diplomatic leadership.
I wrote a piece for Huff Post on the importance of the diplomatic components missing from Trump’s approach to this problem. I hope you’ll read it and forward it to your friends today.
The foreign policy that President Trump previewed as a candidate – lots of rhetorical bluster with no actual policy ideas behind it – has metastasized as advertised during the first six months of his administration. Nowhere is that more apparent than in Trump’s approach to the growing nuclear weapons capability of Kim Jong Un’s despotic regime in North Korea. This week, Trump promised to rain “fire and fury” down on Pyongyang if Kim continued to threaten North America with a possible future ICBM-mounted nuclear warhead. As per usual, this threat had no specifics behind it, and was immediately backtracked by Trump’s Secretary of State, who feared an escalation of words that could spin into an escalation of actions.
But as a frequent and pointed critic of our president’s foreign policy (or lack thereof), let me give Trump credit for getting halfway there on a workable North Korea policy. For years, we thought North Korea was a decade away from being able to threaten America with a nuclear weapon. But Kim Jong Un has rapidly increased the pace of testing and development of rockets and nuclear weapons. In the face of this new evidence, we believe that the threat could mature before the end of Trump’s first term.
The most crucial question that remains is that of Kim Jong Un’s state of mind. There are, of course, two basic possibilities: either Kim is a rational actor who wants nuclear weapons as a means of securing his survival and will not actually use them because he knows the counterattack would be the end of his regime; or he is an actual madman and could be provoked into using the weapon despite the apocalyptic consequences that would follow. A sensible U.S. policy toward North Korea needs to acknowledge that both scenarios could be true and seek to counter both possible interpretations of Kim’s intentions.
If you believe the first interpretation, then tough talk – backed up by a credible military threat – is not an irresponsible policy tool. If Kim Jong Un is a rational actor, then he will not use the weapon out of fear of his own destruction. Thus, he needs to know that if he ever does something insane like launching a weapon at Guam, we have the capability and the willingness to respond disproportionately. Now, as could be expected, Trump mishandled the tough talk by using over-the-top language that seems more suited to Game of Thrones than modern international diplomacy. And his claim that in six short months he has dramatically upgraded America’s nuclear arsenal is both false and easily knowable as pure braggadocio since that upgrade could not happen in such a short timeframe and without funds or authorization from Congress. But Pyongyang does need to know that we are serious about a military response if the regime ever decides to test us with an attack on or near the U.S. or our treaty allies (like South Korea or Japan).
The problem is that tough talk and credible military options are only half of the necessary policy. The other half – an actual policy designed to protect our allies and pursue a path to halt North Korea’s nuclear weapons program – seems to be intentionally nonexistent right now. And if you are worried that Kim Jong Un might launch a weapon of mass destruction, then you either have to use the military option preemptively (assuring the deaths of hundreds of thousands of Koreans) or have a political and diplomatic strategy to either end the regime or end their pursuit of an ICBM-mounted nuclear weapon.
Here are some of the most important components of the second missing half of a tough and sensible North Korea strategy.
1. Name an Ambassador to South Korea and staff the senior State Department positions that deal with North Korea. Overall, Trump’s decision to gut the State Department and leave dozens of key posts unfilled has sent a chilling message to the world that the United States is engaged in a massive, unprecedented withdrawal from its position of global leadership. Nowhere are the consequences of this policy more disastrous than on the Korean peninsula. The failure of Trump to appoint an Ambassador to Seoul has symbolic ramifications – the North Koreans take it as a sign that the United States is disengaging from political and security questions on the peninsula. But the absence of a top diplomat in South Korea also makes it much harder for the United States to work hand in hand with Seoul to counteract the increased saber rattling from Kim Jong Un.
It’s just as unthinkable that we’re seven months into President Trump’s term and he has not nominated anyone for two of the most important posts in the State Department – the Assistant Secretary for East Asian and Pacific Affairs and the Assistant Secretary for Non-Proliferation. If there is a diplomatic path out of this crisis, it’s unlikely that a president with no diplomatic experience and a Secretary of State with no diplomatic experience can find it alone. These two key positions are vitally necessary to find a path to peace. You simply cannot solve this problem if you have no one to solve it. Unbelievably, that’s the position we are in right now.
2. Make a run at Iran-style multilateral economic and political sanctions. Yes, North Korea is not Iran. When the early framework of the Iran Deal was being negotiated, the Supreme Leader didn’t yet have a bomb and was deeply worried about the viability of his regime. He was willing to deal in order to gain economic security. Kim Jong Un appears to have decided that his greater threat lies outside his borders and is willing to economically starve his people in order to get a weapon that keeps his foreign enemies at bay. But why not test this assumption and see how domestically secure Kim Jong Un really is? Quietly, during his first term, President Obama sat across the table from world leader after world leader and asked for one thing – sanctions on the Iranian regime. Without fanfare, countries slowly obliged, and Obama built up a backbreaking multilateral network of sanctions that ultimately forced the Iranians to the negotiating table. Admittedly, this seems to be an impossibility for Trump, who has alienated world leaders at a blinding pace since being sworn in. And with nobody home at the State Department, it’s hard to imagine this kind of effort working. But it’s not too late. Kim Jong Un may not respond to crippling sanctions like Iran did. But North Korea has come to the negotiating table before, and with time running out, the cost of trying this path does not outweigh the risk of it failing.
3. Drop the demand that negotiations must come with preconditions. Holding the position that you won’t negotiate until certain steps are taken by your adversary is a strategy that rarely works. It makes for good tough-guy talk, but unless you have the upper hand, it just telegraphs that you’re afraid of sitting down. Again, the Iran nuclear negotiations offer a template. Dropping preconditions doesn’t mean you don’t drive a tough bargain when the talks begin. And it allows for small incremental agreements like the one that started the Iran negotiations. Tie an economic noose around North Korea and then extend an open offer to talk. The administration might be surprised at the answer they receive.
4. Ramp up the information campaign inside North Korea. Trump’s deconstruction of the State Department comes with a myriad of costs to U.S. national security. But at the top of that list is the lost opportunity to tip the political balance in North Korea away from the oppressive regime. My friend Tom Malinowski, former Assistant Secretary of State for Human Rights, made a compelling case for how the United States, through increased State Department capacities, could ramp up the flow of information to the North Korean citizenry, giving them the ability to see the fraud that is being perpetrated on them by Kim Jong Un. Malinowski makes the case that the fall of Kim Jong Un is frankly more likely than the leader willingly giving up nuclear weapons, and though “regime change” doesn’t have to be official U.S. policy, helping set the conditions for the fall of Kim might be our best long-term strategy. The North Korean regime’s only hope of maintaining complete control over the country rests on its ability to keep the citizenry in the dark and strictly control what they see and hear about the world. Senator Rob Portman and I passed legislation last year establishing a new counter-propaganda office at the State Department. Inexplicably, Secretary of State Tillerson is now refusing to accept funding to help stand it up. This is the kind of center that could help counteract Pyongyang’s misinformation campaign inside North Korea and provide the North Korean people with the truth: that they don’t need to live the way they do.
5. Put someone other than Jared Kushner in charge of China. Foreign policy pundits often overstate how much influence China has over Kim Jong Un, but it’s not unfair to say that without China on board, no North Korea strategy will work. Right now, President Trump has no one on his senior national security leadership team with any experience in China, or for that matter, in the entire Pacific region. The three generals close to Trump – Mattis, McMaster, and Kelly – all earned their stars through Middle Eastern combat. None of the three ever served in a senior role in Asia. Neither Tillerson nor his new Deputy Secretary have any China experience. Trump is right to prioritize North Korea with the Chinese right now, but it appears that Trump’s main lines of communication with the Chinese are his Twitter feed and his son-in-law, who has zero foreign policy experience. This is outrageous and dangerous. An experienced China hand who can order our priorities with China and guide a more functional relationship is more necessary now than ever.
North Korea isn’t an easy problem to solve. There is no policy that has a high likelihood of success. But failure is virtually guaranteed if all you have in your arsenal is Hollywood-Western-style threats and a policy dictated by the impromptu whims of the president’s Twitter finger. This problem requires a thoughtful approach, with unity and clarity among the United States, its diplomats, and our allies. Keeping our country safe must be Washington’s top priority. America needs to be tough and smart at the same time – before it’s too late.
Thanks for reading,
U.S. Senator, Connecticut
P.S. I am up for re-election this cycle and I know that my outspoken approach on many issues will make me a target of the White House. But as long as you continue to have my back, I am not worried. Chip in $3 to my campaign today: http://action.chrismurphy.com/re_election
Immediate annuities can be a useful tool to protect the spouse of a nursing home resident who applies for Medicaid. These types of annuities allow the nursing home resident to spend down assets and give the spouse a guaranteed income. But immediate annuities may not work in every state, so be sure to check with your attorney.
Medicaid is the primary source of payment for long-term care services in the United States. To qualify for Medicaid, a nursing home resident must become impoverished under Medicaid’s complicated asset rules. In most states, this means the applicant can have only $2,000 in “countable” assets. Virtually everything is countable except for the home (with some limitations) and personal belongings. The spouse of a nursing home resident, called the “community spouse,” is limited to one half of the couple’s joint “countable” assets, up to a maximum of $120,900 (in 2017). The least that a state may allow a community spouse to retain is $24,180 (in 2017). While a nursing home resident must pay his or her excess income to the nursing home, there is no limit on the amount of income a spouse can have.
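The asset rules above can be sketched as a short calculation. This is only an illustrative sketch using the 2017 figures quoted in the text; the function names are hypothetical, and actual limits vary by state and year.

```python
# 2017 Medicaid figures quoted in the text (illustrative only; limits vary by state and year)
APPLICANT_ASSET_LIMIT = 2_000   # countable assets a nursing home resident may keep
CSRA_MAX = 120_900              # most a community spouse may retain (2017)
CSRA_MIN = 24_180               # least a state may allow the community spouse (2017)

def community_spouse_allowance(joint_countable_assets: float) -> float:
    """Half of the couple's joint countable assets, clamped to the 2017 floor and ceiling."""
    half = joint_countable_assets / 2
    return min(max(half, CSRA_MIN), CSRA_MAX)

def required_spend_down(joint_countable_assets: float) -> float:
    """Amount the couple must spend down before the resident can qualify."""
    protected = community_spouse_allowance(joint_countable_assets) + APPLICANT_ASSET_LIMIT
    return max(joint_countable_assets - protected, 0)
```

For a couple with $320,000 in countable assets, the spouse's allowance is capped at $120,900, so roughly $197,100 would have to be spent down (or, as the article goes on to explain, converted into an income stream).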
An immediate annuity is a contract with an insurance company under which the annuitant pays the insurance company a sum of money in exchange for a stream of income. This income stream may be payable for life or for a specific number of years, or a combination of both — i.e., for life with a certain number of years of payment guaranteed. In the Medicaid planning context, most annuities are for a specific number of years.
The spouse of a nursing home resident may spend down his or her excess assets by using them to purchase an immediate annuity. But if Medicaid applicants or their spouses transfer assets within five years of applying for Medicaid, the applicants may be subject to a period of ineligibility, also called a transfer penalty. To avoid a transfer penalty, the annuity must meet the following criteria:
The annuity must pay back the entire investment. When interest rates were higher, it was possible to purchase annuities for as short as two years, but now short annuities usually don’t pay back the full purchase price.
The payment period must be shorter than the owner’s actuarial life expectancy. For instance, if the spouse’s life expectancy is only four years, the purchase of an annuity with a five-year payback period would be deemed a transfer of assets.
The annuity must be irrevocable and nontransferrable, meaning that the owner may not have the option of cashing it out and selling it to a third party.
The annuity has to name the state as the beneficiary if the annuitant dies before all the payments have been made.
Here’s an example of how an immediate annuity might work: John and Jane live in a state that allows the community spouse to keep $120,900 of the couple’s assets. If John moves to a nursing home and John and Jane have $320,000 in countable assets (savings, investments, and retirement accounts), Jane can take $200,000 in excess assets and purchase an immediate annuity for her own benefit. After reducing their countable assets to $120,000, John will be eligible for Medicaid. If the annuity pays her $3,500 a month for five years, by the end of that time, Jane will have received back her investment plus $10,000 of income. If she accumulates these funds, at the end of five years she will be right back where she started before John needed nursing home care.
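The arithmetic in John and Jane's example can be checked in a few lines; the figures below are taken from the example and are a sketch for illustration, not financial advice.

```python
# Figures from John and Jane's example above
annuity_premium = 200_000   # Jane's excess assets used to buy the immediate annuity
monthly_payment = 3_500     # what the annuity pays her each month
term_years = 5              # payout period of the annuity

# Total the annuity pays out over its five-year term
total_payout = monthly_payment * 12 * term_years

# Payout over and above the original investment
income_over_premium = total_payout - annuity_premium
```

Five years of $3,500 monthly payments come to $210,000 in total, i.e. the $200,000 premium back plus $10,000 of income, matching the figures in the example.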
Given this planning opportunity, many spouses of nursing home residents use immediate annuities to preserve their own financial security. But it’s not a slam-dunk for a number of reasons, including:
Some states either do not allow spousal annuities or put additional restrictions on them.
Other planning options may be preferable, such as spending down assets in a way that preserves them, transferring assets to exempt beneficiaries or into trust for their benefit, seeking an increased resource allowance, purchasing non-countable assets, using spousal refusal, or bringing the nursing home spouse home and qualifying for community Medicaid.
The purchase of an annuity might require the liquidation of IRAs owned by the nursing home spouse, causing a large tax liability.
The non-nursing home spouse may be ill herself, meaning that she may need nursing home care soon, in which case the annuity payments would simply go to her nursing home.
The savings may be small due to a high income or the short life expectancy of the nursing home spouse, and the process of liquidating assets and applying for Medicaid might not be worth the considerable trouble.
In short, the use of this powerful planning strategy depends on each couple’s particular circumstances and should be undertaken only after consultation with a qualified elder law attorney. In addition, those who do purchase immediate annuities need to shop around to make sure they are purchasing them from reliable companies paying the best return.
Finally, couples need to beware of deferred annuities. Some brokers will attempt to sell deferred annuities for Medicaid planning purposes, but these can cause problems. While a deferred annuity can be “annuitized” (meaning it can be turned into an immediate annuity), if the nursing home resident owns the annuity, the income stream will be payable to the nursing home instead of to the healthy spouse. Often, the annuity will charge a penalty for early withdrawal, so it is difficult to transfer it to the healthy spouse. In short, while immediate annuities can be great tools for Medicaid planning, deferred annuities should be avoided by anyone contemplating the need for care in the near future.
It was June 3, 1800. President John Adams arrived in Washington, D.C., for the first time. The capital city, which had been chosen by George Washington as the seat of government for the United States, was still under construction. There were no schools or churches, only a few stores and hotels, and some shacks for the workers who were building the White House and the Capitol. The area was swampy and full of mosquitoes, and the ground was covered with tree stumps and rubble. Adams might have been depressed by the dismal sight of the country’s new capital, but he wrote to his wife, Abigail, “I like the seat of government very well.”
It was several months before Adams was able to live in the White House, then known as the President’s House. On the day he moved in, he entered the house with just a few of his staff. There was no honor guard or entourage or any kind of ceremony. The house was still unfinished, still smelling of wet paint and wet plaster. The furniture had been shipped down from Philadelphia, but it didn’t quite fit the enormous rooms of the new house. The only painting that had been hung on the wall was a portrait of George Washington in a black velvet suit.
It had been a difficult period in Adams’s life. He’d had a hard time filling the shoes of George Washington as president. He’d been struggling with debts ever since his election, as the presidential salary was rather meager. He’d barely prevented a war with France. He’d been plagued with political infighting among his cabinet, and in the upcoming presidential election, it looked like he might lose to Thomas Jefferson.
So Adams might have been thinking about all his troubles when he went to bed that night as the first president to sleep in the White House. He had left Abigail in Philadelphia, so he had to sleep alone. The following morning, he sat down at his desk, and in a letter to his wife he wrote: “I pray heaven to bestow the best of blessings on this house and on all that shall hereafter inhabit it. May none but honest and wise men ever rule under this roof.”
Adams only lived in the White House for a few more months, since he lost the election to Jefferson that year. But about 150 years later, Franklin Roosevelt had the words from Adams’s letter to Abigail carved into the mantel in the State Dining Room.
Geochemical signals from deep inside Earth are beginning to shed light on the planet’s first 50 million years, a formative period long viewed as inaccessible to science.
In August, the geologist Matt Jackson left California with his wife and 4-year-old daughter for the fjords of northwest Iceland, where they camped as he roamed the outcrops and scree slopes by day in search of little olive-green stones called olivine.
A sunny young professor at the University of California, Santa Barbara, with a uniform of pearl-snap shirts and well-utilized cargo shorts, Jackson knew all the best hunting grounds, having first explored the Icelandic fjords two years ago. Following sketchy field notes handed down by earlier geologists, he covered 10 or 15 miles a day, past countless sheep and the occasional farmer. “Their whole lives they’ve lived in these beautiful fjords,” he said. “They look up to these black, layered rocks, and I tell them that each one of those is a different volcanic eruption with a lava flow. It blows their minds!” He laughed. “It blows my mind even more that they never realized it!”
The olivine erupted to Earth’s surface in those very lava flows between 10 and 17 million years ago. Jackson, like many geologists, believes that the source of the eruptions was the Iceland plume, a hypothetical upwelling of solid rock that may rise, like the globules in a lava lamp, from deep inside Earth. The plume, if it exists, would now underlie the active volcanoes of central Iceland. In the past, it would have surfaced here at the fjords, back in the days when here was there — before the puzzle-piece of Earth’s crust upon which Iceland lies scraped to the northwest.
Other modern findings about olivine from the region suggest that it might derive from an ancient reservoir of minerals at the base of the Iceland plume that, over billions of years, never mixed with the rest of Earth’s interior. Jackson hoped the samples he collected would carry a chemical message from the reservoir and prove that it formed during the planet’s infancy — a period that until recently was inaccessible to science.
After returning to California, he sent his samples to Richard Walker to ferret out that message. Walker, a geochemist at the University of Maryland, is processing the olivine to determine the concentration of the chemical isotope tungsten-182 in the rock relative to the more common isotope, tungsten-184. If Jackson is right, his samples will join a growing collection of rocks from around the world whose abnormal tungsten isotope ratios have completely surprised scientists. These tungsten anomalies reflect processes that could only have occurred within the first 50 million years of the solar system’s history, a formative period long assumed to have been wiped from the geochemical record by cataclysmic collisions that melted Earth and blended its contents.
The anomalies “are giving us information about some of the earliest Earth processes,” Walker said. “It’s an alternative universe from what geochemists have been working with for the past 50 years.”
The discoveries are sending geologists like Jackson into the field in search of more clues to Earth’s formation — and how the planet works today. Modern Earth, like early Earth, remains poorly understood, with unanswered questions ranging from how volcanoes work and whether plumes really exist to where oceans and continents came from, and what the nature and origin might be of the enormous structures, colloquially known as “blobs,” that seismologists detect deep down near Earth’s core. All aspects of the planet’s form and function are interconnected. They’re also entangled with the rest of the solar system. Any attempt, for instance, to explain why tectonic plates cover Earth’s surface like a jigsaw puzzle must account for the fact that no other planet in the solar system has plates. To understand Earth, scientists must figure out how, in the context of the solar system, it became uniquely earthlike. And that means probing the mystery of the first tens of millions of years.
“You can think about this as an initial-conditions problem,” said Michael Manga, a geophysicist at the University of California, Berkeley, who studies geysers and volcanoes. “The Earth we see today evolved from something. And there’s lots of uncertainty about what that initial something was.”
Pieces of the Puzzle
On one of an unbroken string of 75-degree days in Santa Barbara the week before Jackson left for Iceland, he led a group of earth scientists on a two-mile beach hike to see some tar dikes — places where the sticky black material has oozed out of the cliff face at the back of the beach, forming flabby, voluptuous folds of faux rock that you can dent with a finger. The scientists pressed on the tar’s wrinkles and slammed rocks against it, speculating about its subterranean origin and the ballpark range of its viscosity. When this reporter picked up a small tar boulder to feel how light it was, two or three people nodded approvingly.
A mix of geophysicists, geologists, mineralogists, geochemists and seismologists, the group was in Santa Barbara for the annual Cooperative Institute for Dynamic Earth Research (CIDER) workshop at the Kavli Institute for Theoretical Physics. Each summer, a rotating cast of representatives from these fields meet for several weeks at CIDER to share their latest results and cross-pollinate ideas — a necessity when the goal is understanding a system as complex as Earth.
Earth’s complexity, how special it is, and, above all, the black box of its initial conditions have meant that, even as cosmologists map the universe and astronomers scan the galaxy for Earth 2.0, progress in understanding our home planet has been surprisingly slow. As we trudged from one tar dike to another, Jackson pointed out the exposed sedimentary rock layers in the cliff face — some of them horizontal, others buckled and sloped. Amazingly, he said, it took until the 1960s for scientists to even agree that sloped sediment layers are buckled, rather than having piled up on an angle. Only then was consensus reached on a mechanism to explain the buckling and the ruggedness of Earth’s surface in general: the theory of plate tectonics.
Projecting her voice over the wind and waves, Carolina Lithgow-Bertelloni, a geophysicist from University College London who studies tectonic plates, credited the German meteorologist Alfred Wegener with first floating the notion of continental drift in 1912 to explain why Earth’s landmasses resemble the dispersed pieces of a puzzle. “But he didn’t have a mechanism — well, he did, but it was crazy,” she said.
A few years later, she continued, the British geologist Sir Arthur Holmes convincingly argued that Earth’s solid-rock mantle flows fluidly on geological timescales, driven by heat radiating from Earth’s core; he speculated that this mantle flow in turn drives surface motion. More clues came during World War II. Seafloor magnetism, mapped for the purpose of hiding submarines, suggested that new crust forms at the mid-ocean ridge — the underwater mountain range that lines the world ocean like a seam — and spreads in both directions to the shores of the continents. There, at “subduction zones,” the oceanic plates slide stiffly beneath the continental plates, triggering earthquakes and carrying water downward, where it melts pockets of the mantle. This melting produces magma that rises to the surface in little-understood fits and starts, causing volcanic eruptions. (Volcanoes also exist far from any plate boundaries, such as in Hawaii and Iceland. Scientists currently explain this by invoking the existence of plumes, which researchers like Walker and Jackson are starting to verify and map using isotope studies.)
The physical description of the plates finally came together in the late 1960s, Lithgow-Bertelloni said, when the British geophysicist Dan McKenzie and the American Jason Morgan separately proposed a quantitative framework for modeling plate tectonics on a sphere.
Other than their existence, almost everything about the plates remains in contention. For instance, what drives their lateral motion? Where do subducted plates end up — perhaps these are the blobs? — and how do they affect Earth’s interior dynamics? Why did Earth’s crust shatter into plates in the first place when no other planetary surface in the solar system did? Also completely mysterious is the two-tier architecture of oceanic and continental plates, and how oceans and continents came to ride on them — all possible prerequisites for intelligent life. Knowing more about how Earth became earthlike could help us understand how common earthlike planets are in the universe and thus how likely life is to arise.
The continents probably formed, Lithgow-Bertelloni said, as part of the early process by which gravity organized Earth’s contents into concentric layers: Iron and other metals sank to the center, forming the core, while rocky silicates stayed in the mantle. Meanwhile, low-density materials buoyed upward, forming a crust on the surface of the mantle like soup scum. Perhaps this scum accumulated in some places to form continents, while elsewhere oceans materialized.
Figuring out precisely what happened and the sequence of all of these steps is “more difficult,” Lithgow-Bertelloni said, because they predate the rock record and are “part of the melting process that happens early on in Earth’s history — very early on.”
Until recently, scientists knew of no geochemical traces from so long ago, and they thought they might never crack open the black box from which Earth’s most glorious features emerged. But the subtle anomalies in tungsten and other isotope concentrations are now providing the first glimpses of the planet’s formation and differentiation. These chemical tracers promise to yield a combination timeline-and-map of early Earth, revealing where its features came from, why, and when.
A Sketchy Timeline
Humankind’s understanding of early Earth took its first giant leap when Apollo astronauts brought back rocks from the moon: our tectonic-less companion whose origin was, at the time, a complete mystery.
The rocks “looked gray, very much like terrestrial rocks,” said Fouad Tera, who analyzed lunar samples at the California Institute of Technology between 1969 and 1976. But because they were from the moon, he said, they created “a feeling of euphoria” in their handlers. Some interesting features did eventually show up: “We found glass spherules — colorful, beautiful — under the microscope, green and yellow and orange and everything,” recalled Tera, now 85. The spherules probably came from fountains that gushed from volcanic vents when the moon was young. But for the most part, he said, “the moon is not really made out of a pleasing thing — just regular things.”
In hindsight, this is not surprising: Chemical analysis at Caltech and other labs indicated that the moon formed from Earth material, which appears to have gotten knocked into orbit when the 60 to 100 million-year-old proto-Earth collided with another protoplanet in the crowded inner solar system. This “giant impact” hypothesis of the moon’s formation, though still hotly debated in its particulars, established a key step on the timeline of the Earth, moon and sun that has helped other steps fall into place.
Chemical analysis of meteorites is helping scientists outline even earlier stages of our solar system’s timeline, including the moment it all began.
First, 4.57 billion years ago, a nearby star went supernova, spewing matter and a shock wave into space. The matter included radioactive elements that immediately began decaying, starting the clocks that isotope chemists now measure with great precision. As the shock wave swept through our cosmic neighborhood, it corralled the local cloud of gas and dust like a broom; the increase in density caused the cloud to gravitationally collapse, forming a brand-new star — our sun — surrounded by a placenta of hot debris.
Over the next tens of millions of years, the rubble field surrounding the sun clumped into bigger and bigger space rocks, then accreted into planet parts called “planetesimals,” which merged into protoplanets, which became Mercury, Venus, Earth and Mars — the four rocky planets of the inner solar system today. Farther out, in colder climes, gas and ice accreted into the giant planets.
As the infant Earth navigated the crowded inner solar system, it would have experienced frequent, white-hot collisions, which were long assumed to have melted the entire planet into a global “magma ocean.” During these melts, gravity differentiated Earth’s liquefied contents into layers — core, mantle and crust. It’s thought that each of the global melts would have destroyed existing rocks, blending their contents and removing any signs of geochemical differences left over from Earth’s initial building blocks.
The last of the Earth-melting “giant impacts” appears to have been the one that formed the moon; while subtracting the moon’s mass, the impactor was also the last major addition to Earth’s mass. Perhaps, then, this point on the timeline — at least 60 million years after the birth of the solar system and, counting backward from the present, at most 4.51 billion years ago — was when the geochemical record of the planet’s past was allowed to begin. “It’s at least a compelling idea to think that this giant impact that disrupted a lot of the Earth is the starting time for geochronology,” said Rick Carlson, a geochemist at the Carnegie Institution of Washington. In those first 60 million years, “the Earth may have been here, but we don’t have any record of it because it was just erased.”
Another discovery from the moon rocks came in 1974. Tera, along with his colleague Dimitri Papanastassiou and their boss, Gerry Wasserburg, a towering figure in isotope cosmochemistry who died in June, combined many isotope analyses of rocks from different Apollo missions on a single plot, revealing a straight line called an “isochron” that corresponds to time. “When we plotted our data along with everybody else’s, there was a distinct trend that shows you that around 3.9 billion years ago, something massive imprinted on all the rocks on the moon,” Tera said.
Wasserburg dubbed the event the “lunar cataclysm.” Now more often called the “late heavy bombardment,” it was a torrent of asteroids and comets that seems to have battered the moon 3.9 billion years ago, a full 600 million years after its formation, melting and chemically resetting the rocks on its surface. The late heavy bombardment surely would have rained down even more heavily on Earth, considering the planet’s greater size and gravitational pull. Having discovered such a momentous event in solar system history, Wasserburg left his younger, more reserved colleagues behind and “celebrated in Pasadena in some bar,” Tera said.
As of 1974, no rocks had been found on Earth from the time of the late heavy bombardment. In fact, Earth’s oldest rocks appeared to top out at 3.8 billion years. “That number jumps out at you,” said Bill Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado. It suggests, Bottke said, that the late heavy bombardment might have melted whatever planetary crust existed 3.9 billion years ago, once again destroying the existing geologic record, after which the new crust took 100 million years to harden.
In 2005, a group of researchers working in Nice, France, conceived of a mechanism to explain the late heavy bombardment — and several other mysteries about the solar system, including the curious configurations of Jupiter, Saturn, Uranus and Neptune, and the sparseness of the asteroid and Kuiper belts. Their “Nice model” posits that the gas and ice giants suddenly destabilized in their orbits sometime after formation, causing them to migrate. Simulations by Bottke and others indicate that the planets’ migrations would have sent asteroids and comets scattering, initiating something very much like the late heavy bombardment. Comets that were slung inward from the Kuiper belt during this shake-up might even have delivered water to Earth’s surface, explaining the presence of its oceans.
With this convergence of ideas, the late heavy bombardment became widely accepted as a major step on the timeline of the early solar system. But it was bad news for earth scientists, suggesting that Earth’s geochemical record began not at the beginning, 4.57 billion years ago, or even at the moon’s beginning, 4.51 billion years ago, but 3.8 billion years ago, and that most or all clues about earlier times were forever lost.
Extending the Rock Record
More recently, the late heavy bombardment theory and many other long-standing assumptions about the early history of Earth and the solar system have come into question, and Earth’s dark age has started to come into the light. According to Carlson, “the evidence for this 3.9 [billion-years-ago] event is getting less clear with time.” For instance, when meteorites are analyzed for signs of shock, “they show a lot of impact events at 4.2, 4.4 billion,” he said. “This 3.9 billion event doesn’t show up really strong in the meteorite record.” He and other skeptics of the late heavy bombardment argue that the Apollo samples might have been biased. All the missions landed on the near side of the moon, many in close proximity to the Imbrium basin (the moon’s biggest shadow, as seen from Earth), which formed from a collision 3.9 billion years ago. Perhaps all the Apollo rocks were affected by that one event, which might have dispersed the melt from the impact over a broad swath of the lunar surface. This would suggest a cataclysm that never occurred.
Furthermore, the oldest known crust on Earth is no longer 3.8 billion years old. Rocks have been found in two parts of Canada dating to 4 billion and an alleged 4.28 billion years ago, refuting the idea that the late heavy bombardment fully melted Earth’s mantle and crust 3.9 billion years ago. At least some earlier crust survived.
In 2008, Carlson and collaborators reported evidence of 4.28 billion-year-old rocks in the Nuvvuagittuq greenstone belt in Canada. When Tim Elliott, a geochemist at the University of Bristol, read about the Nuvvuagittuq findings, he was intrigued to see that Carlson had used a dating method also used in earlier work by French researchers that relied on a short-lived radioactive isotope system called samarium-neodymium. Elliott decided to look for traces of an even shorter-lived system — hafnium-tungsten — in ancient rocks, which would point back to even earlier times in Earth’s history.
The dating method works as follows: Hafnium-182, the “parent” isotope, has a 50 percent chance of decaying into tungsten-182, its “daughter,” every 9 million years (this is the parent’s “half-life”). The halving quickly reduces the parent to almost nothing; by 50 million years after the supernova that sparked the sun, virtually all the hafnium-182 would have become tungsten-182.
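The decay arithmetic behind that claim can be reproduced directly. A minimal sketch, assuming only the 9-million-year half-life quoted above:

```python
# Fraction of hafnium-182 remaining after t million years,
# given a half-life of roughly 9 million years.
HALF_LIFE_MYR = 9.0

def fraction_remaining(t_myr: float) -> float:
    """Standard exponential decay: halve once per half-life elapsed."""
    return 0.5 ** (t_myr / HALF_LIFE_MYR)

# After one half-life, exactly half the parent isotope is left.
print(fraction_remaining(9))          # 0.5

# After 50 million years, only about 2% remains; essentially all
# of the hafnium-182 has become tungsten-182.
print(round(fraction_remaining(50), 3))
```

A little over five half-lives fit into 50 million years, which is why the parent isotope is reduced to “almost nothing” on that timescale.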
That’s why the tungsten isotope ratio in rocks like Matt Jackson’s olivine samples can be so revealing: Any variation in the concentration of the daughter isotope, tungsten-182, measured relative to tungsten-184 must reflect processes that affected the parent, hafnium-182, when it was around — processes that occurred during the first 50 million years of solar system history. Elliott knew that this kind of geochemical information was previously believed to have been destroyed by early Earth melts and billions of years of subsequent mantle convection. But what if it wasn’t?
Elliott contacted Stephen Moorbath, then an emeritus professor of geology at the University of Oxford and “one of the grandfather figures in finding the oldest rocks,” Elliott said. Moorbath “was keen, so I took the train up.” Moorbath led Elliott down to the basement of Oxford’s earth science building, where, as in many such buildings, a large collection of rocks shares the space with the boiler and stacks of chairs. Moorbath dug out specimens from the Isua complex in Greenland, an ancient bit of crust that he had pegged, in the 1970s, at 3.8 billion years old.
Elliott and his student Matthias Willbold powdered and processed the Isua samples and used painstaking chemical methods to extract the tungsten. They then measured the tungsten isotope ratio using state-of-the-art mass spectrometers. In a 2011 Nature paper, Elliott, Willbold and Moorbath, who died in October, reported that the 3.8 billion-year-old Isua rocks contained 15 parts per million more tungsten-182 than the world average — the first ever detection of a “positive” tungsten anomaly on the face of the Earth.
The paper scooped Richard Walker of Maryland and his colleagues, who months later reported a positive tungsten anomaly in 2.8 billion-year-old komatiites from Kostomuksha, Russia.
Although the Isua and Kostomuksha rocks formed on Earth’s surface long after the extinction of hafnium-182, they apparently derive from materials with much older chemical signatures. Walker and colleagues argue that the Kostomuksha rocks must have drawn from hafnium-rich “primordial reservoirs” in the interior that failed to homogenize during Earth’s early mantle melts. The preservation of these reservoirs, which must trace to the first 50 million years and must somehow have survived even the moon-forming impact, “indicates that the mantle may have never been well mixed,” Walker and his co-authors wrote. That raises the possibility of finding many more remnants of Earth’s early history.
The researchers say they will be able to use tungsten anomalies and other isotope signatures in surface material as tracers of the ancient interior, extrapolating downward and backward into the past to map proto-Earth and reveal how its features took shape. “You’ve got the precision to look and actually see the sequence of events occurring during planetary formation and differentiation,” Carlson said. “You’ve got the ability to interrogate the first tens of millions of years of Earth’s history, unambiguously.”
Anomalies have continued to show up in rocks of various ages and provenances. In May, Hanika Rizo of the University of Quebec in Montreal, along with Walker, Jackson and collaborators, reported in Science the first positive tungsten anomaly in modern rocks — 62 million-year-old samples from Baffin Bay, Greenland. Rizo hypothesizes that these rocks were brought up by a plume that draws from one of the “blobs” deep down near Earth’s core. If the blobs are indeed rich in tungsten-182, then they are not tectonic-plate graveyards as many geophysicists suspect, but instead date to the planet’s infancy. Rizo speculates that they are chunks of the planetesimals that collided to form Earth, and that the chunks somehow stayed intact in the process. “If you have many collisions,” she said, “then you have the potential to create this patchy mantle.” Early Earth’s interior, in that case, looked nothing like the primordial magma ocean pictured in textbooks.
More evidence for the patchiness of the interior has surfaced. At the American Geophysical Union meeting earlier this month, Walker’s group reported a negative tungsten anomaly — that is, a deficit of tungsten-182 relative to tungsten-184 — in basalts from Hawaii and Samoa. This and other isotope concentrations in the rocks suggest the hypothetical plumes that produced them might draw from a primordial pocket of metals, including tungsten-184. Perhaps these metals failed to get sucked into the core during planet differentiation.
Meanwhile, Elliott explains the positive tungsten anomalies in ancient crust rocks like his 3.8 billion-year-old Isua samples by hypothesizing that these rocks might have hardened on the surface before the final half-percent of Earth’s mass — delivered to the planet in a long tail of minor impacts — mixed into them. These late impacts, known as the “late veneer,” would have added metals like gold, platinum and tungsten (mostly tungsten-184) to Earth’s mantle, reducing the relative concentration of tungsten-182. Rocks that got to the surface early might therefore have ended up with positive tungsten anomalies.
Other evidence complicates this hypothesis, however — namely, the concentrations of gold and platinum in the Isua rocks match world averages, suggesting at least some late veneer material did mix into them. So far, there’s no coherent framework that accounts for all the data. But this is the “discovery phase,” Carlson said, rather than a time for grand conclusions. As geochemists gradually map the plumes and primordial reservoirs throughout Earth from core to crust, hypotheses will be tested and a narrative about Earth’s formation will gradually crystallize.
Elliott is working to test his late-veneer hypothesis. Temporarily trading his mass spectrometer for a sledgehammer, he collected a series of crust rocks in Australia that range from 3 billion to 3.75 billion years old. By tracking the tungsten isotope ratio through the ages, he hopes to pinpoint the time when the mantle that produced the crust became fully mixed with late-veneer material.
“These things never work out that simply,” Elliott said. “But you always start out with the simplest idea and see how it goes.”
According to our best theories of physics, the universe is a fixed block where time only appears to pass. Yet a number of physicists hope to replace this “block universe” with a physical theory of time.
Einstein once described his friend Michele Besso as “the best sounding board in Europe” for scientific ideas. They attended university together in Zurich; later they were colleagues at the patent office in Bern. When Besso died in the spring of 1955, Einstein — knowing that his own time was also running out — wrote a now-famous letter to Besso’s family. “Now he has departed this strange world a little ahead of me,” Einstein wrote of his friend’s passing. “That signifies nothing. For us believing physicists, the distinction between past, present and future is only a stubbornly persistent illusion.”
Many physicists have made peace with the idea of a block universe, arguing that the task of the physicist is to describe how the universe appears from the point of view of individual observers. To understand the distinction between past, present and future, you have to “plunge into this block universe and ask: ‘How is an observer perceiving time?’” said Andreas Albrecht, a physicist at the University of California, Davis, and one of the founders of the theory of cosmic inflation.
Others vehemently disagree, arguing that the task of physics is to explain not just how time appears to pass, but why. For them, the universe is not static. The passage of time is physical. “I’m sick and tired of this block universe,” said Avshalom Elitzur, a physicist and philosopher formerly of Bar-Ilan University. “I don’t think that next Thursday has the same footing as this Thursday. The future does not exist. It does not! Ontologically, it’s not there.”
Last month, about 60 physicists, along with a handful of philosophers and researchers from other branches of science, gathered at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, to debate this question at the Time in Cosmology conference. The conference was co-organized by the physicist Lee Smolin, an outspoken critic of the block-universe idea (among other topics). His position is spelled out for a lay audience in Time Reborn and in a more technical work, The Singular Universe and the Reality of Time, co-authored with the philosopher Roberto Mangabeira Unger, who was also a co-organizer of the conference. In the latter work, mirroring Elitzur’s sentiments about the future’s lack of concreteness, Smolin wrote: “The future is not now real and there can be no definite facts of the matter about the future.” What is real is “the process by which future events are generated out of present events,” he said at the conference.
Those in attendance wrestled with several questions: the distinction between past, present and future; why time appears to move in only one direction; and whether time is fundamental or emergent. Most of those issues, not surprisingly, remained unresolved. But for four days, participants listened attentively to the latest proposals for tackling these questions — and, especially, to the ways in which we might reconcile our perception of time’s passage with a static, seemingly timeless universe.
Time Swept Under the Rug
There are a few things that everyone agrees on. The directionality that we observe in the macroscopic world is very real: Teacups shatter but do not spontaneously reassemble; eggs can be scrambled but not unscrambled. Entropy — a measure of the disorder in a system — always increases, a fact encoded in the second law of thermodynamics. As the Austrian physicist Ludwig Boltzmann understood in the 19th century, the second law explains why events are more likely to evolve in one direction rather than another. It accounts for the arrow of time.
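Boltzmann's counting argument can be sketched in a few lines of Python (a toy model of my own, not anything from the conference): the entropy of a macrostate is the logarithm of the number of microstates that realize it, and for gas in a box the even split overwhelmingly dominates.

```python
from math import comb, log

def boltzmann_entropy(N, n):
    """Entropy (in units of k_B) of the macrostate with n of N
    gas particles in the left half of a box:
    S = ln(number of microstates) = ln(N choose n)."""
    return log(comb(N, n))

N = 100  # toy system: 100 distinguishable particles
entropies = [boltzmann_entropy(N, n) for n in range(N + 1)]

# The even split has vastly more microstates than any lopsided one,
# so a system started with everything on one side (S = ln 1 = 0)
# almost certainly drifts toward it: the second law as bookkeeping.
most_likely = max(range(N + 1), key=lambda n: entropies[n])
print(most_likely)      # 50
print(entropies[0])     # 0.0
```

Note that the counting is time-symmetric, which is exactly Carroll's worry in the next paragraph: nothing here says the lopsided state must lie in the past rather than the future.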
But things get trickier when we step back and ask why we happen to live in a universe where such a law holds. “What Boltzmann truly explained is why the entropy of the universe will be larger tomorrow than it is today,” said Sean Carroll, a physicist at the California Institute of Technology, as we sat in a hotel bar after the second day of presentations. “But if that was all you knew, you’d also say that the entropy of the universe was probably larger yesterday than today — because all the underlying dynamics are completely symmetric with respect to time.” That is, if entropy is ultimately based on the underlying laws of the universe, and those laws are the same going forward and backward, then entropy is just as likely to increase going backward in time. But no one believes that entropy actually works that way. Scrambled eggs always come after whole eggs, never the other way around.
To make sense of this, physicists have proposed that the universe began in a very special low-entropy state. In this view, which the Columbia University philosopher of physics David Albert named the “past hypothesis,” entropy increases because the Big Bang happened to produce an exceptionally low-entropy universe. There was nowhere to go but up. The past hypothesis implies that every time we cook an egg, we’re taking advantage of events that happened nearly 14 billion years ago. “What you need the Big Bang to explain is: ‘Why were there ever unbroken eggs?’” Carroll said.
Some physicists are more troubled than others by the past hypothesis. Taking things we don’t understand about the physics of today’s universe and saying the answer can be found in the Big Bang could be seen, perhaps, as passing the buck — or as sweeping our problems under the carpet. Every time we invoke initial conditions, “the pile of things under the rug gets bigger,” said Marina Cortes, a cosmologist at the Royal Observatory in Edinburgh and a co-organizer of the conference.
To Smolin, the past hypothesis feels more like an admission of failure than a useful step forward. As he puts it in The Singular Universe: “The fact to be explained is why the universe, even 13.8 billion years after the Big Bang, has not reached equilibrium, which is by definition the most probable state, and it hardly suffices to explain this by asserting that the universe started in an even less probable state than the present one.”
Other physicists, however, point out that it’s normal to develop theories that can describe a system given certain initial conditions. A theory needn’t strive to explain those conditions.
Another set of physicists thinks that the past hypothesis, while better than nothing, is more likely to be a placeholder than a final answer. Perhaps, if we’re lucky, it will point the way to something deeper. “Many people say that the past hypothesis is just a fact, and there isn’t any underlying way to explain it. I don’t rule out that possibility,” Carroll said. “To me, the past hypothesis is a clue to help us develop a more comprehensive view of the universe.”
The Alternative Origins of Time
Can the arrow of time be understood without invoking the past hypothesis? Some physicists argue that gravity — not thermodynamics — aims time’s arrow. In this view, gravity causes matter to clump together, defining an arrow of time that aligns itself with growth of complexity, said Tim Koslowski, a physicist at the National Autonomous University of Mexico (he described the idea in a 2014 paper co-authored by the British physicist Julian Barbour and Flavio Mercati, a physicist at Perimeter). Koslowski and his colleagues developed simple models of universes made up of 1,000 pointlike particles, subject only to Newton’s law of gravitation, and found that there will always be a moment of maximum density and minimum complexity. As one moves away from that point, in either direction, complexity increases. Naturally, we — complex creatures capable of making observations — can only evolve at some distance from the minimum. Still, wherever we happen to find ourselves in the history of the universe, we can point to an era of less complexity and call it the past, Koslowski said. The models are globally time-symmetric, but every observer will experience a local arrow of time. It’s significant that the low-entropy starting point isn’t an add-on to the model. Rather, it emerges naturally from it. “Gravity essentially eliminates the need for a past hypothesis,” Koslowski said.
The idea that time moves in more than one direction, and that we just happen to inhabit a section of the cosmos with a single, locally defined arrow of time, isn’t new. Back in 2004, Carroll, along with his graduate student Jennifer Chen, put forward a similar proposal based on eternal inflation, a relatively well-known model of the beginning of the universe. Carroll sees the work of Koslowski and his colleagues as a useful step, especially since they worked out the mathematical details of their model (he and Chen did not). Still, he has some concerns. For example, he said it’s not clear that gravity plays as important a role as their paper claims. “If you just had particles in empty space, you’d get exactly the same qualitative behavior,” he said.
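Carroll's aside about particles in empty space is easy to check in a toy simulation of my own (no forces at all, just free motion): the cloud has one moment of minimum size, and it grows in both time directions away from that moment, so an observer on either side sees a local arrow of time.

```python
import random

random.seed(0)
N = 1000
# Free particles in 1-D: x_i(t) = x_i + v_i * t, no gravity, no forces.
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
vs = [random.gauss(0.0, 1.0) for _ in range(N)]

def spread(t):
    """RMS size of the particle cloud at time t, measured from its
    center of mass -- a crude stand-in for 'complexity'."""
    pos = [x + v * t for x, v in zip(xs, vs)]
    com = sum(pos) / N
    return (sum((p - com) ** 2 for p in pos) / N) ** 0.5

# spread(t)^2 is a quadratic in t with positive curvature, so there is
# a single moment of minimum size; the cloud expands in BOTH time
# directions away from it, giving each side a locally defined arrow.
t_min = min(range(-50, 51), key=spread)
assert spread(t_min - 30) > spread(t_min) < spread(t_min + 30)
```

This reproduces only the qualitative behavior Carroll describes; the Barbour-Koslowski-Mercati models use Newtonian gravity and a specific complexity measure, neither of which is attempted here.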
Increasing complexity, Koslowski said, has one crucial side effect: It leads to the formation of certain arrangements of matter that maintain their structure over time. These structures can store information; Koslowski calls them “records.” Gravity is the first and primary force that makes record formation possible; other processes then give rise to everything from fossils and tree rings to written documents. What all of these entities have in common is that they contain information about some earlier state of the universe. I asked Koslowski if memories stored in brains are another kind of record. Yes, he said. “Ideally we would be able to build ever more complex models, and come eventually to the memory in my phone, the memory in my brain, in history books.” A more complex universe contains more records than a less complex universe, and this, Koslowski said, is why we remember the past but not the future.
But perhaps time is even more fundamental than this. For George Ellis, a cosmologist at the University of Cape Town in South Africa, time is a more basic entity, one that can be understood by picturing the block universe as itself evolving. In his “evolving block universe” model, the universe is a growing volume of space-time. The surface of this volume can be thought of as the present moment. The surface represents the instant where “the indefiniteness of the future changes to the definiteness of the past,” as he described it. “Space-time itself is growing as time passes.” One can discern the direction of time by looking at which part of the universe is fixed (the past) and which is changing (the future). Although some colleagues disagree, Ellis stresses that the model is a modification, not a radical overhaul, of the standard view. “This is a block universe with dynamics covered by the general-relativity field equations — absolutely standard — but with a future boundary that is the ever-changing present,” he said. In this view, while the past is fixed and unchangeable, the future is open. The model “obviously represents the passing of time in a more satisfactory way than the usual block universe,” he said.
Unlike the traditional block view, Ellis’s picture appears to describe a universe with an open future — seemingly in conflict with a law-governed universe in which past physical states dictate future states. (Although quantum uncertainty, as Ellis pointed out, may be enough to sink such a deterministic view.) At the conference, someone asked Ellis if, given enough information about the physics of a sphere of a certain radius centered on the British Midlands in early June, one could have predicted the result of the Brexit vote. “Not using physics,” Ellis replied. For that, he said, we’d need a better understanding of how minds work.
Another approach that aims to reconcile the apparent passage of time with the block universe goes by the name of causal set theory. First developed in the 1980s as an approach to quantum gravity by the physicist Rafael Sorkin — who was also at the conference — the theory is based on the idea that space-time is discrete rather than continuous. In this view, although the universe appears continuous at the macroscopic level, if we could peer down to the so-called Planck scale (distances of about 10⁻³⁵ meters) we’d discover that the universe is made up of elementary units or “atoms” of space-time. The atoms form what mathematicians call a “partially ordered set” — an array in which each element is linked to an adjacent element in a particular sequence. The number of these atoms (estimated to be a whopping 10²⁴⁰ in the visible universe) gives rise to the volume of space-time, while their sequence gives rise to time. According to the theory, new space-time atoms are continuously coming into existence. Fay Dowker, a physicist at Imperial College London, referred to this at the conference as “accretive time.” She invited everyone to think of space-time as accreting new space-time atoms in a way roughly analogous to a seabed depositing new layers of sediment over time. General relativity yields only a block, but causal sets seem to allow a “becoming,” she said. “The block universe is a static thing — a static picture of the world — whereas this process of becoming is dynamical.” In this view, the passage of time is a fundamental rather than an emergent feature of the cosmos. (Causal set theory has made at least one successful prediction about the universe, Dowker pointed out, having been used to estimate the value of the cosmological constant based only on the space-time volume of the universe.)
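The accretion picture can be cartooned in code. The sketch below is my own toy, not Sorkin's actual sequential-growth dynamics: each newborn space-time "atom" picks some existing atoms (with an arbitrary probability of 0.3) to lie in its causal past, and the past is closed under transitivity so the result is a genuine partial order.

```python
import random

random.seed(1)

def grow_causal_set(n):
    """Toy 'accretive time': build a causal set one new space-time
    atom at a time.  Each new atom independently selects some existing
    atoms as direct ancestors, then its past is closed under
    transitivity, so the whole history is a partial order."""
    past = []  # past[i] = set of atoms causally before atom i
    for i in range(n):
        direct = {j for j in range(i) if random.random() < 0.3}
        closure = set(direct)
        for j in direct:
            closure |= past[j]  # inherit each ancestor's own past
        past.append(closure)
    return past

cset = grow_causal_set(50)

# Check the defining properties of a partial order on the result:
for i, p in enumerate(cset):
    assert i not in p                    # no causal loops
    assert all(cset[j] <= p for j in p)  # transitivity
# Later-born atoms never precede earlier ones: time only "accretes."
```

A later atom can never end up in an earlier atom's past, which is the one-way "becoming" Dowker describes, even though the finished structure, viewed whole, looks like a static block.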
The Problem With the Future
In the face of these competing models, many thinkers seem to have stopped worrying and learned to love (or at least tolerate) the block universe.
Perhaps the strongest statement made at the conference in favor of the block universe’s compatibility with everyday experience came from the philosopher Jenann Ismael of the University of Arizona. The way Ismael sees it, the block universe, properly understood, holds within it the explanation for our experience of time’s apparent passage. A careful look at conventional physics, supplemented by what we’ve learned in recent decades from cognitive science and psychology, can recover “the flow, the whoosh, of experience,” she said. In this view, time is not an illusion — in fact, we experience it directly. She cited studies that show that each moment we experience represents a finite interval of time. In other words, we don’t infer the flow of time; it’s part of the experience itself. The challenge, she said, is to frame this first-person experience within the static block offered by physics — to examine “how the world looks from the evolving frame of reference of an embedded perceiver” whose history is represented by a curve within the space-time of the block universe.
Ismael’s presentation drew a mixed response. Carroll said he agreed with everything she had said; Elitzur said he “wanted to scream” during her talk. (He later clarified: “If I bang my head against the wall, it’s because I hate the future.”) An objection voiced many times during the conference was that the block universe seems to imply, in some important way, that the future already exists, yet statements about, say, next Thursday’s weather are neither true nor false. For some, this seems like an insurmountable problem with the block-universe view. Ismael had heard these objections many times before. Future events exist, she said, they just don’t exist now. “The block universe is not a changing picture,” she said. “It’s a picture of change.” Things happen when they happen. “This is a moment — and I know everybody here is going to hate this — but physics could do with some philosophy,” she said. “There’s a long history of discussion about the truth-values of future contingent statements — and it really has nothing to do with the experience of time.” And for those who wanted to read more? “I recommend Aristotle,” she said.
Correction: A photo caption was revised on July 25, 2016, to correct the spelling of Jenann Ismael’s name.
Inequality goes back to the Stone Age. Thirty thousand years ago, bands of hunter-gatherers in Russia buried some members in sumptuous graves replete with thousands of ivory beads, bracelets, jewels and art objects, while other members had to settle for a bare hole in the ground.
Nevertheless, ancient hunter-gatherer groups were still more egalitarian than any subsequent human society, because they had very little property. Property is a prerequisite for long-term inequality.
Following the agricultural revolution, property multiplied and with it inequality. As humans gained ownership of land, animals, plants and tools, rigid hierarchical societies emerged, in which small elites monopolised most wealth and power for generation after generation.
Humans came to accept this arrangement as natural and even divinely ordained. Hierarchy was not just the norm, but also the ideal. How could there be order without a clear hierarchy between aristocrats and commoners, between men and women, or between parents and children?
Priests, philosophers and poets all over the world patiently explained that, just as in the human body not all members are equal – the feet must obey the head – so also in human society, equality will bring nothing but chaos.
In the late modern era, however, equality rapidly became the dominant value in human societies almost everywhere. This was partly due to the rise of new ideologies like humanism, liberalism and socialism. But it was also due to the industrial revolution, which made the masses more important than ever before.
Industrial economies relied on masses of common workers, while industrial armies relied on masses of common soldiers. Governments in both democracies and dictatorships invested heavily in the health, education and welfare of the masses, because they needed millions of healthy labourers to work in the factories, and millions of loyal soldiers to serve in the armies.
Consequently, the history of the 20th century revolved to a large extent around the reduction of inequality between classes, races and genders. The world of the year 2000 was a far more equal place than the world of 1900. With the end of the cold war, people became ever-more optimistic, and expected that the process would continue and accelerate in the 21st century.
In particular, they hoped globalisation would spread economic prosperity and democratic freedom throughout the world, and that as a result, people in India and Egypt would eventually come to enjoy the same rights, privileges and opportunities as people in Sweden and Canada. An entire generation grew up on this promise.
Now it seems that this promise was a lie.
Globalisation has certainly benefited large segments of humanity, but there are signs of growing inequality both between and within societies. As some groups increasingly monopolise the fruits of globalisation, billions are left behind.
Even more ominously, as we enter the post-industrial world, the masses are becoming redundant. The best armies no longer rely on millions of ordinary recruits, but rather on a relatively small number of highly professional soldiers using very high-tech kit and autonomous drones, robots and cyber-worms. Already today, most people are militarily useless.
Humanoid robots work side-by-side with employees on an assembly line in Kazo, Japan. Photograph: Issei Kato/Reuters
The same thing might soon happen in the civilian economy, too. As artificial intelligence (AI) outperforms humans in more and more skills, it is likely to replace humans in more and more jobs. True, many new jobs might appear, but that won’t necessarily solve the problem.
Humans basically have just two types of skills – physical and cognitive – and if computers outperform us in both, they might outperform us in the new jobs just as in the old ones. Consequently, billions of humans might become unemployable, and we will see the emergence of a huge new class: the useless class.
This is one reason why human societies in the 21st century might be the most unequal in history. And there are other reasons to fear such a future.
With rapid improvements in biotechnology and bioengineering, we may reach a point where, for the first time in history, it becomes possible to translate economic inequality into biological inequality. Biotechnology will soon make it possible to engineer bodies and brains, and to upgrade our physical and cognitive abilities. However, such treatments are likely to be expensive, and available only to the upper crust of society. Humankind might consequently split into biological castes.
Throughout history, the rich and the aristocratic always imagined they had superior skills to everybody else, which is why they were in control. As far as we can tell, this wasn’t true. The average duke wasn’t more talented than the average peasant: he owed his superiority only to unjust legal and economic discrimination. However, by 2100, the rich might really be more talented, more creative and more intelligent than the slum-dwellers. Once a real gap in ability opens between the rich and the poor, it will become almost impossible to close it.
The two processes together – bioengineering coupled with the rise of AI – may result in the separation of humankind into a small class of superhumans, and a massive underclass of “useless” people.
Here’s a concrete example: the transportation market. Today there are many thousands of truck, taxi and bus drivers in the UK. Each of them commands a small share of the transportation market, and they gain political power because of that. They can unionise, and if the government does something they don’t like, they can go on strike and shut down the entire transportation system.
The jobs market could be irrevocably transformed by the development of self-driving vehicles. Photograph: Justin Tallis/AFP/Getty Images
Now fast-forward 30 years. All vehicles are self-driving. One corporation controls the algorithm that controls the entire transport market. All the economic and political power which was previously shared by thousands is now in the hands of a single corporation, owned by a handful of billionaires.
Once the masses lose their economic importance and political power, the state loses at least some of the incentive to invest in their health, education and welfare. It’s very dangerous to be redundant. Your future depends on the goodwill of a small elite. Maybe there is goodwill for a few decades. But in a time of crisis – like climate catastrophe – it would be very tempting, and easy, to toss you overboard.
In countries such as the UK, with a long tradition of humanist beliefs and welfare state practices, perhaps the elite will go on taking care of the masses even when it doesn’t really need them. The real problem will be in large developing countries like India, China, South Africa or Brazil.
These countries resemble a long train: the elites in the first-class carriages enjoy healthcare, education and income levels on a par with the most developed nations in the world. But the hundreds of millions of ordinary citizens who crowd the third-class cars still suffer from widespread diseases, ignorance and poverty.
What would the Indian, Chinese, South African or Brazilian elite prefer to do in the coming century? Invest in fixing the problems of hundreds of millions of useless poor – or in upgrading a few million rich?
In the 20th century, the elites had a stake in fixing the problems of the poor, because they were militarily and economically vital. Yet in the 21st century, the most efficient (and ruthless) strategy may be to let go of the useless third-class cars, and dash forward with the first class only. In order to compete with South Korea, Brazil might need a handful of upgraded superhumans far more than millions of healthy but useless labourers.
Consequently, instead of globalisation resulting in prosperity and freedom for all, it might actually result in speciation: the divergence of humankind into different biological castes or even different species. Globalisation will unite the world on a vertical axis and abolish national differences, but it will simultaneously divide humanity on a horizontal axis.
From this perspective, current populist resentment of “the elites” is well-founded. If we are not careful, the grandchildren of Silicon Valley tycoons might become a superior biological caste to the grandchildren of hillbillies in Appalachia.
There is one more possible step on the road to previously unimaginable inequality. In the short-term, authority might shift from the masses to a small elite that owns and controls the master algorithms and the data which feed them. In the longer term, however, authority could shift completely from humans to algorithms. Once AI is smarter even than the human elite, all humanity could become redundant.
What would happen after that? We have absolutely no idea – we literally can’t imagine it. How could we? A super-intelligent computer will by definition have a far more fertile and creative imagination than that which we possess.
Of course, technology is never deterministic. We can use the same technological breakthroughs to create very different kinds of societies and situations. For example, in the 20th century, people could use the technology of the industrial revolution – trains, electricity, radio, telephone – to create communist dictatorships, fascist regimes or liberal democracies. Just think about North and South Korea: they have had access to exactly the same technology, but they have chosen to employ it in very different ways.
In the 21st century, the rise of AI and biotechnology will certainly transform the world – but it does not mandate a single, deterministic outcome. We can use these technologies to create very different kinds of societies. How to use them wisely is the most important question facing humankind today. If you don’t like some of the scenarios I have outlined here, you can still do something about it.
We are misnamed. We call ourselves Homo sapiens, the “wise man,” but that’s more of a boast than a description. What makes us wise? What sets us apart from other animals? Various answers have been proposed — language, tools, cooperation, culture, tasting bad to predators — but none is unique to humans.
What best distinguishes our species is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. It usually lifts our spirits, but it’s also the source of most depression and anxiety, whether we’re evaluating our own lives or worrying about the nation. Other animals have springtime rituals for educating the young, but only we subject them to “commencement” speeches grandly informing them that today is the first day of the rest of their lives.
A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain, as psychologists and neuroscientists have discovered — rather belatedly, because for the past century most researchers have assumed that we’re prisoners of the past and the present.
Behaviorists thought of animal learning as the ingraining of habit by repetition. Psychoanalysts believed that treating patients was a matter of unearthing and confronting the past. Even when cognitive psychology emerged, it focused on the past and present — on memory and perception.
But it is increasingly clear that the mind is mainly drawn to the future, not driven by the past. Behavior, memory, and perception can’t be understood without appreciating the central role of prospection. We learn not by storing static records but by continually retouching memories and imagining future possibilities. Our brain sees the world not by processing every pixel in a scene but by focusing on the unexpected.
Our emotions are less reactions to the present than guides to future behavior. Therapists are exploring new ways to treat depression now that they see it as caused not primarily by past traumas and present stresses but by skewed visions of what lies ahead.
Prospection enables us to become wise not just from our own experiences but also by learning from others. We are social animals like no others, living and working in very large groups of strangers because we have jointly constructed the future. Human culture — our language, our division of labor, our knowledge, our laws, and technology — is possible only because we can anticipate what fellow humans will do in the distant future. We make sacrifices today to earn rewards tomorrow, whether in this life or in the afterlife promised by so many religions.
Some of our unconscious powers of prospection are shared by animals, but hardly any other creatures are capable of thinking more than a few minutes ahead. Squirrels bury nuts by instinct, not because they know winter is coming. Ants cooperate to build dwellings because they’re genetically programmed to do so, not because they’ve agreed on a blueprint. Chimpanzees have sometimes been known to exercise short-term foresight, like the surly male at a Swedish zoo who was observed stockpiling rocks to throw at gawking humans, but they are nothing like Homo prospectus.
If you’re a chimp, you spend much of the day searching for your next meal. If you’re a human, you can usually rely on the foresight of your supermarket’s manager, or you can make a restaurant reservation for Saturday evening thanks to a remarkably complicated feat of collaborative prospection. You and the restaurateur both imagine a future time — “Saturday” exists only as a collective fantasy — and anticipate each other’s actions. You trust the restaurateur to acquire food and cook it for you. She trusts you to show up and give her money, which she will accept only because she expects her landlord to accept it in exchange for occupying his building.
The central role of prospection has emerged in recent studies of both conscious and unconscious mental processes, like one in Chicago that pinged nearly 500 adults during the day to record their immediate thoughts and moods. If traditional psychological theory had been correct, these people would have spent a lot of time ruminating. But they actually thought about the future three times more often than the past, and even those few thoughts about a past event typically involved consideration of its future implications.
When making plans, they reported higher levels of happiness and lower levels of stress than at other times, presumably because planning turns a chaotic mass of concerns into an organized sequence. Although they sometimes feared what might go wrong, on average there were twice as many thoughts of what they hoped would happen.
While most people tend to be optimistic, those suffering from depression and anxiety have a bleak view of the future — and that, in fact, seems to be the chief cause of their problems, not their past traumas or their view of the present. While traumas do have a lasting impact, most people actually emerge stronger afterward. Others continue struggling because they over-predict failure and rejection. Studies have shown that depressed people are distinguished from the norm by their tendency to imagine fewer positive scenarios while overestimating future risks.
They withdraw socially and become paralyzed by exaggerated self-doubt. A bright and accomplished student imagines: If I flunk the next test, then I’ll let everyone down and show what a failure I really am. Researchers have begun successfully testing therapies designed to break this pattern by training sufferers to envision positive outcomes (imagine passing the test) and to see future risks more realistically (think of the possibilities remaining even if you flunk the test).
Most prospection occurs at the unconscious level as the brain sifts information to generate predictions. Our systems of vision and hearing, like those of animals, would be overwhelmed if we had to process every pixel in a scene or every sound around us. Perception is manageable because the brain generates its own scene so that the world remains stable even though your eyes move three times a second. This frees the perceptual system to heed features it didn’t predict, which is why you’re not aware of a ticking clock unless it stops. It’s also why you don’t laugh when you tickle yourself: You already know what’s coming next.
Behaviorists used to explain learning as the ingraining of habits by repetition and reinforcement, but their theory couldn’t explain why animals were more interested in unfamiliar experiences than familiar ones. It turned out that even the behaviorists’ rats, far from being creatures of habit, paid special attention to unexpected novelties because that was how they learned to avoid punishment and win rewards.
The brain’s long-term memory has often been compared to an archive, but that’s not its primary purpose. Instead of faithfully recording the past, it keeps rewriting history. Recalling an event in a new context can lead to new information being inserted in the memory. Coaching of eyewitnesses can cause people to reconstruct their memory so that no trace of the original is left.
The fluidity of memory may seem like a defect, especially to a jury, but it serves a larger purpose. It’s a feature, not a bug, because the point of memory is to improve our ability to face the present and the future. To exploit the past, we metabolize it by extracting and recombining relevant information to fit novel situations.
This link between memory and prospection has emerged in research showing that people with damage to the brain’s medial temporal lobe lose memories of past experiences as well as the ability to construct rich and detailed simulations of the future. Similarly, studies of children’s development show that they’re not able to imagine future scenes until they’ve gained the ability to recall personal experiences, typically somewhere between the ages of 3 and 5.
Perhaps the most remarkable evidence comes from recent brain imaging research. When recalling a past event, the hippocampus must combine three distinct pieces of information — what happened, when it happened and where it happened — that are each stored in a different part of the brain. Researchers have found that the same circuitry is activated when people imagine a novel scene. Once again, the hippocampus combines three kinds of records (what, when and where), but this time it scrambles the information to create something new.
Even when you’re relaxing, your brain is continually recombining information to imagine the future, a process that researchers were surprised to discover when they scanned the brains of people doing specific tasks like mental arithmetic. Whenever there was a break in the task, there were sudden shifts to activity in the brain’s “default” circuit, which is used to imagine the future or retouch the past.
This discovery explains what happens when your mind wanders during a task: It’s simulating future possibilities. That’s how you can respond so quickly to unexpected developments. What may feel like a primitive intuition, a gut feeling, is made possible by those previous simulations.
Suppose you get an email invitation to a party from a colleague at work. You’re momentarily stumped. You vaguely recall turning down a previous invitation, which makes you feel obliged to accept this one, but then you imagine having a bad time because you don’t like him when he’s drinking. But then you consider you’ve never invited him to your place, and you uneasily imagine that turning this down would make him resentful, leading to problems at work.
Methodically weighing these factors would take a lot of time and energy, but you’re able to make a quick decision by using the same trick as the Google search engine when it replies to your query in less than a second. Google can instantly provide a million answers because it doesn’t start from scratch. It’s continually predicting what you might ask.
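The Google analogy rests on precomputation: instant answers are possible only because the candidate responses were worked out ahead of time. A minimal sketch of that idea, using a toy prefix-completion index (this is purely illustrative; real search engines use far more elaborate ranking, and every name here is made up for the example):

```python
from collections import defaultdict

def build_index(queries):
    """Precompute completions for every prefix, ranked by how often
    each full query was seen. All the work happens up front."""
    counts = defaultdict(int)
    for q in queries:
        counts[q] += 1
    index = defaultdict(list)
    for q, n in counts.items():
        for i in range(1, len(q) + 1):
            index[q[:i]].append((n, q))
    for prefix in index:
        # Most frequent first; ties broken alphabetically.
        index[prefix].sort(key=lambda t: (-t[0], t[1]))
    return index

def suggest(index, prefix, k=3):
    """Answering is now a cheap lookup, not a fresh computation."""
    return [q for _, q in index[prefix][:k]]

queries = ["weather today", "weather tomorrow", "weekend plans", "weather today"]
idx = build_index(queries)
print(suggest(idx, "wea"))   # "weather today" ranks first (seen twice)
print(suggest(idx, "week"))
```

The point of the sketch is the division of labor: the expensive recombination happens continuously in the background, so the moment of decision costs almost nothing, which is the parallel the article draws with emotion-guided choices.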
Your brain engages in the same sort of prospection to provide its own instant answers, which come in the form of emotions. The main purpose of emotions is to guide future behavior and moral judgments, according to researchers in a new field called prospective psychology. Emotions enable you to empathize with others by predicting their reactions. Once you imagine how both you and your colleague will feel if you turn down his invitation, you intuitively know you’d better reply, “Sure, thanks.”
If Homo prospectus takes the really long view, does he become morbid? That was a longstanding assumption in psychologists’ “terror management theory,” which held that humans avoid thinking about the future because they fear death. The theory was explored in hundreds of experiments assigning people to think about their own deaths. One common response was to become more assertive about one’s cultural values, like becoming more patriotic.
But there’s precious little evidence that people actually spend much time outside the lab thinking about their deaths or managing their terror of mortality. It’s certainly not what psychologists found in the study tracking Chicagoans’ daily thoughts. Less than 1 percent of their thoughts involved death, and even those were typically about other people’s deaths.
Homo prospectus is too pragmatic to obsess over death for the same reason that he doesn’t dwell on the past: There’s nothing he can do about it. He became Homo sapiens by learning to see and shape his future, and he is wise enough to keep looking straight ahead.