The Use of Immediate Annuities in Medicaid Planning for Married Couples

Immediate annuities can be a useful tool to protect the spouse of a nursing home resident who applies for Medicaid. These types of annuities allow the nursing home resident to spend down assets and give the spouse a guaranteed income. But immediate annuities may not work in every state, so be sure to check with your attorney.

Medicaid is the primary source of payment for long-term care services in the United States. To qualify for Medicaid, a nursing home resident must become impoverished under Medicaid’s complicated asset rules. In most states, this means the applicant can have only $2,000 in “countable” assets. Virtually everything is countable except for the home (with some limitations) and personal belongings. The spouse of a nursing home resident, called the “community spouse,” may keep one half of the couple’s joint countable assets, up to a maximum of $120,900 (in 2017). The least that a state may allow a community spouse to retain is $24,180 (in 2017). While a nursing home resident must pay his or her excess income to the nursing home, there is no limit on the amount of income the community spouse may have.

An immediate annuity is a contract with an insurance company under which the annuitant pays the insurance company a sum of money in exchange for a stream of income. This income stream may be payable for life or for a specific number of years, or a combination of both — i.e., for life with a certain number of years of payment guaranteed. In the Medicaid planning context, most annuities are for a specific number of years.

The spouse of a nursing home resident may spend down his or her excess assets by using them to purchase an immediate annuity. But if Medicaid applicants or their spouses transfer assets within five years of applying for Medicaid, the applicants may be subject to a period of ineligibility, also called a transfer penalty. To avoid a transfer penalty, the annuity must meet the following criteria:

  • The annuity must pay back the entire investment. When interest rates were higher, it was possible to purchase annuities for as short as two years, but now short annuities usually don’t pay back the full purchase price.
  • The payment period must be shorter than the owner’s actuarial life expectancy. For instance, if the spouse’s life expectancy is only four years, the purchase of an annuity with a five-year payback period would be deemed a transfer of assets.
  • The annuity must be irrevocable and nontransferrable, meaning that the owner may not have the option of cashing it out and selling it to a third party.
  • The annuity must name the state as the remainder beneficiary, up to the amount of Medicaid benefits paid, if the annuitant dies before all the payments have been made.

Here’s an example of how an immediate annuity might work: John and Jane live in a state that allows the community spouse to keep $120,900 of the couple’s assets. If John moves to a nursing home and John and Jane have $320,000 in countable assets (savings, investments, and retirement accounts), Jane can take $200,000 in excess assets and purchase an immediate annuity for her own benefit. After reducing their countable assets to $120,000, John will be eligible for Medicaid. If the annuity pays her $3,500 a month for five years, by the end of that time, Jane will have received back her investment plus $10,000 of income. If she accumulates these funds, at the end of five years she will be right back where she started before John needed nursing home care.
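
For readers who want to check the arithmetic, here is a minimal sketch of the spend-down math in Python. The dollar figures mirror the example above and the 2017 limits quoted earlier; the function name and structure are illustrative only, and none of this is legal advice.

```python
# Toy check of the example above -- illustrative only, not legal advice.

def community_spouse_allowance(joint_assets, floor=24_180, cap=120_900):
    """Half of the couple's countable assets, within the 2017 floor/cap."""
    return min(max(joint_assets / 2, floor), cap)

joint_assets = 320_000            # John and Jane's countable assets
allowance = community_spouse_allowance(joint_assets)  # capped at $120,900

annuity_premium = 200_000         # excess assets Jane converts to income
monthly_payment = 3_500
months = 5 * 12                   # five-year term

total_payout = monthly_payment * months
print(f"Jane may retain:        ${allowance:,.0f}")
print(f"Countable assets left:  ${joint_assets - annuity_premium:,.0f}")
print(f"Five-year payout:       ${total_payout:,.0f}")
print(f"Payout above premium:   ${total_payout - annuity_premium:,.0f}")
```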

Given this planning opportunity, many spouses of nursing home residents use immediate annuities to preserve their own financial security. But it’s not a slam-dunk for a number of reasons, including:

  • Some states either do not allow spousal annuities or put additional restrictions on them.
  • Other planning options may be preferable, such as spending down assets in a way that preserves them, transferring assets to exempt beneficiaries or into trust for their benefit, seeking an increased resource allowance, purchasing non-countable assets, using spousal refusal, or bringing the nursing home spouse home and qualifying for community Medicaid.
  • The purchase of an annuity might require the liquidation of IRAs owned by the nursing home spouse, causing a large tax liability.
  • The non-nursing home spouse may be ill herself, meaning that she may need nursing home care soon, in which case the annuity payments would simply go to her nursing home.
  • The savings may be small due to a high income or the short life expectancy of the nursing home spouse, and the process of liquidating assets and applying for Medicaid might not be worth the considerable trouble.

In short, the use of this powerful planning strategy depends on each couple’s particular circumstances and should be undertaken only after consultation with a qualified elder law attorney. In addition, those who do purchase immediate annuities need to shop around to make sure they are purchasing them from reliable companies paying the best return.

Finally, couples need to beware of deferred annuities. Some brokers will attempt to sell deferred annuities for Medicaid planning purposes, but these can cause problems. While a deferred annuity can be “annuitized” (meaning it can be turned into an immediate annuity), if the nursing home resident owns the annuity, the income stream will be payable to the nursing home instead of to the healthy spouse. Often, the annuity will charge a penalty for early withdrawal, so it is difficult to transfer it to the healthy spouse. In short, while immediate annuities can be great tools for Medicaid planning, deferred annuities should be avoided by anyone contemplating the need for care in the near future.

Courtesy of:

Chayet & Danzo, LLC
650 S. Cherry St. | Suite 710 | Denver, CO 80246
Phone: (303) 355-8500

THE FIRST OCCUPANT OF THE WHITE HOUSE

It was June 3, 1800. President John Adams arrived in Washington, D.C., for the first time. The capital city, which had been chosen by George Washington as the seat of government for the United States, was still under construction. There were no schools or churches, only a few stores and hotels, and some shacks for the workers who were building the White House and the Capitol. The area was swampy and full of mosquitoes, and the ground covered with tree stumps and rubble. Adams might have been depressed by the dismal site of the country’s new capital, but he wrote to his wife, Abigail, “I like the seat of government very well.”

It was several months before Adams was able to live in the White House, then known as the President’s House. On the day he moved in, he entered the house with just a few of his staff. There was no honor guard or entourage or any kind of ceremony. The house was still unfinished, still smelling of wet paint and wet plaster. The furniture had been shipped down from Philadelphia, but it didn’t quite fit the enormous rooms of the new house. The only painting that had been hung on the wall was a portrait of George Washington in a black velvet suit.

It had been a difficult period in Adams’s life. He’d had a hard time filling the shoes of George Washington as president. He’d been struggling with debts ever since his election, as the presidential salary was rather meager. He’d barely prevented a war with France. He’d been plagued with political infighting among his cabinet, and in the upcoming presidential election, it looked like he might lose to Thomas Jefferson.

So Adams might have been thinking about all his troubles when he went to bed that night as the first president to sleep in the White House. He had left Abigail in Philadelphia, so he had to sleep alone. The following morning, he sat down at his desk, and in a letter to his wife he wrote: “I pray heaven to bestow the best of blessings on this house and on all that shall hereafter inhabit it. May none but honest and wise men ever rule under this roof.”

Adams only lived in the White House for a few more months, since he lost the election to Jefferson that year. But about 150 years later, Franklin Roosevelt had the words from Adams’s letter to Abigail carved into the mantel in the State Dining Room.

SOURCE:  THE WRITER’S ALMANAC, JUNE 3, 2017

Explorers Find Passage to Earth’s Dark Age

Geochemical signals from deep inside Earth are beginning to shed light on the planet’s first 50 million years, a formative period long viewed as inaccessible to science.

Earth scientists hope that their growing knowledge of the planet’s early history will shed light on poorly understood features seen today, from continents to geysers.

Eric King

In August, the geologist Matt Jackson left California with his wife and 4-year-old daughter for the fjords of northwest Iceland, where they camped as he roamed the outcrops and scree slopes by day in search of little olive-green stones called olivine.

A sunny young professor at the University of California, Santa Barbara, with a uniform of pearl-snap shirts and well-utilized cargo shorts, Jackson knew all the best hunting grounds, having first explored the Icelandic fjords two years ago. Following sketchy field notes handed down by earlier geologists, he covered 10 or 15 miles a day, past countless sheep and the occasional farmer. “Their whole lives they’ve lived in these beautiful fjords,” he said. “They look up to these black, layered rocks, and I tell them that each one of those is a different volcanic eruption with a lava flow. It blows their minds!” He laughed. “It blows my mind even more that they never realized it!”

The olivine erupted to Earth’s surface in those very lava flows between 10 and 17 million years ago. Jackson, like many geologists, believes that the source of the eruptions was the Iceland plume, a hypothetical upwelling of solid rock that may rise, like the globules in a lava lamp, from deep inside Earth. The plume, if it exists, would now underlie the active volcanoes of central Iceland. In the past, it would have surfaced here at the fjords, back in the days when here was there — before the puzzle-piece of Earth’s crust upon which Iceland lies scraped to the northwest.

Other modern findings about olivine from the region suggest that it might derive from an ancient reservoir of minerals at the base of the Iceland plume that, over billions of years, never mixed with the rest of Earth’s interior. Jackson hoped the samples he collected would carry a chemical message from the reservoir and prove that it formed during the planet’s infancy — a period that until recently was inaccessible to science.

After returning to California, he sent his samples to Richard Walker to ferret out that message. Walker, a geochemist at the University of Maryland, is processing the olivine to determine the concentration of the chemical isotope tungsten-182 in the rock relative to the more common isotope, tungsten-184. If Jackson is right, his samples will join a growing collection of rocks from around the world whose abnormal tungsten isotope ratios have completely surprised scientists. These tungsten anomalies reflect processes that could only have occurred within the first 50 million years of the solar system’s history, a formative period long assumed to have been wiped from the geochemical record by cataclysmic collisions that melted Earth and blended its contents.

The anomalies “are giving us information about some of the earliest Earth processes,” Walker said. “It’s an alternative universe from what geochemists have been working with for the past 50 years.”

Matt Jackson and his family with a local farmer in northwest Iceland.

Courtesy of Matt Jackson

The discoveries are sending geologists like Jackson into the field in search of more clues to Earth’s formation — and how the planet works today. Modern Earth, like early Earth, remains poorly understood, with unanswered questions ranging from how volcanoes work and whether plumes really exist to where oceans and continents came from, and what the nature and origin might be of the enormous structures, colloquially known as “blobs,” that seismologists detect deep down near Earth’s core. All aspects of the planet’s form and function are interconnected. They’re also entangled with the rest of the solar system. Any attempt, for instance, to explain why tectonic plates cover Earth’s surface like a jigsaw puzzle must account for the fact that no other planet in the solar system has plates. To understand Earth, scientists must figure out how, in the context of the solar system, it became uniquely earthlike. And that means probing the mystery of the first tens of millions of years.

“You can think about this as an initial-conditions problem,” said Michael Manga, a geophysicist at the University of California, Berkeley, who studies geysers and volcanoes. “The Earth we see today evolved from something. And there’s lots of uncertainty about what that initial something was.”

Pieces of the Puzzle

On one of an unbroken string of 75-degree days in Santa Barbara the week before Jackson left for Iceland, he led a group of earth scientists on a two-mile beach hike to see some tar dikes — places where the sticky black material has oozed out of the cliff face at the back of the beach, forming flabby, voluptuous folds of faux rock that you can dent with a finger. The scientists pressed on the tar’s wrinkles and slammed rocks against it, speculating about its subterranean origin and the ballpark range of its viscosity. When this reporter picked up a small tar boulder to feel how light it was, two or three people nodded approvingly.

A mix of geophysicists, geologists, mineralogists, geochemists and seismologists, the group was in Santa Barbara for the annual Cooperative Institute for Dynamic Earth Research (CIDER) workshop at the Kavli Institute for Theoretical Physics. Each summer, a rotating cast of representatives from these fields meet for several weeks at CIDER to share their latest results and cross-pollinate ideas — a necessity when the goal is understanding a system as complex as Earth.

Earth’s complexity, how special it is, and, above all, the black box of its initial conditions have meant that, even as cosmologists map the universe and astronomers scan the galaxy for Earth 2.0, progress in understanding our home planet has been surprisingly slow. As we trudged from one tar dike to another, Jackson pointed out the exposed sedimentary rock layers in the cliff face — some of them horizontal, others buckled and sloped. Amazingly, he said, it took until the 1960s for scientists to even agree that sloped sediment layers are buckled, rather than having piled up on an angle. Only then was consensus reached on a mechanism to explain the buckling and the ruggedness of Earth’s surface in general: the theory of plate tectonics.

Projecting her voice over the wind and waves, Carolina Lithgow-Bertelloni, a geophysicist from University College London who studies tectonic plates, credited the German meteorologist Alfred Wegener for first floating the notion of continental drift in 1912 to explain why Earth’s landmasses resemble the dispersed pieces of a puzzle. “But he didn’t have a mechanism — well, he did, but it was crazy,” she said.

Earth scientists on a beach hike in Santa Barbara County, California.

Natalie Wolchover/Quanta Magazine

A few years later, she continued, the British geologist Sir Arthur Holmes convincingly argued that Earth’s solid-rock mantle flows fluidly on geological timescales, driven by heat radiating from Earth’s core; he speculated that this mantle flow in turn drives surface motion. More clues came during World War II. Seafloor magnetism, mapped for the purpose of hiding submarines, suggested that new crust forms at the mid-ocean ridge — the underwater mountain range that lines the world ocean like a seam — and spreads in both directions to the shores of the continents. There, at “subduction zones,” the oceanic plates slide stiffly beneath the continental plates, triggering earthquakes and carrying water downward, where it melts pockets of the mantle. This melting produces magma that rises to the surface in little-understood fits and starts, causing volcanic eruptions. (Volcanoes also exist far from any plate boundaries, such as in Hawaii and Iceland. Scientists currently explain this by invoking the existence of plumes, which researchers like Walker and Jackson are starting to verify and map using isotope studies.)

The physical description of the plates finally came together in the late 1960s, Lithgow-Bertelloni said, when the British geophysicist Dan McKenzie and the American Jason Morgan separately proposed a quantitative framework for modeling plate tectonics on a sphere.

Other than their existence, almost everything about the plates remains in contention. For instance, what drives their lateral motion? Where do subducted plates end up — perhaps these are the blobs? — and how do they affect Earth’s interior dynamics? Why did Earth’s crust shatter into plates in the first place when no other planetary surface in the solar system did? Also completely mysterious is the two-tier architecture of oceanic and continental plates, and how oceans and continents came to ride on them — all possible prerequisites for intelligent life. Knowing more about how Earth became earthlike could help us understand how common earthlike planets are in the universe and thus how likely life is to arise.

The continents probably formed, Lithgow-Bertelloni said, as part of the early process by which gravity organized Earth’s contents into concentric layers: Iron and other metals sank to the center, forming the core, while rocky silicates stayed in the mantle. Meanwhile, low-density materials buoyed upward, forming a crust on the surface of the mantle like soup scum. Perhaps this scum accumulated in some places to form continents, while elsewhere oceans materialized.

Figuring out precisely what happened and the sequence of all of these steps is “more difficult,” Lithgow-Bertelloni said, because they predate the rock record and are “part of the melting process that happens early on in Earth’s history — very early on.”

Until recently, scientists knew of no geochemical traces from so long ago, and they thought they might never crack open the black box from which Earth’s most glorious features emerged. But the subtle anomalies in tungsten and other isotope concentrations are now providing the first glimpses of the planet’s formation and differentiation. These chemical tracers promise to yield a combination timeline-and-map of early Earth, revealing where its features came from, why, and when.

A Sketchy Timeline

Humankind’s understanding of early Earth took its first giant leap when Apollo astronauts brought back rocks from the moon: our tectonic-less companion whose origin was, at the time, a complete mystery.

The rocks “looked gray, very much like terrestrial rocks,” said Fouad Tera, who analyzed lunar samples at the California Institute of Technology between 1969 and 1976. But because they were from the moon, he said, they created “a feeling of euphoria” in their handlers. Some interesting features did eventually show up: “We found glass spherules — colorful, beautiful — under the microscope, green and yellow and orange and everything,” recalled Tera, now 85. The spherules probably came from fountains that gushed from volcanic vents when the moon was young. But for the most part, he said, “the moon is not really made out of a pleasing thing — just regular things.”

In hindsight, this is not surprising: Chemical analysis at Caltech and other labs indicated that the moon formed from Earth material, which appears to have gotten knocked into orbit when the 60 to 100 million-year-old proto-Earth collided with another protoplanet in the crowded inner solar system. This “giant impact” hypothesis of the moon’s formation, though still hotly debated in its particulars, established a key step on the timeline of the Earth, moon and sun that has helped other steps fall into place.

Panorama of the Taurus-Littrow valley created from photographs by Apollo 17 astronaut Eugene Cernan. Fellow astronaut Harrison Schmitt is shown using a rake to collect samples.

NASA

Chemical analysis of meteorites is helping scientists outline even earlier stages of our solar system’s timeline, including the moment it all began.

First, 4.57 billion years ago, a nearby star went supernova, spewing matter and a shock wave into space. The matter included radioactive elements that immediately began decaying, starting the clocks that isotope chemists now measure with great precision. As the shock wave swept through our cosmic neighborhood, it corralled the local cloud of gas and dust like a broom; the increase in density caused the cloud to gravitationally collapse, forming a brand-new star — our sun — surrounded by a placenta of hot debris.

Over the next tens of millions of years, the rubble field surrounding the sun clumped into bigger and bigger space rocks, then accreted into planet parts called “planetesimals,” which merged into protoplanets, which became Mercury, Venus, Earth and Mars — the four rocky planets of the inner solar system today. Farther out, in colder climes, gas and ice accreted into the giant planets.

As the infant Earth navigated the crowded inner solar system, it would have experienced frequent, white-hot collisions, which were long assumed to have melted the entire planet into a global “magma ocean.” During these melts, gravity differentiated Earth’s liquefied contents into layers — core, mantle and crust. It’s thought that each of the global melts would have destroyed existing rocks, blending their contents and removing any signs of geochemical differences left over from Earth’s initial building blocks.

The last of the Earth-melting “giant impacts” appears to have been the one that formed the moon; while subtracting the moon’s mass, the impactor was also the last major addition to Earth’s mass. Perhaps, then, this point on the timeline — at least 60 million years after the birth of the solar system and, counting backward from the present, at most 4.51 billion years ago — was when the geochemical record of the planet’s past was allowed to begin. “It’s at least a compelling idea to think that this giant impact that disrupted a lot of the Earth is the starting time for geochronology,” said Rick Carlson, a geochemist at the Carnegie Institution of Washington. In those first 60 million years, “the Earth may have been here, but we don’t have any record of it because it was just erased.”

Another discovery from the moon rocks came in 1974. Tera, along with his colleague Dimitri Papanastassiou and their boss, Gerry Wasserburg, a towering figure in isotope cosmochemistry who died in June, combined many isotope analyses of rocks from different Apollo missions on a single plot, revealing a straight line called an “isochron” that corresponds to time. “When we plotted our data along with everybody else’s, there was a distinct trend that shows you that around 3.9 billion years ago, something massive imprinted on all the rocks on the moon,” Tera said.

Wasserburg dubbed the event the “lunar cataclysm.” Now more often called the “late heavy bombardment,” it was a torrent of asteroids and comets that seems to have battered the moon 3.9 billion years ago, a full 600 million years after its formation, melting and chemically resetting the rocks on its surface. The late heavy bombardment surely would have rained down even more heavily on Earth, considering the planet’s greater size and gravitational pull. Having discovered such a momentous event in solar system history, Wasserburg left his younger, more reserved colleagues behind and “celebrated in Pasadena in some bar,” Tera said.

As of 1974, no rocks had been found on Earth from the time of the late heavy bombardment. In fact, Earth’s oldest rocks appeared to top out at 3.8 billion years. “That number jumps out at you,” said Bill Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado. It suggests, Bottke said, that the late heavy bombardment might have melted whatever planetary crust existed 3.9 billion years ago, once again destroying the existing geologic record, after which the new crust took 100 million years to harden.

In 2005, a group of researchers working in Nice, France, conceived of a mechanism to explain the late heavy bombardment — and several other mysteries about the solar system, including the curious configurations of Jupiter, Saturn, Uranus and Neptune, and the sparseness of the asteroid and Kuiper belts. Their “Nice model” posits that the gas and ice giants suddenly destabilized in their orbits sometime after formation, causing them to migrate. Simulations by Bottke and others indicate that the planets’ migrations would have sent asteroids and comets scattering, initiating something very much like the late heavy bombardment. Comets that were slung inward from the Kuiper belt during this shake-up might even have delivered water to Earth’s surface, explaining the presence of its oceans.

With this convergence of ideas, the late heavy bombardment became widely accepted as a major step on the timeline of the early solar system. But it was bad news for earth scientists, suggesting that Earth’s geochemical record began not at the beginning, 4.57 billion years ago, or even at the moon’s beginning, 4.51 billion years ago, but 3.8 billion years ago, and that most or all clues about earlier times were forever lost.

Extending the Rock Record

More recently, the late heavy bombardment theory and many other long-standing assumptions about the early history of Earth and the solar system have come into question, and Earth’s dark age has started to come into the light. According to Carlson, “the evidence for this 3.9 [billion-years-ago] event is getting less clear with time.” For instance, when meteorites are analyzed for signs of shock, “they show a lot of impact events at 4.2, 4.4 billion,” he said. “This 3.9 billion event doesn’t show up really strong in the meteorite record.” He and other skeptics of the late heavy bombardment argue that the Apollo samples might have been biased. All the missions landed on the near side of the moon, many in close proximity to the Imbrium basin (the moon’s biggest shadow, as seen from Earth), which formed from a collision 3.9 billion years ago. Perhaps all the Apollo rocks were affected by that one event, which might have dispersed the melt from the impact over a broad swath of the lunar surface. This would suggest a cataclysm that never occurred.

Furthermore, the oldest known crust on Earth is no longer 3.8 billion years old. Rocks have been found in two parts of Canada dating to 4 billion and an alleged 4.28 billion years ago, refuting the idea that the late heavy bombardment fully melted Earth’s mantle and crust 3.9 billion years ago. At least some earlier crust survived.

In 2008, Carlson and collaborators reported the evidence of 4.28 billion-year-old rocks in the Nuvvuagittuq greenstone belt in Canada. When Tim Elliott, a geochemist at the University of Bristol, read about the Nuvvuagittuq findings, he was intrigued to see that Carlson had used a dating method also used in earlier work by French researchers that relied on a short-lived radioactive isotope system called samarium-neodymium. Elliott decided to look for traces of an even shorter-lived system — hafnium-tungsten — in ancient rocks, which would point back to even earlier times in Earth’s history.

The dating method works as follows: Hafnium-182, the “parent” isotope, has a 50 percent chance of decaying into tungsten-182, its “daughter,” every 9 million years (this is the parent’s “half-life”). The halving quickly reduces the parent to almost nothing; by 50 million years after the supernova that sparked the sun, virtually all the hafnium-182 would have become tungsten-182.
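
To see why this clock effectively stops ticking after about 50 million years, it helps to run the half-life arithmetic. A minimal sketch in Python, using only the 9-million-year half-life quoted above:

```python
# Fraction of hafnium-182 surviving after t million years, given the
# 9-million-year half-life described above (illustrative arithmetic only).
HALF_LIFE_MYR = 9.0

def fraction_remaining(t_myr):
    return 0.5 ** (t_myr / HALF_LIFE_MYR)

for t in (9, 18, 27, 50):
    print(f"after {t:>2} Myr: {fraction_remaining(t):6.1%} of Hf-182 remains")
# After 50 Myr, only about 2% is left -- essentially all of it has decayed
# to tungsten-182, so tungsten anomalies can only record earlier processes.
```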

That’s why the tungsten isotope ratio in rocks like Matt Jackson’s olivine samples can be so revealing: Any variation in the concentration of the daughter isotope, tungsten-182, measured relative to tungsten-184 must reflect processes that affected the parent, hafnium-182, when it was around — processes that occurred during the first 50 million years of solar system history. Elliott knew that this kind of geochemical information was previously believed to have been destroyed by early Earth melts and billions of years of subsequent mantle convection. But what if it wasn’t?

Elliott contacted Stephen Moorbath, then an emeritus professor of geology at the University of Oxford and “one of the grandfather figures in finding the oldest rocks,” Elliott said. Moorbath “was keen, so I took the train up.” Moorbath led Elliott down to the basement of Oxford’s earth science building, where, as in many such buildings, a large collection of rocks shares the space with the boiler and stacks of chairs. Moorbath dug out specimens from the Isua complex in Greenland, an ancient bit of crust that he had pegged, in the 1970s, at 3.8 billion years old.

Elliott and his student Matthias Willbold powdered and processed the Isua samples and used painstaking chemical methods to extract the tungsten. They then measured the tungsten isotope ratio using state-of-the-art mass spectrometers. In a 2011 Nature paper, Elliott, Willbold and Moorbath, who died in October, reported that the 3.8 billion-year-old Isua rocks contained 15 parts per million more tungsten-182 than the world average — the first ever detection of a “positive” tungsten anomaly on the face of the Earth.

The paper scooped Richard Walker of Maryland and his colleagues, who months later reported a positive tungsten anomaly in 2.8 billion-year-old komatiites from Kostomuksha, Russia.

Although the Isua and Kostomuksha rocks formed on Earth’s surface long after the extinction of hafnium-182, they apparently derive from materials with much older chemical signatures. Walker and colleagues argue that the Kostomuksha rocks must have drawn from hafnium-rich “primordial reservoirs” in the interior that failed to homogenize during Earth’s early mantle melts. The preservation of these reservoirs, which must trace to the first 50 million years and must somehow have survived even the moon-forming impact, “indicates that the mantle may have never been well mixed,” Walker and his co-authors wrote. That raises the possibility of finding many more remnants of Earth’s early history.

The 60 million-year-old flood basalts of Baffin Bay, Greenland, sampled by the geochemist Hanika Rizo (center) and colleagues, contain isotope traces that originated more than 4.5 billion years ago.

Don Francis (left); courtesy of Hanika Rizo (center and right).

The researchers say they will be able to use tungsten anomalies and other isotope signatures in surface material as tracers of the ancient interior, extrapolating downward and backward into the past to map proto-Earth and reveal how its features took shape. “You’ve got the precision to look and actually see the sequence of events occurring during planetary formation and differentiation,” Carlson said. “You’ve got the ability to interrogate the first tens of millions of years of Earth’s history, unambiguously.”

Anomalies have continued to show up in rocks of various ages and provenances. In May, Hanika Rizo of the University of Quebec in Montreal, along with Walker, Jackson and collaborators, reported in Science the first positive tungsten anomaly in modern rocks — 62 million-year-old samples from Baffin Bay, Greenland. Rizo hypothesizes that these rocks were brought up by a plume that draws from one of the “blobs” deep down near Earth’s core. If the blobs are indeed rich in tungsten-182, then they are not tectonic-plate graveyards as many geophysicists suspect, but instead date to the planet’s infancy. Rizo speculates that they are chunks of the planetesimals that collided to form Earth, and that the chunks somehow stayed intact in the process. “If you have many collisions,” she said, “then you have the potential to create this patchy mantle.” Early Earth’s interior, in that case, looked nothing like the primordial magma ocean pictured in textbooks.

More evidence for the patchiness of the interior has surfaced. At the American Geophysical Union meeting earlier this month, Walker’s group reported a negative tungsten anomaly — that is, a deficit of tungsten-182 relative to tungsten-184 — in basalts from Hawaii and Samoa. This and other isotope concentrations in the rocks suggest the hypothetical plumes that produced them might draw from a primordial pocket of metals, including tungsten-184. Perhaps these metals failed to get sucked into the core during planet differentiation.

Tim Elliott collecting samples of ancient crust rock in Yilgarn Craton in Western Australia.

Tony Kemp

Meanwhile, Elliott explains the positive tungsten anomalies in ancient crust rocks like his 3.8 billion-year-old Isua samples by hypothesizing that these rocks might have hardened on the surface before the final half-percent of Earth’s mass — delivered to the planet in a long tail of minor impacts — mixed into them. These late impacts, known as the “late veneer,” would have added metals like gold, platinum and tungsten (mostly tungsten-184) to Earth’s mantle, reducing the relative concentration of tungsten-182. Rocks that got to the surface early might therefore have ended up with positive tungsten anomalies.

Other evidence complicates this hypothesis, however — namely, the concentrations of gold and platinum in the Isua rocks match world averages, suggesting at least some late veneer material did mix into them. So far, there’s no coherent framework that accounts for all the data. But this is the “discovery phase,” Carlson said, rather than a time for grand conclusions. As geochemists gradually map the plumes and primordial reservoirs throughout Earth from core to crust, hypotheses will be tested and a narrative about Earth’s formation will gradually crystallize.

Elliott is working to test his late-veneer hypothesis. Temporarily trading his mass spectrometer for a sledgehammer, he collected a series of crust rocks in Australia that range from 3 billion to 3.75 billion years old. By tracking the tungsten isotope ratio through the ages, he hopes to pinpoint the time when the mantle that produced the crust became fully mixed with late-veneer material.

“These things never work out that simply,” Elliott said. “But you always start out with the simplest idea and see how it goes.”

This article was reprinted on TheAtlantic.com.

A Debate Over the Physics of Time

According to our best theories of physics, the universe is a fixed block where time only appears to pass. Yet a number of physicists hope to replace this “block universe” with a physical theory of time.

The physicist Tim Koslowski listens to the discussion at the Time in Cosmology conference.

Philip Cheung for Quanta Magazine

Einstein once described his friend Michele Besso as “the best sounding board in Europe” for scientific ideas. They attended university together in Zurich; later they were colleagues at the patent office in Bern. When Besso died in the spring of 1955, Einstein — knowing that his own time was also running out — wrote a now-famous letter to Besso’s family. “Now he has departed this strange world a little ahead of me,” Einstein wrote of his friend’s passing. “That signifies nothing. For us believing physicists, the distinction between past, present and future is only a stubbornly persistent illusion.”

Many physicists have made peace with the idea of a block universe, arguing that the task of the physicist is to describe how the universe appears from the point of view of individual observers. To understand the distinction between past, present and future, you have to “plunge into this block universe and ask: ‘How is an observer perceiving time?’” said Andreas Albrecht, a physicist at the University of California, Davis, and one of the founders of the theory of cosmic inflation.

Others vehemently disagree, arguing that the task of physics is to explain not just how time appears to pass, but why. For them, the universe is not static. The passage of time is physical. “I’m sick and tired of this block universe,” said Avshalom Elitzur, a physicist and philosopher formerly of Bar-Ilan University. “I don’t think that next Thursday has the same footing as this Thursday. The future does not exist. It does not! Ontologically, it’s not there.”

Last month, about 60 physicists, along with a handful of philosophers and researchers from other branches of science, gathered at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, to debate this question at the Time in Cosmology conference. The conference was co-organized by the physicist Lee Smolin, an outspoken critic of the block-universe idea (among other topics). His position is spelled out for a lay audience in Time Reborn and in a more technical work, The Singular Universe and the Reality of Time, co-authored with the philosopher Roberto Mangabeira Unger, who was also a co-organizer of the conference. In the latter work, mirroring Elitzur’s sentiments about the future’s lack of concreteness, Smolin wrote: “The future is not now real and there can be no definite facts of the matter about the future.” What is real is “the process by which future events are generated out of present events,” he said at the conference.

Those in attendance wrestled with several questions: the distinction between past, present and future; why time appears to move in only one direction; and whether time is fundamental or emergent. Most of those issues, not surprisingly, remained unresolved. But for four days, participants listened attentively to the latest proposals for tackling these questions — and, especially, to the ways in which we might reconcile our perception of time’s passage with a static, seemingly timeless universe.

Time Swept Under the Rug

There are a few things that everyone agrees on. The directionality that we observe in the macroscopic world is very real: Teacups shatter but do not spontaneously reassemble; eggs can be scrambled but not unscrambled. Entropy — a measure of the disorder in a system — always increases, a fact encoded in the second law of thermodynamics. As the Austrian physicist Ludwig Boltzmann understood in the 19th century, the second law explains why events are more likely to evolve in one direction rather than another. It accounts for the arrow of time.

But things get trickier when we step back and ask why we happen to live in a universe where such a law holds. “What Boltzmann truly explained is why the entropy of the universe will be larger tomorrow than it is today,” said Sean Carroll, a physicist at the California Institute of Technology, as we sat in a hotel bar after the second day of presentations. “But if that was all you knew, you’d also say that the entropy of the universe was probably larger yesterday than today — because all the underlying dynamics are completely symmetric with respect to time.” That is, if entropy is ultimately based on the underlying laws of the universe, and those laws are the same going forward and backward, then entropy is just as likely to increase going backward in time. But no one believes that entropy actually works that way. Scrambled eggs always come after whole eggs, never the other way around.

To make sense of this, physicists have proposed that the universe began in a very special low-entropy state. In this view, which the Columbia University philosopher of physics David Albert named the “past hypothesis,” entropy increases because the Big Bang happened to produce an exceptionally low-entropy universe. There was nowhere to go but up. The past hypothesis implies that every time we cook an egg, we’re taking advantage of events that happened nearly 14 billion years ago. “What you need the Big Bang to explain is: ‘Why were there ever unbroken eggs?’” Carroll said.
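
The shape of this argument can be captured with a standard toy model: the Ehrenfest urn, a textbook stand-in for entropy (my illustration, not something from the conference). The hopping rule below is completely time-symmetric, yet a lopsided start relaxes toward balance and essentially never returns:

```python
import random

# Ehrenfest urn: N particles in two boxes; each step, one particle chosen
# uniformly at random switches boxes. The rule is time-symmetric, but a
# low-entropy start (everything in one box) almost surely decays to 50/50.
N, STEPS = 100, 500
left = N                             # low-entropy initial condition

random.seed(0)
for step in range(1, STEPS + 1):
    if random.randrange(N) < left:   # the chosen particle is in the left box
        left -= 1
    else:
        left += 1
    if step % 100 == 0:
        print(f"step {step}: {left} of {N} particles in the left box")
# Nothing in the rule prefers a time direction; the arrow comes entirely
# from the special initial condition -- the role the past hypothesis plays.
```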

Some physicists are more troubled than others by the past hypothesis. Taking things we don’t understand about the physics of today’s universe and saying the answer can be found in the Big Bang could be seen, perhaps, as passing the buck — or as sweeping our problems under the carpet. Every time we invoke initial conditions, “the pile of things under the rug gets bigger,” said Marina Cortes, a cosmologist at the Royal Observatory in Edinburgh and a co-organizer of the conference.

To Smolin, the past hypothesis feels more like an admission of failure than a useful step forward. As he puts it in The Singular Universe: “The fact to be explained is why the universe, even 13.8 billion years after the Big Bang, has not reached equilibrium, which is by definition the most probable state, and it hardly suffices to explain this by asserting that the universe started in an even less probable state than the present one.”

Other physicists, however, point out that it’s normal to develop theories that can describe a system given certain initial conditions. A theory needn’t strive to explain those conditions.

Another set of physicists thinks that the past hypothesis, while better than nothing, is more likely to be a placeholder than a final answer. Perhaps, if we’re lucky, it will point the way to something deeper. “Many people say that the past hypothesis is just a fact, and there isn’t any underlying way to explain it. I don’t rule out that possibility,” Carroll said. “To me, the past hypothesis is a clue to help us develop a more comprehensive view of the universe.”

The Alternative Origins of Time

Can the arrow of time be understood without invoking the past hypothesis? Some physicists argue that gravity — not thermodynamics — aims time’s arrow. In this view, gravity causes matter to clump together, defining an arrow of time that aligns itself with growth of complexity, said Tim Koslowski, a physicist at the National Autonomous University of Mexico (he described the idea in a 2014 paper co-authored by the British physicist Julian Barbour and Flavio Mercati, a physicist at Perimeter). Koslowski and his colleagues developed simple models of universes made up of 1,000 pointlike particles, subject only to Newton’s law of gravitation, and found that there will always be a moment of maximum density and minimum complexity. As one moves away from that point, in either direction, complexity increases. Naturally, we — complex creatures capable of making observations — can only evolve at some distance from the minimum. Still, wherever we happen to find ourselves in the history of the universe, we can point to an era of less complexity and call it the past, Koslowski said. The models are globally time-symmetric, but every observer will experience a local arrow of time. It’s significant that the low-entropy starting point isn’t an add-on to the model. Rather, it emerges naturally from it. “Gravity essentially eliminates the need for a past hypothesis,” Koslowski said.
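
To make the notion of “complexity” here less abstract, the sketch below computes one scale-invariant complexity measure used in this line of research: the ratio of the root-mean-square particle separation to the mean harmonic separation. Equal masses are assumed, and the configurations and numbers are purely illustrative:

```python
import numpy as np

# Toy scale-invariant "complexity" for an N-body configuration: the ratio
# of root-mean-square separation to mean harmonic separation (equal masses).
# Clumpy configurations score higher than diffuse ones.
rng = np.random.default_rng(1)

def complexity(positions):
    diffs = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diffs, axis=-1)
    pairs = r[np.triu_indices(len(positions), k=1)]  # all pairwise distances
    l_rms = np.sqrt(np.mean(pairs ** 2))
    l_mhl = 1.0 / np.mean(1.0 / pairs)
    return l_rms / l_mhl

uniform = rng.uniform(-1, 1, size=(1000, 3))         # diffuse cloud
clumps = np.concatenate([rng.normal(c, 0.01, size=(500, 3))
                         for c in ((-1, 0, 0), (1, 0, 0))])  # two tight clusters

print(f"uniform cloud: {complexity(uniform):.1f}")
print(f"two clumps:    {complexity(clumps):.1f}")
# As gravity clumps matter, this ratio grows -- the model's local arrow of time.
```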

The idea that time moves in more than one direction, and that we just happen to inhabit a section of the cosmos with a single, locally defined arrow of time, isn’t new. Back in 2004, Carroll, along with his graduate student Jennifer Chen, put forward a similar proposal based on eternal inflation, a relatively well-known model of the beginning of the universe. Carroll sees the work of Koslowski and his colleagues as a useful step, especially since they worked out the mathematical details of their model (he and Chen did not). Still, he has some concerns. For example, he said it’s not clear that gravity plays as important a role as their paper claims. “If you just had particles in empty space, you’d get exactly the same qualitative behavior,” he said.

Increasing complexity, Koslowski said, has one crucial side effect: It leads to the formation of certain arrangements of matter that maintain their structure over time. These structures can store information; Koslowski calls them “records.” Gravity is the first and primary force that makes record formation possible; other processes then give rise to everything from fossils and tree rings to written documents. What all of these entities have in common is that they contain information about some earlier state of the universe. I asked Koslowski if memories stored in brains are another kind of record. Yes, he said. “Ideally we would be able to build ever more complex models, and come eventually to the memory in my phone, the memory in my brain, in history books.” A more complex universe contains more records than a less complex universe, and this, Koslowski said, is why we remember the past but not the future.

But perhaps time is even more fundamental than this. For George Ellis, a cosmologist at the University of Cape Town in South Africa, time is a more basic entity, one that can be understood by picturing the block universe as itself evolving. In his “evolving block universe” model, the universe is a growing volume of space-time. The surface of this volume can be thought of as the present moment. The surface represents the instant where “the indefiniteness of the future changes to the definiteness of the past,” as he described it. “Space-time itself is growing as time passes.” One can discern the direction of time by looking at which part of the universe is fixed (the past) and which is changing (the future). Although some colleagues disagree, Ellis stresses that the model is a modification, not a radical overhaul, of the standard view. “This is a block universe with dynamics covered by the general-relativity field equations — absolutely standard — but with a future boundary that is the ever-changing present,” he said. In this view, while the past is fixed and unchangeable, the future is open. The model “obviously represents the passing of time in a more satisfactory way than the usual block universe,” he said.

Unlike the traditional block view, Ellis’s picture appears to describe a universe with an open future — seemingly in conflict with a law-governed universe in which past physical states dictate future states. (Although quantum uncertainty, as Ellis pointed out, may be enough to sink such a deterministic view.) At the conference, someone asked Ellis if, given enough information about the physics of a sphere of a certain radius centered on the British Midlands in early June, one could have predicted the result of the Brexit vote. “Not using physics,” Ellis replied. For that, he said, we’d need a better understanding of how minds work.

Another approach that aims to reconcile the apparent passage of time with the block universe goes by the name of causal set theory. First developed in the 1980s as an approach to quantum gravity by the physicist Rafael Sorkin — who was also at the conference — the theory is based on the idea that space-time is discrete rather than continuous. In this view, although the universe appears continuous at the macroscopic level, if we could peer down to the so-called Planck scale (distances of about 10^-35 meters) we’d discover that the universe is made up of elementary units or “atoms” of space-time. The atoms form what mathematicians call a “partially ordered set” — an array in which each element is linked to an adjacent element in a particular sequence. The number of these atoms (estimated to be a whopping 10^240 in the visible universe) gives rise to the volume of space-time, while their sequence gives rise to time. According to the theory, new space-time atoms are continuously coming into existence. Fay Dowker, a physicist at Imperial College London, referred to this at the conference as “accretive time.” She invited everyone to think of space-time as accreting new space-time atoms in a way roughly analogous to a seabed depositing new layers of sediment over time. General relativity yields only a block, but causal sets seem to allow a “becoming,” she said. “The block universe is a static thing — a static picture of the world — whereas this process of becoming is dynamical.” In this view, the passage of time is a fundamental rather than an emergent feature of the cosmos. (Causal set theory has made at least one successful prediction about the universe, Dowker pointed out, having been used to estimate the value of the cosmological constant based only on the space-time volume of the universe.)
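
A toy version of this “accretive time” picture can be coded up directly. The sketch below grows a partially ordered set one atom at a time, in the spirit of the simple transitive-percolation growth model from the causal set literature; the linking probability and sizes are arbitrary choices for illustration, not the theory’s actual dynamics:

```python
import random

# Toy causal-set growth ("accretive time"): atoms are born one at a time,
# and each newborn independently falls to the causal future of each earlier
# atom with probability P; the order is then closed under transitivity.
P, N_ATOMS = 0.1, 200
random.seed(2)

past = []                # past[i] = set of atoms causally before atom i
for new in range(N_ATOMS):
    ancestors = set()
    for old in range(new):
        if random.random() < P:
            ancestors.add(old)
            ancestors |= past[old]   # transitivity: the past of my past is my past
    past.append(ancestors)

relations = sum(len(p) for p in past)
print(f"{N_ATOMS} atoms, {relations} causal relations")
print(f"atoms with no ancestors (a 'beginning'): "
      f"{sum(1 for p in past if not p)}")
```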

The Problem With the Future

In the face of these competing models, many thinkers seem to have stopped worrying and learned to love (or at least tolerate) the block universe.

Perhaps the strongest statement made at the conference in favor of the block universe’s compatibility with everyday experience came from the philosopher Jenann Ismael of the University of Arizona. The way Ismael sees it, the block universe, properly understood, holds within it the explanation for our experience of time’s apparent passage. A careful look at conventional physics, supplemented by what we’ve learned in recent decades from cognitive science and psychology, can recover “the flow, the whoosh, of experience,” she said. In this view, time is not an illusion — in fact, we experience it directly. She cited studies that show that each moment we experience represents a finite interval of time. In other words, we don’t infer the flow of time; it’s part of the experience itself. The challenge, she said, is to frame this first-person experience within the static block offered by physics — to examine “how the world looks from the evolving frame of reference of an embedded perceiver” whose history is represented by a curve within the space-time of the block universe.

Ismael’s presentation drew a mixed response. Carroll said he agreed with everything she had said; Elitzur said he “wanted to scream” during her talk. (He later clarified: “If I bang my head against the wall, it’s because I hate the future.”) An objection voiced many times during the conference was that the block universe seems to imply, in some important way, that the future already exists, yet statements about, say, next Thursday’s weather are neither true nor false. For some, this seems like an insurmountable problem with the block-universe view. Ismael had heard these objections many times before. Future events exist, she said, they just don’t exist now. “The block universe is not a changing picture,” she said. “It’s a picture of change.” Things happen when they happen. “This is a moment — and I know everybody here is going to hate this — but physics could do with some philosophy,” she said. “There’s a long history of discussion about the truth-values of future contingent statements — and it really has nothing to do with the experience of time.” And for those who wanted to read more? “I recommend Aristotle,” she said.

This article was reprinted on TheAtlantic.com.

THE FUTURE OF INEQUALITY

by 

Inequality goes back to the Stone Age. Thirty thousand years ago, bands of hunter-gatherers in Russia buried some members in sumptuous graves replete with thousands of ivory beads, bracelets, jewels and art objects, while other members had to settle for a bare hole in the ground.

Nevertheless, ancient hunter-gatherer groups were still more egalitarian than any subsequent human society, because they had very little property. Property is a prerequisite for long-term inequality.

Following the agricultural revolution, property multiplied and with it inequality. As humans gained ownership of land, animals, plants and tools, rigid hierarchical societies emerged, in which small elites monopolised most wealth and power for generation after generation.

Humans came to accept this arrangement as natural and even divinely ordained. Hierarchy was not just the norm, but also the ideal. How could there be order without a clear hierarchy between aristocrats and commoners, between men and women, or between parents and children?

Priests, philosophers and poets all over the world patiently explained that, just as in the human body not all members are equal – the feet must obey the head – so also in human society, equality will bring nothing but chaos.

In the late modern era, however, equality rapidly became the dominant value in human societies almost everywhere. This was partly due to the rise of new ideologies like humanism, liberalism and socialism. But it was also due to the industrial revolution, which made the masses more important than ever before.

Industrial economies relied on masses of common workers, while industrial armies relied on masses of common soldiers. Governments in both democracies and dictatorships invested heavily in the health, education and welfare of the masses, because they needed millions of healthy labourers to work in the factories, and millions of loyal soldiers to serve in the armies.

Consequently, the history of the 20th century revolved to a large extent around the reduction of inequality between classes, races and genders. The world of the year 2000 was a far more equal place than the world of 1900. With the end of the cold war, people became ever-more optimistic, and expected that the process would continue and accelerate in the 21st century.

In particular, they hoped globalisation would spread economic prosperity and democratic freedom throughout the world, and that as a result, people in India and Egypt would eventually come to enjoy the same rights, privileges and opportunities as people in Sweden and Canada. An entire generation grew up on this promise.

Now it seems that this promise was a lie.

Globalisation has certainly benefited large segments of humanity, but there are signs of growing inequality both between and within societies. As some groups increasingly monopolise the fruits of globalisation, billions are left behind.

Even more ominously, as we enter the post-industrial world, the masses are becoming redundant. The best armies no longer rely on millions of ordinary recruits, but rather on a relatively small number of highly professional soldiers using very high-tech kit and autonomous drones, robots and cyber-worms. Already today, most people are militarily useless.

Humanoid robots work side-by-side with employees on an assembly line in Kazo, Japan. Photograph: Issei Kato/Reuters

The same thing might soon happen in the civilian economy, too. As artificial intelligence (AI) outperforms humans in more and more skills, it is likely to replace humans in more and more jobs. True, many new jobs might appear, but that won’t necessarily solve the problem.

Humans basically have just two types of skills – physical and cognitive – and if computers outperform us in both, they might outperform us in the new jobs just as in the old ones. Consequently, billions of humans might become unemployable, and we will see the emergence of a huge new class: the useless class.

This is one reason why human societies in the 21st century might be the most unequal in history. And there are other reasons to fear such a future.

With rapid improvements in biotechnology and bioengineering, we may reach a point where, for the first time in history, it becomes possible to translate economic inequality into biological inequality. Biotechnology will soon make it possible to engineer bodies and brains, and to upgrade our physical and cognitive abilities. However, such treatments are likely to be expensive, and available only to the upper crust of society. Humankind might consequently split into biological castes.

Throughout history, the rich and the aristocratic always imagined they had superior skills to everybody else, which is why they were in control. As far as we can tell, this wasn’t true. The average duke wasn’t more talented than the average peasant: he owed his superiority only to unjust legal and economic discrimination. However, by 2100, the rich might really be more talented, more creative and more intelligent than the slum-dwellers. Once a real gap in ability opens between the rich and the poor, it will become almost impossible to close it.

The two processes together – bioengineering coupled with the rise of AI – may result in the separation of humankind into a small class of superhumans, and a massive underclass of “useless” people.

Here’s a concrete example: the transportation market. Today there are many thousands of truck, taxi and bus drivers in the UK. Each of them commands a small share of the transportation market, and they gain political power because of that. They can unionise, and if the government does something they don’t like, they can go on strike and shut down the entire transportation system.

The jobs market could be irrevocably transformed by the development of self-driving vehicles. Photograph: Justin Tallis/AFP/Getty Images

Now fast-forward 30 years. All vehicles are self-driving. One corporation controls the algorithm that controls the entire transport market. All the economic and political power which was previously shared by thousands is now in the hands of a single corporation, owned by a handful of billionaires.

Once the masses lose their economic importance and political power, the state loses at least some of the incentive to invest in their health, education and welfare. It’s very dangerous to be redundant. Your future depends on the goodwill of a small elite. Maybe there is goodwill for a few decades. But in a time of crisis – like climate catastrophe – it would be very tempting, and easy, to toss you overboard.

In countries such as the UK, with a long tradition of humanist beliefs and welfare state practices, perhaps the elite will go on taking care of the masses even when it doesn’t really need them. The real problem will be in large developing countries like India, China, South Africa or Brazil.

These countries resemble a long train: the elites in the first-class carriages enjoy healthcare, education and income levels on a par with the most developed nations in the world. But the hundreds of millions of ordinary citizens who crowd the third-class cars still suffer from widespread diseases, ignorance and poverty.

What would the Indian, Chinese, South African or Brazilian elite prefer to do in the coming century? Invest in fixing the problems of hundreds of millions of useless poor – or in upgrading a few million rich?

In the 20th century, the elites had a stake in fixing the problems of the poor, because they were militarily and economically vital. Yet in the 21st century, the most efficient (and ruthless) strategy may be to let go of the useless third-class cars, and dash forward with the first class only. In order to compete with South Korea, Brazil might need a handful of upgraded superhumans far more than millions of healthy but useless labourers.

Consequently, instead of globalisation resulting in prosperity and freedom for all, it might actually result in speciation: the divergence of humankind into different biological castes or even different species. Globalisation will unite the world on a vertical axis and abolish national differences, but it will simultaneously divide humanity on a horizontal axis.

From this perspective, current populist resentment of “the elites” is well-founded. If we are not careful, the grandchildren of Silicon Valley tycoons might become a superior biological caste to the grandchildren of hillbillies in Appalachia.

There is one more possible step on the road to previously unimaginable inequality. In the short-term, authority might shift from the masses to a small elite that owns and controls the master algorithms and the data which feed them. In the longer term, however, authority could shift completely from humans to algorithms. Once AI is smarter even than the human elite, all humanity could become redundant.

What would happen after that? We have absolutely no idea – we literally can’t imagine it. How could we? A super-intelligent computer will by definition have a far more fertile and creative imagination than that which we possess.


Of course, technology is never deterministic. We can use the same technological breakthroughs to create very different kinds of societies and situations. For example, in the 20th century, people could use the technology of the industrial revolution – trains, electricity, radio, telephone – to create communist dictatorships, fascist regimes or liberal democracies. Just think about North and South Korea: they have had access to exactly the same technology, but they have chosen to employ it in very different ways.

In the 21st century, the rise of AI and biotechnology will certainly transform the world – but it does not mandate a single, deterministic outcome. We can use these technologies to create very different kinds of societies. How to use them wisely is the most important question facing humankind today. If you don’t like some of the scenarios I have outlined here, you can still do something about it.

Yuval Noah Harari lectures at the Hebrew University of Jerusalem and is the author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow. He discusses the history of inequality for the BBC World Service.

HOMO SAPIENS OR HOMO PROSPECTUS?

https://www.nytimes.com/2017/05/19/opinion/sunday/why-the-future-is-always-on-your-mind.html


We are misnamed. We call ourselves Homo sapiens, the “wise man,” but that’s more of a boast than a description. What makes us wise? What sets us apart from other animals? Various answers have been proposed — language, tools, cooperation, culture, tasting bad to predators — but none is unique to humans.

What best distinguishes our species is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. It usually lifts our spirits, but it’s also the source of most depression and anxiety, whether we’re evaluating our own lives or worrying about the nation. Other animals have springtime rituals for educating the young, but only we subject them to “commencement” speeches grandly informing them that today is the first day of the rest of their lives.

A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain, as psychologists and neuroscientists have discovered — rather belatedly, because for the past century most researchers have assumed that we’re prisoners of the past and the present.

Behaviorists thought of animal learning as the ingraining of habit by repetition. Psychoanalysts believed that treating patients was a matter of unearthing and confronting the past. Even when cognitive psychology emerged, it focused on the past and present — on memory and perception.

But it is increasingly clear that the mind is mainly drawn to the future, not driven by the past. Behavior, memory, and perception can’t be understood without appreciating the central role of prospection. We learn not by storing static records but by continually retouching memories and imagining future possibilities. Our brain sees the world not by processing every pixel in a scene but by focusing on the unexpected.

Our emotions are less reactions to the present than guides to future behavior. Therapists are exploring new ways to treat depression now that they see it as caused primarily not by past traumas and present stresses but by skewed visions of what lies ahead.

Prospection enables us to become wise not just from our own experiences but also by learning from others. We are social animals like no others, living and working in very large groups of strangers because we have jointly constructed the future. Human culture — our language, our division of labor, our knowledge, our laws, and technology — is possible only because we can anticipate what fellow humans will do in the distant future. We make sacrifices today to earn rewards tomorrow, whether in this life or in the afterlife promised by so many religions.

Some of our unconscious powers of prospection are shared by animals, but hardly any other creatures are capable of thinking more than a few minutes ahead. Squirrels bury nuts by instinct, not because they know winter is coming. Ants cooperate to build dwellings because they’re genetically programmed to do so, not because they’ve agreed on a blueprint. Chimpanzees have sometimes been known to exercise short-term foresight, like the surly male at a Swedish zoo who was observed stockpiling rocks to throw at gawking humans, but they are nothing like Homo prospectus.

If you’re a chimp, you spend much of the day searching for your next meal. If you’re a human, you can usually rely on the foresight of your supermarket’s manager, or you can make a restaurant reservation for Saturday evening thanks to a remarkably complicated feat of collaborative prospection. You and the restaurateur both imagine a future time — “Saturday” exists only as a collective fantasy — and anticipate each other’s actions. You trust the restaurateur to acquire food and cook it for you. She trusts you to show up and give her money, which she will accept only because she expects her landlord to accept it in exchange for occupying his building.

The central role of prospection has emerged in recent studies of both conscious and unconscious mental processes, like one in Chicago that pinged nearly 500 adults during the day to record their immediate thoughts and moods. If traditional psychological theory had been correct, these people would have spent a lot of time ruminating. But they actually thought about the future three times more often than the past, and even those few thoughts about a past event typically involved consideration of its future implications.

When making plans, they reported higher levels of happiness and lower levels of stress than at other times, presumably because planning turns a chaotic mass of concerns into an organized sequence. Although they sometimes feared what might go wrong, on average there were twice as many thoughts of what they hoped would happen.

While most people tend to be optimistic, those suffering from depression and anxiety have a bleak view of the future — and that in fact seems to be the chief cause of their problems, not their past traumas nor their view of the present. While traumas do have a lasting impact, most people actually emerge stronger afterward. Others continue struggling because they over-predict failure and rejection. Studies have shown depressed people are distinguished from the norm by their tendency to imagine fewer positive scenarios while overestimating future risks.

They withdraw socially and become paralyzed by exaggerated self-doubt. A bright and accomplished student imagines: If I flunk the next test, then I’ll let everyone down and show what a failure I really am. Researchers have begun successfully testing therapies designed to break this pattern by training sufferers to envision positive outcomes (imagine passing the test) and to see future risks more realistically (think of the possibilities remaining even if you flunk the test).

Behaviorists used to explain learning as the ingraining of habits by repetition and reinforcement, but their theory couldn’t explain why animals were more interested in unfamiliar experiences than familiar ones. It turned out that even the behaviorists’ rats, far from being creatures of habit, paid special attention to unexpected novelties because that was how they learned to avoid punishment and win rewards.

The brain’s long-term memory has often been compared to an archive, but that’s not its primary purpose. Instead of faithfully recording the past, it keeps rewriting history. Recalling an event in a new context can lead to new information being inserted in the memory. Coaching of eyewitnesses can cause people to reconstruct their memory so that no trace of the original is left.

The fluidity of memory may seem like a defect, especially to a jury, but it serves a larger purpose. It’s a feature, not a bug, because the point of memory is to improve our ability to face the present and the future. To exploit the past, we metabolize it by extracting and recombining relevant information to fit novel situations.

This link between memory and prospection has emerged in research showing that people with damage to the brain’s medial temporal lobe lose memories of past experiences as well as the ability to construct rich and detailed simulations of the future. Similarly, studies of children’s development show that they’re not able to imagine future scenes until they’ve gained the ability to recall personal experiences, typically somewhere between the ages of 3 and 5.

Perhaps the most remarkable evidence comes from recent brain imaging research. When recalling a past event, the hippocampus must combine three distinct pieces of information — what happened, when it happened and where it happened — that are each stored in a different part of the brain. Researchers have found that the same circuitry is activated when people imagine a novel scene. Once again, the hippocampus combines three kinds of records (what, when and where), but this time it scrambles the information to create something new.
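
A minimal sketch of that what/when/where recombination, assuming invented example records (the memories and labels below are hypothetical illustrations, not data from the research):

```python
# Toy illustration of the recombination idea described above: episodic
# records store what/when/where separately, and "imagining" draws each
# component from a (possibly different) memory to compose a novel scene.
import random

memories = [
    {"what": "a birthday dinner", "when": "last spring", "where": "a rooftop cafe"},
    {"what": "a job interview", "when": "two years ago", "where": "a glass office"},
    {"what": "a beach walk", "when": "childhood", "where": "the north shore"},
]

def imagine(records):
    """Scramble stored components into a scene never actually experienced."""
    return {key: random.choice(records)[key] for key in ("what", "when", "where")}

print(imagine(memories))
# e.g. {'what': 'a job interview', 'when': 'childhood', 'where': 'a rooftop cafe'}
```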

Even when you’re relaxing, your brain is continually recombining information to imagine the future, a process that researchers were surprised to discover when they scanned the brains of people doing specific tasks like mental arithmetic. Whenever there was a break in the task, there were sudden shifts to activity in the brain’s “default” circuit, which is used to imagine the future or retouch the past.

This discovery explains what happens when your mind wanders during a task: It’s simulating future possibilities. That’s how you can respond so quickly to unexpected developments. What may feel like a primitive intuition, a gut feeling, is made possible by those previous simulations.

Suppose you get an email invitation to a party from a colleague at work. You’re momentarily stumped. You vaguely recall turning down a previous invitation, which makes you feel obliged to accept this one, but then you imagine having a bad time because you don’t like him when he’s drinking. But then you consider you’ve never invited him to your place, and you uneasily imagine that turning this down would make him resentful, leading to problems at work.

Methodically weighing these factors would take a lot of time and energy, but you’re able to make a quick decision by using the same trick as the Google search engine when it replies to your query in less than a second. Google can instantly provide a million answers because it doesn’t start from scratch. It’s continually predicting what you might ask.
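
To make the analogy concrete, here is a minimal sketch of answering instantly by predicting in advance; the query corpus is hypothetical, and this is not how any real search engine is implemented:

```python
# Precompute likely completions from past queries so that answering at
# query time is a single cheap lookup rather than a fresh search.
from collections import defaultdict

past_queries = [
    "party invitation reply",
    "party food ideas",
    "weather saturday",
]

# Build the prediction table ahead of time: every prefix maps to the
# full queries it could become.
completions = defaultdict(list)
for query in past_queries:
    for i in range(1, len(query) + 1):
        completions[query[:i]].append(query)

# At query time, the expensive work has already been done.
print(completions["party"])  # ['party invitation reply', 'party food ideas']
```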

Your brain engages in the same sort of prospection to provide its own instant answers, which come in the form of emotions. The main purpose of emotions is to guide future behavior and moral judgments, according to researchers in a new field called prospective psychology. Emotions enable you to empathize with others by predicting their reactions. Once you imagine how both you and your colleague will feel if you turn down his invitation, you intuitively know you’d better reply, “Sure, thanks.”

If Homo prospectus takes the really long view, does he become morbid? That was a longstanding assumption in psychologists’ “terror management theory,” which held that humans avoid thinking about the future because they fear death. The theory was explored in hundreds of experiments assigning people to think about their own deaths. One common response was to become more assertive about one’s cultural values, like becoming more patriotic.

But there’s precious little evidence that people actually spend much time outside the lab thinking about their deaths or managing their terror of mortality. It’s certainly not what psychologists found in the study tracking Chicagoans’ daily thoughts. Less than 1 percent of their thoughts involved death, and even those were typically about other people’s deaths.

Homo prospectus is too pragmatic to obsess over death for the same reason that he doesn’t dwell on the past: There’s nothing he can do about it. He became Homo sapiens by learning to see and shape his future, and he is wise enough to keep looking straight ahead.


THE ILLUSION OF COMPETENCE


What a Know-It-All Does Not Know

One day in 1995, a large, heavy middle-aged man robbed two Pittsburgh banks in broad daylight. He didn’t wear a mask or any sort of disguise, and he smiled at surveillance cameras before walking out of each bank. Later that night, police arrested a surprised McArthur Wheeler. When they showed him the surveillance tapes, Wheeler stared in disbelief. ‘But I wore the juice,’ he mumbled. Apparently, Wheeler thought that rubbing lemon juice on his skin would render him invisible to videotape cameras. After all, lemon juice is used as invisible ink, so as long as he didn’t come near a heat source, he should have been completely invisible.

Police concluded that Wheeler was not crazy or on drugs – just incredibly mistaken.

The saga caught the eye of the psychologist David Dunning at Cornell University, who enlisted his graduate student, Justin Kruger, to see what was going on. They reasoned that, while almost everyone holds favourable views of their abilities in various social and intellectual domains, some people mistakenly assess their abilities as being much higher than they actually are. This ‘illusion of confidence’ is now called the ‘Dunning-Kruger effect’, and describes the cognitive bias of inflated self-assessment.

To investigate this phenomenon in the lab, Dunning and Kruger designed some clever experiments. In one study, they asked undergraduate students a series of questions about grammar, logic and jokes, and then asked each student to estimate his or her score overall, as well as their relative rank compared to the other students. Interestingly, students who scored the lowest in these cognitive tasks always overestimated how well they did – by a lot. Students who scored in the bottom quartile estimated that they had performed better than two-thirds of the other students!

This ‘illusion of confidence’ extends beyond the classroom and permeates everyday life. In a follow-up study, Dunning and Kruger left the lab and went to a gun range, where they quizzed gun hobbyists about gun safety. Similar to their previous findings, those who answered the fewest questions correctly wildly overestimated their knowledge about firearms. Outside of factual knowledge, though, the Dunning-Kruger effect can also be observed in people’s self-assessment of a myriad of other personal abilities. If you watch any talent show on television today, you will see the shock on the faces of contestants who don’t make it past auditions and are rejected by the judges. While it is almost comical to us, these people are genuinely unaware of how much they have been misled by their illusory superiority.

Sure, it’s typical for people to overestimate their abilities. One study found that 80 per cent of drivers rate themselves as above average – a statistical impossibility. And similar trends have been found when people rate their relative popularity and cognitive abilities. The problem is that when people are incompetent, not only do they reach wrong conclusions and make unfortunate choices but, also, they are robbed of the ability to realise their mistakes. In a semester-long study of college students, good students could better predict their performance on future exams given feedback about their scores and relative percentile. However, the poorest performers showed no recognition, despite clear and repeated feedback that they were doing badly. Instead of being confused, perplexed or thoughtful about their erroneous ways, incompetent people insist that their ways are correct. As Charles Darwin wrote in The Descent of Man (1871): ‘Ignorance more frequently begets confidence than does knowledge.’

Interestingly, really smart people also fail to accurately self-assess their abilities. As much as D- and F-grade students overestimate their abilities, A-grade students underestimate theirs. In their classic study, Dunning and Kruger found that high-performing students, whose cognitive scores were in the top quartile, underestimated their relative competence. These students presumed that if these cognitive tasks were easy for them, then they must be just as easy or even easier for everyone else. This so-called ‘imposter syndrome’ can be likened to the inverse of the Dunning-Kruger effect, whereby high achievers fail to recognise their talents and think that others are equally competent. The difference is that competent people can and do adjust their self-assessment given appropriate feedback, while incompetent individuals cannot.

And therein lies the key to not ending up like the witless bank robber. Sometimes we try things that lead to favourable outcomes, but other times – like the lemon juice idea – our approaches are imperfect, irrational, inept or just plain stupid. The trick is to not be fooled by illusions of superiority and to learn to accurately reevaluate our competence. After all, as Confucius reportedly said, real knowledge is knowing the extent of one’s ignorance.

Kate Fehlhaber

This article was originally published at Aeon and has been republished under Creative Commons.

THE WORLD’S TALLEST MAN

http://listverse.com/2017/05/12/top-10-freaky-facts-about-the-tallest-man/

BY ANTHONY SFARRA MAY 12, 2017


Guinness World Records recognizes Robert Wadlow as the tallest human ever for whom there are indisputable, documented measurements. The Alton Giant, as he was called, stood 272 centimeters (8’11″) tall. Robert’s stature led to a unique life, and he became something of a celebrity for his height. Being the tallest person who ever lived, however, wasn’t always easy.

10 He Was Normal-Sized At Birth


Robert Pershing Wadlow was born on February 22, 1918, in Alton, Illinois, a town along the Mississippi River not far from St. Louis. The first child of Harold and Addie Wadlow appeared completely normal at birth, weighing 3.8 kilograms (8.4 lb). He was a normal-sized baby, and why not? Harold and Addie were of average height, and their four subsequent children were also of normal height.[1]

After birth, however, Robert began to grow far faster than normal. When he was six months old, he weighed 14 kilograms (30 lb), roughly twice as heavy as a typical six-month-old. At 12 months, he’d grown to 20 kilograms (45 lb), and at 18 months, he’d reached 30 kilograms (67 lb). By the time Robert was a toddler, he was 91 centimeters (3’) tall.
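
As a quick sanity check of the paired metric/imperial figures above (and throughout this list), a few lines of arithmetic with the standard conversion factors reproduce them to within the source’s rounding:

```python
# Verify the kg -> lb and cm -> ft figures quoted in the text above.
KG_TO_LB = 2.20462
CM_TO_IN = 0.393701

for kg, quoted_lb in [(3.8, 8.4), (14, 30), (20, 45), (30, 67)]:
    print(f"{kg} kg = {kg * KG_TO_LB:.1f} lb (quoted as {quoted_lb} lb)")

print(f"91 cm = {91 * CM_TO_IN:.0f} in, about 3 ft")
```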

9 A Very Big Kid


When Robert was five years old, he stood 163 centimeters (5’4″) and weighed 48 kilograms (105 lb). When he entered kindergarten, he wore clothes sized for a 17-year-old. Robert was said to be well behaved in school and smart for his age. His main difficulty was finding desks that he could sit in.[2]

By the time Robert was eight, few stores had clothes that would fit him properly, and he became a regular customer for tailors. At age nine, Robert was able to carry his father up a flight of stairs. At 10, he’d reached a height of 196 centimeters (6’5″) and a weight of 95 kilograms (210 lb). His shoe size was 17.5. Try finding shoes of that size at a local store in 1928.

Robert joined the Boy Scouts at 13, becoming the world’s tallest Boy Scout at 224 centimeters (7’4″). He required a specially made uniform, sleeping bag, and tent. At this time, the ever-growing teen was consuming five times as many calories per day as typical boys his age. When Robert graduated high school, he towered over his classmates at 254 centimeters (8’4″). He tipped the scales at 177 kilograms (391 lb).

8 The Cause


Not long before his 12th birthday, Robert and his family visited Barnes Hospital in St. Louis. There, the 211-centimeter (6’11″) boy finally learned just why he was growing so prodigiously. He had an overactive pituitary gland that was releasing far too much growth hormone.

The most common cause of an overactive pituitary gland is a benign tumor (called an adenoma) forcing excess hormone to be secreted. More often than not, such growths will cause too much prolactin to be secreted, but in some cases, excess growth hormone is released. It is also possible for pituitary tumors to simply be present without affecting hormone release at all. Such tumors are referred to as non-secreting.[3]

Today, several medical interventions can deal with adenomas, such as surgery or medication. In 1930, however, no treatments were available. Robert would only continue to grow.

7 The Difficulties Of Being The Tallest


Being a literal giant isn’t easy. For one, Robert required a lot of food. A typical breakfast during his teen years consisted of eight eggs, 12 slices of toast, several glasses of orange juice, five cups of coffee, and plenty of cereal.[4]

Robert’s size began to take its toll on his body from a young age. His heart didn’t have an easy job, having to circulate blood to the ends of his long limbs. At age 10, his feet were cyanotic, meaning they had a blue color due to poor circulation. Robert had little sensation in his feet and would not feel minor injuries to them until they were aggravated and raw.

When he was 14, Robert stumbled only slightly while pushing a boy on a tricycle, but doing so still broke two bones in his foot. Afterward, he had to wear an iron brace on that leg. As an adult, Robert needed leg braces to walk and also used a cane nearly as tall as an average adult. However, he never required a wheelchair.

Physical ailments aside, Robert lived in a world that was too small for him. To sit at tables, he had to keep his legs straight, causing his feet to stick out from the other end, where people would inevitably trip over them. There was always the danger of chairs breaking under his weight. He had to stoop considerably to get through most doors, and most ceilings weren’t high enough for him to stand up straight. If Robert stayed at a hotel, he needed multiple beds pushed together. Common activities such as going to the movies were unfeasible for him.

6 Adult Life


As Robert entered his adult years, he continued to grow. At age 19, he’d reached 262 centimeters (8’7″) and was declared the tallest man in the world. His hands were 32.4 centimeters (12.8 in) from the tip of his middle fingers to his wrists. He had an arm span of about 2.9 meters (9.5 ft), and his shoes were size 37, which is 47 centimeters (19 in) long. At the time of his death, he had a pair of 39s under construction. At his heaviest, Robert weighed 223 kilograms (492 lb). He ate around 8,000 calories a day.

In 1936, Robert did a tour of the US with the Ringling Brothers Circus. In general, his main source of income was public appearances. When he was 20, he found a useful arrangement as a paid spokesman for the International Shoe Company, which also agreed to build his $100 shoes for free. He visited more than 800 towns in 41 states, traveling over 500,000 kilometers (300,000 mi) for the company. Robert also became a Freemason, presumably the tallest ever.[5]

5 Giant-Sized Items


Robert was a very big man, and he needed some very big items for day-to-day life. All of his clothes had to be custom-made and required three times the normal amount of material to assemble. As previously mentioned, his shoes were both big and expensive. Unlike Robert’s other clothes, however, building his shoes was more than a question of extra leather. To withstand the stress they’d be placed under, the footwear had to be constructed with special materials and reinforced with metal parts. Robert also had a 2.9-meter (9.5 ft) bed at home.

For Robert’s shoe spokesman travels, his father modified the family car. The front passenger seat was removed, allowing Robert to sit in the back and stretch his legs out. The seven-person automobile effectively became a three-seater.[6]

In 1937, Robert’s parents made plans to build him a house scaled for his tremendous size. The doors would have been 2.7 meters (9 ft) high, the ceilings would have been 3.4 meters (11 ft), and even the stairs would have measured 0.6 meters (2 ft). All the furniture would have been fitted to Robert, and he would have had a 3-meter (10 ft) bathtub. The bathtub alone presented quite a snag, as it would have had to be cast at a foundry, at a cost of up to $1,000. The plans for the house never came to fruition.

4 ‘The Gentle Giant’


Robert was known for being quiet, good-natured, and friendly and was nicknamed the “Gentle Giant.” In his spare time, he was said to enjoy photography and stamp collecting. Many of his days, however, were spent facing crowds.

Outside his hometown, he couldn’t walk far without a group of onlookers accumulating and invariably stepping on his feet as they drew close. On top of that, his job also involved facing large assemblages of people. Robert generally didn’t mind. When asked if it bothered him when people would stare, he said, “No, I just overlook them.”[7]

Some crowd antics could get on Robert’s nerves, though. He didn’t like questions on how much he ate, and he especially grew tired of being asked, “How’s the weather up there?” When he was 18, Robert told a newspaper reporter that he hadn’t heard a new joke about his height in three years. Robert would sometimes bring his photography into his public appearances, surreptitiously photographing the crowds. His hands were large enough to completely conceal his camera, with the lens poking out between his thumb and forefinger.

3 The Court Case


On June 2, 1936, a medical doctor named Charles Humberd visited Robert and his family. According to the Wadlows, he was very rude, and Robert ultimately refused to cooperate with Humberd or be examined by him. Humberd then trashed Robert in a paper published in The Journal of the American Medical Association (JAMA).

The JAMA write-up described Robert as “surly,” “apathetic,” “antagonistic,” and “vapid,” among other things. Humberd painted Robert as a bitter introvert, angry with his physical state. He also expressed doubt on various teachers’ claims that Robert was intelligent. Humberd derided Robert as having “defective attention” and stated, “All functions that we attribute to the highest centers in the frontal lobes are languid and blurred.”

The Wadlows decided to sue Humberd for libel. He had made all these insulting claims about their son based on cursory observations. In court, several witnesses refuted Humberd’s description of Wadlow. Nevertheless, the Wadlows lost the suit, as the judge concluded that it could not be proven that Humberd’s description of Robert wasn’t accurate the day he met him. The family chose not to appeal.[8]

2 Robert’s Death


On June 27, 1940, Robert’s height was measured. The Alton Giant, 22 years old and still growing, had reached 272 centimeters (8’11″). It would be the last time his height was measured.

On July 4, Robert appeared at the National Forest Festival in Manistee, Michigan, and walked in a parade. The next morning, he had a temperature of 41 degrees Celsius (106 °F). A doctor was called. Upon examining Robert, he discovered that his new iron brace, which had been fitted a week earlier, had created a large blister on his right ankle that would normally have been painful, if not for Robert’s poor sensation in his lower extremities. The blister was infected and septic.[9]

Robert had to sleep at a hotel, as the hospital had nothing large enough to accommodate him. He remained bedridden for days. Despite doctors’ best efforts, the infection would not abate. Robert Wadlow died in his sleep at 1:30 AM on July 15, 1940.

1 Aftermath


Over 40,000 people attended Robert’s funeral in Alton. All businesses in town closed for the service as a show of respect. Robert was buried in Alton’s Oakwood Cemetery. Two grave plots were needed for his interment.

Robert’s coffin was 3.3 meters (10.8 ft) long. The 450-kilogram (1,000 lb) casket required 18 pallbearers and stuck out from the back of the hearse carrying it, necessitating a black cloth to be draped over the vehicle’s doors to cover the remainder. The Wadlows feared that Robert’s body might be dug up, perhaps by doctors for research, so his coffin was placed in a solid concrete vault. His tombstone has a simple epitaph of “At Rest.”[10]

After Robert’s death, his family destroyed most of his possessions to prevent them from being stolen and exhibited as “freak memorabilia.” Some items survive, however. In 1985, a life-size statue was erected at the Southern Illinois University School of Dental Medicine in Alton.

THE MOST DANGEROUS SPECIES ON THE PLANET

http://www.greeningz.com/interesting/most-dangerous-species-on-the-planet/7/

Apr 23, 2017

Although many creatures on Earth are docile and harmless, many are just the opposite. The earth is home to some very ferocious animals, both large and small. Here are the most dangerous species on the planet.

Blue-Ringed Octopus

The Blue-Ringed Octopus lives in tide pools and coral reefs in the Pacific and Indian Oceans. It is considered one of the most venomous marine animals. It is normally rather passive unless it feels threatened, but if handled or provoked, it will bite, delivering a neurotoxin powerful enough to kill. A victim requires artificial respiration, as the venom paralyzes the respiratory muscles.


Portuguese Man O’ War

The Portuguese Man O’ War is found in the Atlantic, Indian and Pacific Oceans. It is commonly mistaken for a jellyfish, but it is actually a siphonophore (a colonial organism). The man o’ war’s stinging venom leaves humans in severe pain, normally with large red welts. The venom can also travel to the lymph nodes and cause symptoms similar to an allergic reaction; other symptoms can include fever and shock. Not only can a live man o’ war sting you; detached tentacles can deliver just as painful a sting.


Great White Shark

The Great White Shark is a predator found all over the Earth, particularly in coastal oceans. Both males and females grow very large, though females grow slightly larger than males: a female can reach 20 feet (6.1 m) in length and weigh up to 4,300 pounds (1,950 kg). The Great White is responsible for the largest number of recorded shark bites on human beings, although it does not normally seek out humans as food; most attacks on humans are just “test bites.”

Great white shark breaking the surface in South Africa.

Cape Buffalo

The Cape Buffalo is found in sub-Saharan Africa. They are normally relatively calm and travel in herds. However, an injured buffalo becomes an insane killer, and it will continue to attack even if you wound it. Perfectly nicknamed “the Black Death,” Cape Buffalo kill more hunters in Africa than any other animal. These behemoths can grow up to 6 feet long and weigh nearly a ton.


Deathstalker

This scorpion certainly has a fitting name: the Deathstalker. This highly venomous scorpion is found in North Africa and the Middle East, and its venom contains a high level of neurotoxins. One sting to a grown adult, although extremely painful, will most likely not kill. However, a sting to a child or the elderly may be lethal. There is an antivenom, but the venom is sometimes resistant to the treatment.

Giant Pacific Octopus

One of the largest octopi on earth is the Giant Pacific Octopus. This octopus has 8 arms, all of which are lined with two rows of suckers, which in turn are lined with hooks to capture its prey. In the center of the arms is a mouth that contains a beak and a toothed tongue. Although not known to attack humans, this octopus is strong enough to feed on tiny sharks.


Cone Snail

The Cone Snail can be found in warm waters near the equator, normally at shallow depths close to shore, around rock formations and coral reefs. Although it may be tempting, if you see one, do not touch it! The snails have harpoon-like teeth which contain a venom called conotoxin. The toxin stops nerve cells from communicating and can cause paralysis almost immediately. There is no antivenin.


Siafu Ant

Don’t underestimate these guys because of their size. They are the true definition of strength in numbers. Also known as driver ants, if they feel attacked or threatened, the whole swarm will come after you. A swarm can contain up to 50 million ants, and all of them will bite. They are extremely hard to remove once they attach to their prey; even after death, their jaws remain clamped. Although they may not be the most deadly, they are certainly very dangerous.


Saw-Scaled Viper

The Saw-Scaled Viper kills more people than any other snake each year. Although it only grows to 1-3 feet long, its venomous bite can do lots of damage. Its venom contains hemotoxins and cytotoxins, which lead to multiple bleeding disorders, including the possibility of an intracranial hemorrhage. Many of these snakes live in areas without access to modern medicine, so victims sometimes suffer a long, painful death.


African Lion

The African Lion lives in groups called prides and can weigh 265 to 420 pounds. These animals are very territorial: the males protect the land and the pride, while the females hunt for food. Although rare, there are accounts of lions eating humans. Given the lion’s cunning hunting skills, speed and strength, a person targeted by an African Lion stands little chance of survival.


Inland Taipan

The Inland Taipan is the most venomous of all the snakes in the world. What also separates this snake from many others is its prey: the snake is an expert in hunting mammals, so its venom is adapted to kill warm-blooded species. It normally does not strike unless provoked. Its venom contains neurotoxins, which affect the nervous system; hemotoxins, which affect the blood; and myotoxins, which affect the muscles. If untreated, the venom can be lethal.


Assassin Bug

The Assassin Bug is perfectly named, as it kills around 12,000 people each year. Although its bite does not directly kill, the disease it carries does. The assassin bug, also known as the kissing bug, carries Chagas disease, a parasitic infection that can be fatal if left untreated. There is no vaccine for the disease, so prevention focuses on decreasing the bugs’ contact with humans by using sprays and paints that contain insecticides, as well as improving sanitary conditions.


Flower Urchin

The flower urchin, known scientifically as Toxopneustes pileolus, is commonly found in the Indo-West Pacific. The creature gets its name from its numerous, distinctively flower-like appendages, which are normally pinkish or yellowish white. The urchin normally inhabits coral reefs, sea grass beds or rocky environments. Although it may look pretty, do not touch it: it delivers a sting causing debilitating pain.


Africanized Honey Bee

The Africanized Honey Bee, also known as the Killer Bee, was created by man, not by nature. The bee is a cross-breed of the African honey bee and various European honey bees. The new breed was taken to Brazil in the 1950s in hopes of increasing honey production. However, several swarms escaped and have since spread throughout the Americas. They are a very defensive species and will chase humans long distances. They have killed over 1,000 humans, as well as many other animals such as horses.


Mosquito

Many people see mosquitoes as mere tiny annoyances. However, they are far more dangerous than most people perceive. The World Health Organization has reported that close to 725,000 people are killed each year by mosquito-borne diseases. Hundreds of millions have been infected with malaria, and many of them die from the disease. The bug also carries deadly diseases such as dengue fever, yellow fever, and encephalitis.


Black Mamba

The Black Mamba is found in the savannas and rocky areas of southern and eastern Africa. It can grow up to 14 feet long and can slither at up to 12.5 mph, making it the fastest snake on the planet. It only attacks when provoked, but when it does attack, beware: the Black Mamba will bite several times, delivering enough toxin to kill 10 people. There is an antivenin, but it must be received within 20 minutes.


Tsetse Fly

The Tsetse Fly is found in sub-Saharan African countries. The flies, like mosquitoes, feed on other animals’ blood. However, it’s not the bite that will harm you; it’s the parasites they spread. These parasites, known as trypanosomes, are the direct cause of African sleeping sickness. The sickness leads to behavioral changes, poor coordination, trouble sleeping and, if not treated, death. The only way to prevent a bite is to wear neutral colors, avoid bushes during the day, and use permethrin-treated gear.


Stonefish

The Stonefish is one of the most venomous fish. Found mainly on coasts in the Indo-Pacific oceans, stonefish get their name from their ability to camouflage themselves amongst the rocks. Because of this camouflage, swimmers may not see the fish and accidentally step on it, which normally does not end well for the swimmer. The Stonefish has needle-like dorsal fin spines which secrete neurotoxins when disturbed. There is an anti-venom, and if the sting is minimal, hot water may also destroy the venom.


Saltwater Crocodiles

The Saltwater Crocodile inhabits Indo-Pacific waters. This croc can grow up to 23 feet long and weigh more than a ton. Despite their name, saltwater crocodiles can swim in both salt and freshwater, and they can deliver a bite of 3,700 pounds per square inch (psi), close to the bite strength of a T. rex! Crocodiles are responsible for more human deaths than sharks.


Dogs

Dogs truly are man’s best friend, loving you unconditionally no matter what your faults are. However, they can also be very dangerous. Dogs kill roughly 25,000 people each year, the majority of whom die from rabies. The prevalence of infection is very low where rabies is well contained, such as in North America and Western Europe. However, in countries with large numbers of stray dogs, like India, as many as 20,000 people die from rabies each year.


Tarantula Hawk

The Tarantula Hawk wasp has the most painful sting of any insect on earth. However, humans do not normally have to worry about these wasps, which hunt tarantulas. Human stings are possible if the wasp feels provoked, but no medical attention is necessary: the pain lasts for about five minutes and then dissipates. Due to their large stingers, most predators avoid them. Therefore, even though they are tiny, they are thriving predators.


Hippopotamus

Although the Hippopotamus is a mostly herbivorous mammal, it can be very dangerous. The Hippo is very aggressive and territorial. Due to its large stature (it is the third-largest land mammal), sharp teeth and good mobility, it can be a deadly creature; males average around 3,300 pounds (1,497 kg). Many reports have been made of Hippos attacking people both in the water and on land. Therefore, it’s best to stay away if you see one in the wild.


Polar Bear

From zoos and media, people have come to know polar bears as cute and cuddly creatures. However, their natural instinct is just the opposite. They are the most carnivorous species in the bear family, and the most likely to attack humans. Unless you plan to take a trip to the Arctic, though, you do not have to worry about becoming a polar bear’s dinner. Polar bears can weigh up to 1,750 pounds (800 kg).


King Cobra

The King Cobra is the world’s longest venomous snake. It is predominantly found in India and other parts of Southeast Asia. The toxins in the King Cobra’s venom attack the victim’s central nervous system, resulting in pain, vertigo and eventually paralysis. Without the antivenin, death has been reported to occur in as little as 30 minutes. The toxin is so deadly, it could even kill a large elephant.


Pufferfish

Better known as blowfish, Pufferfish are found in tropical seas all over the world. They are the second most poisonous vertebrate in the world. Their poison, called tetrodotoxin, is found in the fish’s skin, muscle tissue, liver, kidneys and gonads. Tetrodotoxin is over 1,000 times more poisonous than cyanide. Even so, chefs have found ways to cook the fish, and it is considered a delicacy in places like Japan. Chefs must be licensed to prepare it, but accidental deaths from eating it still happen.


Box Jellyfish

Box Jellyfish are considered by the National Oceanic and Atmospheric Administration to be among the most venomous marine animals in the world. Found in the Indo-Pacific waters north of Australia, the nearly invisible jellyfish has up to 15 tentacles, each growing up to 10 feet long. Each tentacle is lined with stingers containing toxins that can attack the heart, nervous system and skin cells. There is an antivenin; however, many victims go into shock and drown.


Golden Poison Dart Frog

The Golden Poison Dart Frog is found on Colombia’s Pacific coast and grows only to approximately the size of a paperclip. However, don’t let its small size fool you: this tiny frog has enough poison in its body to kill 10 grown men. It takes only 2 micrograms, an amount of liquid that would fit on the head of a pin, to kill one individual. The frog releases the poison from glands beneath its skin, so one touch can kill you.


Hyena

The Hyena is a very intelligent animal. Hyenas can weigh up to 190 pounds, and their bite is capable of breaking bones. Although they normally do not attack people, they will if they perceive the human as hurt, sick or incapacitated. Their ability to coordinate hunts enables them to easily capture and kill their prey. Even so, many African people have learned to live peacefully amongst hyenas, some even keeping them as pets.


Bullet Ant

The bullet ant, which was named for its powerful sting, is found in humid lowland rainforests in parts of South America. The sting from one ant causes immediate and extreme pain. The stings attack the central nervous system and can cause paralysis; one sting could incapacitate a full-grown man. Nevertheless, some South American tribes use the sting in initiation rites for becoming warriors.

Gray Wolf

The Gray Wolf is among Eurasia’s and North America’s most feared predators. Wolves are about the size of a medium-to-large dog and travel in packs. What makes them such good predators is their sense of smell: they can smell prey from a great distance and then coordinate attacks with their pack. Wolf attacks on humans are rare, but when they happen they can be deadly, and they are normally directed toward small children.


ENDING THE DEPARTMENT OF EDUCATION


Report: Plan To Force God Into Public Schools Released

A new “Education Reform Report” written for the Trump administration would put the Christian God in public schools.

An alarming report, written by a Christian conservative group with ties to Education Secretary Betsy DeVos, calls for the promotion of Christianity in public schools and an end to the Department of Education.

The Washington Post reports:

A policy manifesto from an influential conservative group with ties to the Trump administration, including Education Secretary Betsy DeVos, urges the dismantling of the Education Department and bringing God into American classrooms.

The “Education Reform Report” released by the Council for National Policy, a front for radical conservative Christians with “ties to several top White House officials — including Education Secretary Betsy DeVos,” states:

We submit this report to the Donald Trump/Betsy DeVos administration with the hope that our organization may be of assistance with the restoration of education in America, in accordance with historic Judeo-Christian principles.

According to the report, education reform under the Trump administration should be based on the following assumptions:

  1. All knowledge and facts have a source, a Creator; they are not self-existent.
  2. Religious neutrality is a myth perpetrated by secularists who destroy their own claim the moment they attempt to enforce it.
  3. Parents and guardians bear final responsibility for their children’s education, with the inherent right to teach, or to choose teachers and schools, whether institutional or not.
  4. No civil government possesses the right to overrule the educational choices of parents and guardians.

The committee responsible for the report adds the following pledge:

The CNP Education Committee pledges itself to work toward achievable goals based on uncompromised principles, so that their very success will provoke a popular return to the Judeo-Christian principles of America’s Founders who, along with America’s pioneers, believed that God belonged in the classroom.

The report calls for the dismantling of the Department of Education, claiming it is “unconstitutional, illegal and contrary to America’s education practice for 300 years from early 17th century to Colonial times.”

The Education Department is to be replaced with a “Presidency’s Advisory Council on Public Education Reform.” The Council would:

  1. Restore Ten Commandments posters to all K-12 public schools.
  2. Clearly post America’s Constitution and Declaration of Independence.
  3. Encourage K-12 schools to recognize traditional holidays (e.g., Easter, Thanksgiving, Christmas) as celebrations of our Judeo-Christian heritage.
  4. Implement select bible classes, such as Chuck Stetson’s Bible Literacy Project.
  5. Encourage instruction on U.S. and world history from the Judeo-Christian perspective for middle school and high school history and civics classes.
  6. Develop and recommend in-service training on philosophy of education for K-12 faculty based on historical Judeo-Christian philosophy of education.
  7. Strongly push states to remove secular-based sex education materials from school facilities, and emphasize parental instruction.

The Washington Post sums it up:

The five-page document produced by the Council for National Policy calls for a “restoration of education in America” that would minimize the federal role, promote religious schools and home schooling and enshrine “historic Judeo-Christian principles” as a basis for instruction.

Bottom line: This is what theocracy looks like.