by Maria Konnikova, 2/4/15

R. T. first heard about the Challenger explosion as she and her roommate sat watching television in their Emory University dorm room. A news flash came across the screen, shocking them both. R. T., visibly upset, raced upstairs to tell another friend the news. Then she called her parents. Two and a half years after the event, she remembered it as if it were yesterday: the TV, the terrible news, the call home. She could say with absolute certainty that that’s precisely how it happened. Except, it turns out, none of what she remembered was accurate.

R. T. was a student in a class taught by Ulric Neisser, a cognitive psychologist who had begun studying memory in the seventies. Early in his career, Neisser became fascinated by the concept of flashbulb memories—the times when a shocking, emotional event seems to leave a particularly vivid imprint on the mind. William James had described such impressions, in 1890, as “so exciting emotionally as almost to leave a scar upon the cerebral tissues.”

The day following the explosion of the Challenger, in January, 1986, Neisser, then a professor of cognitive psychology at Emory, and his assistant, Nicole Harsch, handed out a questionnaire about the event to the hundred and six students in their ten o’clock psychology 101 class, “Personality Development.” Where were the students when they heard the news? Whom were they with? What were they doing? The professor and his assistant carefully filed the responses away.

In the fall of 1988, two and a half years later, the questionnaire was given a second time to the same students. It was then that R. T. recalled, with absolute confidence, her dorm-room experience. But when Neisser and Harsch compared the two sets of answers, they found barely any similarities. According to R. T.’s first recounting, she’d been in her religion class when she heard some students begin to talk about an explosion. She didn’t know any details of what had happened, “except that it had exploded and the schoolteacher’s students had all been watching, which I thought was sad.” After class, she went to her room, where she watched the news on TV, by herself, and learned more about the tragedy.

R. T. was far from alone in her misplaced confidence. When the psychologists rated the accuracy of the students’ recollections for things like where they were and what they were doing, the average student scored less than three on a scale of seven. A quarter scored zero. But when the students were asked about their confidence levels, with five being the highest, they averaged 4.17. Their memories were vivid, clear—and wrong. There was no relationship at all between confidence and accuracy.

At the time of the Challenger explosion, Elizabeth Phelps was a graduate student at Princeton University. After learning about the Challenger study, and other work on emotional memories, she decided to focus her career on examining the questions raised by Neisser’s findings. Over the past several decades, Phelps has combined Neisser’s experiential approach with the neuroscience of emotional memory to explore how such memories work, and why they work the way they do. She has been, for instance, one of the lead collaborators of an ongoing longitudinal study of memories from the attacks of 9/11, where confidence and accuracy judgments have, over the years, been complemented by a neuroscientific study of the subjects’ brains as they make their memory determinations. Her hope is to understand how, exactly, emotional memories behave at all stages of the remembering process: how we encode them, how we consolidate and store them, how we retrieve them. When we met recently in her New York University lab to discuss her latest study, she told me that she has concluded that memories of emotional events do indeed differ substantially from regular memories. When it comes to the central details of the event, like that the Challenger exploded, they are clearer and more accurate. But when it comes to peripheral details, they are worse. And our confidence in them, while almost always strong, is often misplaced.

Within the brain, memories are formed and consolidated largely due to the help of a small seahorse-like structure called the hippocampus; damage the hippocampus, and you damage the ability to form lasting recollections. The hippocampus is located next to a small almond-shaped structure that is central to the encoding of emotion, the amygdala. Damage that, and basic responses such as fear, arousal, and excitement disappear or become muted.

A key element of emotional-memory formation is the direct line of communication between the amygdala and the visual cortex. That close connection, Phelps has shown, helps the amygdala, in a sense, tell our eyes to pay closer attention at moments of heightened emotion. So we look carefully, we study, and we stare—giving the hippocampus a richer set of inputs to work with. At these moments of arousal, the amygdala may also signal to the hippocampus that it needs to pay special attention to encoding this particular moment. These three parts of the brain work together to insure that we firmly encode memories at times of heightened arousal, which is why emotional memories are stronger and more precise than other, less striking ones. We don’t really remember an uneventful day the way that we remember a fight or a first kiss. In one study, Phelps tested this notion in her lab, showing people a series of images, some provoking negative emotions, and some neutral. An hour later, she and her colleagues tested their recall for each scene. Memory for the emotional scenes was significantly higher, and the vividness of the recollection was significantly greater.

When we met, Phelps had just published her latest work, an investigation into how we retrieve emotional memories, which involved collaboration with fellow N.Y.U. neuroscientist Lila Davachi and post-doctoral student Joseph Dunsmoor. In the experiment, the results of which appeared in Nature in late January, a group of students was shown a series of sixty images that they had to classify as either animals or tools. All of the images—ladders, kangaroos, saws, horses—were simple and unlikely to arouse any emotion. After a short break, the students were shown a different sequence of animals and tools. This time, however, some of the pictures were paired with an electric shock to the wrist: two out of every three times you saw a tool, for instance, you would be shocked. Next, each student saw a third set of animals and tools, this time without any shocks. Finally, each student received a surprise memory test. Some got the test immediately after the third set of images, some, six hours later, and some, a day later.

What Dunsmoor, Phelps, and Davachi found came as a surprise: it wasn’t just the memory of the “emotional” images (those paired with shocks) that received a boost. It was also the memory of all similar images—even those that had been presented in the beginning. That is, if you were shocked when you saw animals, your memory of the earlier animals was also enhanced. And, more important, the effect only emerged after six or twenty-four hours: the memory needed time to consolidate. “It turns out that emotion retroactively enhances memory,” Davachi said. “Your mind selectively reaches back in time for other, similar things.” That would mean, for instance, that after the Challenger explosion people would have had better memory for all space-related news in the prior weeks.

The finding was surprising, but also understandable. Davachi gave me an example from everyday life. A new guy starts working at your company. A week goes by, and you have a few uninteresting interactions. He seems nice enough, but you’re busy and not paying particularly close attention. On Friday, in the elevator, he asks you out. Suddenly, the details of all of your prior encounters resurface and consolidate in your memory. They have retroactively gone from unremarkable to important, and your brain has adjusted accordingly. Or, in a more negative guise, if you’re bitten by a dog in a new neighborhood, your memory of all the dogs that you had seen since moving there might improve.

So, if memory for events is strengthened at emotional times, why does everyone forget what they were doing when the Challenger exploded? While the memory of the event itself is enhanced, Phelps explains, the vividness of the memory of the central event tends to come at the expense of the details. We experience a sort of tunnel vision, discarding all the details that seem incidental to the central event.

In the same 2011 study in which Phelps showed people either emotionally negative or neutral images, she also included a second element: each scene was presented within a frame, and, from scene to scene, the color of the frames would change. When it came to the emotional images, memory of color ended up being significantly worse than memory of neutral scenes. Absent the pull of a central, important event, the students took in more peripheral details. When aroused, they blocked the minor details out.

The strength of the central memory seems to make us confident of all of the details when we should only be confident of a few. Because the shock or other negative emotion helps us to remember the animal (or the explosion), we think we also remember the color (or the call to our parents). “You just feel you know it better,” Phelps says. “And even when we tell them they’re mistaken, people still don’t buy it.”

Our misplaced confidence in recalling dramatic events is troubling when we need to rely on a memory for something important—evidence in court, for instance. For now, juries tend to trust the confident witness: she knows what she saw. But that may be changing. Phelps was recently asked to sit on a National Academy of Sciences committee making recommendations about eyewitness testimony in trials. After reviewing the evidence, the committee made several concrete suggestions for changes to current procedures: “blinded” eyewitness identification (that is, the person showing potential suspects to the witness shouldn’t know which suspect the witness is looking at at any given moment, to avoid giving subconscious cues); standardized instructions to witnesses, along with extensive police training in vision and memory research as it relates to eyewitness testimony; videotaped identification; expert testimony early in trials about the issues surrounding eyewitness reliability; and early, clear jury instruction on any prior identifications (when and how prior suspects were identified, how confident the witness was at first, and the like). If the committee’s conclusions are taken up, the way memory is treated may, over time, change from something unshakeable to something much less valuable to a case. “Something that is incredibly adaptive normally may not be adaptive somewhere like the courtroom,” Davachi says. “The goal of memory isn’t to keep the details. It’s to be able to generalize from what you know so that you are more confident in acting on it.” You run away from the dog that looks like the one that bit you, rather than standing around questioning how accurate your recall is.

“The implications for trusting our memories, and getting others to trust them, are huge,” Phelps says. “The more we learn about emotional memory, the more we realize that we can never say what someone will or won’t remember given a particular set of circumstances.” The best we can do, she says, is to err on the side of caution: unless we are talking about the most central part of the recollection, assume that our confidence is misplaced. More often than not, it is.




Shigellosis is an infectious disease caused by a group of bacteria called Shigella (shih-GEHL-uh). Most people infected with Shigella develop diarrhea, fever, and stomach cramps starting a day or two after they are exposed to the bacteria. Shigellosis usually resolves in 5 to 7 days. Feb 10, 2016

Six confirmed cases of shigellosis possibly linked to Deer Creek State Park in Ohio

PICKAWAY CO, OH (WCMH) – Health investigators are working to figure out how six people in Pickaway County became sick and tested positive for Shigella bacteria.

Health officials say shigellosis is an infection caused by Shigella bacteria.

Many people have never heard of the infection, but officials say it’s easy to spread and fairly common.

“It causes diarrhea. Individuals who have it might have some stomach cramps and usually the person gets dehydrated,” said Dr. Mysheika Roberts.

It’s passed from person to person and most often found in daycares.

“Kids come into the daycare center, we know how kids are with their hands and their fingers. They touch everything,” said Roberts.

Beaches, pools and other outdoor areas also prove to be popular breeding grounds.

“People are out there, they are having fun and are swimming. They think they are in the water so their hands are clean when they go to consume something, not realizing that their hands could still be contaminated.”

Dr. Mysheika Roberts says outbreaks can be in the hundreds if not caught in time. In 2008, there were more than 500 confirmed cases. But there is no specific reason why the number of cases fluctuates from year to year. “It can be spread to other people before one person has diarrhea and realizes that they’re sick.”

The best way to prevent it? Washing your hands, and not just after using the bathroom.

“Before you consume anything, put something in your eyes, your nose or your mouth, make sure that you wash your hands,” said Dr. Roberts. Especially when handling dirty diapers.

Dr. Roberts says a lot of times adults aren’t washing their hands enough in between changing diapers. Despite it not being your average household bacteria, Roberts says it’s common and can spread fast. When the health department gets word of cases, they track them down. “If we identify a case that is E. coli or Shigella and we find out they’re a food worker, we would call their employer and tell them they were not allowed to work until they are cleared.”

Other tips:

– Shower before swimming. Chlorine doesn’t always kill all bacteria.

– Don’t cook for anyone else if you’ve had diarrhea recently.

– Stay home from work and don’t go out into public places.


What are some of Donald Trump’s overused phrases and turns of speech?


By Tom Higgins, who followed Trump’s campaign since the beginning


Let me tell you something, this is one of the most pathetic, stupid questions I have ever seen on Quora! Sad! I kid you not, we need to report this person to Quora moderation, and no one does that better than me, believe me.

I have gotten so fed up with Quora lately, I kid you not. I tell you what, I’m going to make a new site. It will be a very good site, and I will deport people like you from that site. And honestly, if you don’t like it… I’m sorry. And quite frankly, Quirky Quorans like you are just as bad as Crooked Hillary.

I tell you what, if we can’t make Quora great again, I will make a new site. It will be so good -and let me tell you something- people will say it is a very good site. I will have the best site, I kid you not. It will be so good.

And you know what? I will build a great, great firewall around this site. It will keep all the hackers out – won’t even have to deport them! I tell you what, this site will be so amazing. It will be the best site. In fact, it will be so great, I’m going to name it after myself. I’m going to call it Trump.com and it will be the greatest. Isn’t it the greatest? Am I right?


Ariel Williams, Dreamer, Writer, Artist and lifelong US resident.


The 60 most commonly used words by Trump are….

  1. Going 2271 Times
  2. Know 1608 Times
  3. People 1504 Times
  4. Want 911 Times
  5. Think 753 Times
  6. Great 728 Times
  7. Right 608 Times
  8. Country 556 Times
  9. Lot 453 Times
  10. Money 438 Times
  11. Look 435 Times
  12. Good 407 Times
  13. Mean 395 Times
  14. Way 391 Times
  15. Make 375 Times
  16. Really 367 Times
  17. Love 339 Times
  18. Time 331 Times
  19. Doing 331 Times
  20. Trump 321 Times
  21. Tell 315 Times
  22. Win 314 Times
  23. Big 304 Times
  24. Thing 280 Times
  25. Things 273 Times
  26. Believe 271 Times
  27. World 257 Times
  28. Okay 256 Times
  29. Come 255 Times
  30. Deal 249 Times
  31. Everybody 246 Times
  32. Guy 246 Times
  33. China 243 Times
  34. Years 226 Times
  35. Million 225 Times
  36. Thank 220 Times
  37. President 211 Times
  38. Wall 211 Times
  39. Happen 199 Times
  40. Talk 199 Times
  41. Number 190 Times
  42. Actually 186 Times
  43. Talking 182 Times
  44. America 181 Times
  45. Mexico 177 Times
  46. Little 167 Times
  47. Saying 166 Times
  48. Trade 164 Times
  49. Hillary 160 Times
  50. States 159 Times
  51. Better 155 Times
  52. Incredible 147 Times
  53. Remember 147 Times
  54. Person 147 Times
  55. Problem 143 Times
  56. Amazing 142 Times
  57. Probably 140 Times
  58. Billion 140 Times
  59. Tremendous 136 Times
  60. Somebody 135 Times

And all of that gives us what I think people in this country are going to want to know….
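A frequency list like the one above can be reproduced with a simple word count. Here is a minimal Python sketch of the idea; the transcript snippet and the stop-word list are illustrative stand-ins, not the actual corpus or filtering behind the tallies above:

```python
from collections import Counter
import re

# Hypothetical snippet; the real tallies came from a large corpus
# of campaign speeches and debate transcripts.
transcript = "We're going to win. We're going to win so much, believe me."

# Common function words are excluded so content words rise to the top,
# as in the list above. This set is a stand-in, not the authors' list.
STOP_WORDS = {"the", "a", "to", "so", "me", "were", "we're", "and", "of"}

words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(w for w in words if w not in STOP_WORDS)

for word, n in counts.most_common(5):
    print(f"{word}: {n} times")
```

Run over thousands of transcript pages instead of one sentence, the same few lines yield a ranked table like the sixty entries above.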



Quote: “A former NASA astronaut and family doctor says the 100-plus-age guideline for ‘normal’ blood pressure was around for decades.”

Everyone has different blood pressure readings: some are high, some low, and most are in the middle. Many doctors have long held the belief that an acceptable systolic blood pressure reading is 100 plus your age.

Modern physicians say normal blood pressure takes no account of age: a reading of 120/80 mm Hg is normal regardless, according to Mayo Clinic staff. But early-twentieth-century users of the blood pressure cuff followed a “100-plus-age” rule of thumb to determine what was normal for a given age. Early-twenty-first-century doctors accepted increased “normal” rates as patients age, but within a much more limited range, according to the Mayo Clinic.

So now the question arises: just what is high blood pressure?

Despite accepting the 100-plus-your-age reading in the past, today’s medical textbooks argue over exact values, and new blood pressure standards come out every few years, placing the desirable target values ever lower. Is medicine fueling this war, or might it be the multi-billion-dollar pharmaceutical industry? Lowering the target by just 5 points on the blood pressure scale can mean billions of dollars. Among the top ten drugs prescribed in the U.S., blood pressure (hypertension) medications ensnare millions into the prescription drug trap.

A former NASA astronaut and family doctor says the 100-plus-age guideline for “normal” blood pressure was around for decades. He also wonders if modern views on what is normal blood pressure arise from drug company involvement. “In the 1970s, the target limit for initiating drug treatment was 160/95. This has now become 140/90 (and more recently, 115/75), with a large number of organizations listed as in agreement,” he says.

Blood pressures tend to rise naturally with age in both men and women, so that the 130 systolic blood pressure of a 30-year-old (roughly 100 plus the age) becomes 150 in a 50-year-old and 160 in a 60-year-old, with male readings exceeding female by around 10 mm Hg. The systolic pressure is the pressure peak with each beat of the heart (systole), and the diastolic pressure is the basal pressure that remains in the blood vessels during relaxation of the heart (diastole).

In a dramatic reversal in policy, on May 4, 2000, an expert committee announced that systolic pressure is the most accurate blood pressure measurement for older adults. The new guidelines hold true for all those with hypertension who are over age 40 – a group that makes up the majority of 50 million Americans with the disease.

Since blood pressure elevation is associated with increased all-cause death rates, lowering of blood pressures by whatever means can only be good for humanity – can’t it? Well, the pharmaceutical industry loves it – this focus makes them billions of dollars. The medical community loves it – it’s good for business and seems ethically correct, and the public likes it. So began the worldwide focus on lowering blood pressure, the evolution of thousands of drugs designed to lower blood pressure, and of course, the beginning of a still growing multi-billion dollar business.

High blood pressure, as defined by the drug industry and medical doctors, is not an instant death sentence. The goal of maintaining a blood pressure at or near 140/80 (now 115/75) is based on drug company hype, not science. These numbers are designed to sell drugs by converting healthy people into patients. If high blood pressure were dangerous, then lowering it with hypertension drugs would increase lifespan. Yet, clinical trials involving hypertension medication show no increased lifespan among users when compared to non-users.

A “normal” systolic blood pressure equals 100 plus your age.

Until a decade ago, this statement was a commonly quoted assumption regarding the natural history of blood pressure with aging. It suggested that a systolic blood pressure of 170 was normal in a 70-year-old individual. Today, we know this to be absolutely false. We now have sufficient clinical evidence for older Americans with both systolic and diastolic hypertension, and particularly with isolated systolic hypertension (ISH), verifying the benefits of aggressive treatment of blood pressure in older Americans. There is very firm data supporting treatment of systolic blood pressure down to 150 mm Hg in older hypertensive people. Therefore, this marker is an appropriate initial goal in most older patients with ISH. For many, a blood pressure below 140/90 mm Hg represents an appropriate treatment goal today.

Blood Pressure and Heart Disease

By Duane Graveline, M.D., M.P.H.

Only a century ago the blood pressure cuff was added to every doctor’s black bag. Only then did the terms systolic and diastolic blood pressure begin to become commonplace in our society.

Systolic pressure is the maximum blood pressure — achieved when the heart is contracting — and diastolic is the minimum pressure. Usually expressed as a number over a number, as in 125 over 75, the systolic number comes first, followed by the diastolic number.

As a child of the Depression, I remember my grandmother’s blood pressure of some 300 over 100 “maxing out” old Doctor Piette’s newfangled sphygmomanometer, much to his amusement, for she lived well into her nineties, finally passing of cancer.

The medical community then was of the belief that elevations of blood pressure were a normal part of the aging process and did not consider these elevations a significant risk to health.

Back in those days, if you survived infections and accidents, most of you were going to die from heart attacks and strokes and the rest from cancer and that was it. Then along came the Framingham study and the promise that much of this tendency for death was preventable if we could just control our blood pressures.

Although this focus obviously was on just premature deaths, this truism was hardly noted in the uproar of a society noisily responsive to the possibility of stealing years of life from the grim reaper. The drug industry liked it, the doctors liked it and the public liked it.

So began our national focus on blood pressure, the evolution of thousands of drugs designed to lower blood pressures and the beginning of a still growing, multi-billion dollar business.

The one thing that old Doctor Piette found when he started using his new blood pressure cuff was that everyone had a different pressure. It was just like heights and weights and about every other thing he measured in people, even their blood sugars – no one had the same value. Some would be high, some would be low and most would be in the middle.

Along came a fellow good with numbers by the name of Gauss, who even had this tendency named for him – the so-called Gaussian frequency distribution pattern.

It looked just like the outline of the bell in a church tower, hence the name bell-shaped curve. Plot the measurement (blood pressure, height, weight – whatever) on the horizontal axis and the percentage of people with that measurement on the vertical axis, and out comes a bell-shaped curve.

A few really busy guys with not much else to do, apparently, took the blood pressures of 350,000 adult males and plotted the frequency distribution of their systolic and diastolic pressures separately. The systolic pressure is the pressure peak with each beat of the heart (systole) and the diastolic pressure is the basal pressure that is in the blood vessels even during relaxation of the heart (diastole).

The result was the bell shaped frequency distribution curves. The third of a million participants in this study were not pre-selected, representing your run of the mill, off the street, normal systolic and diastolic blood pressure values of fellow human beings.

Additional research revealed that blood pressure tends to rise naturally with age in both men and women, so that the 120 systolic blood pressure of a 20-year-old (roughly 100 plus the age) becomes 140 in a 40-year-old and 160 in a 60-year-old, with male values exceeding female by some 10 mm Hg.

This reality for both systolic and diastolic pressures has been known to practicing physicians for many years. Even Dr. Piette soon learned about the 100-plus-your-age rule, and it took much more than 180/95 in an octogenarian to get his attention. He didn’t have any blood pressure pills then, so it didn’t make much difference anyhow.
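The old rule of thumb described above is simple enough to write down explicitly. Here is a minimal sketch; the function name is illustrative, and the 10 mm Hg female adjustment is taken from the figures quoted in the text:

```python
def old_rule_systolic(age, sex="male"):
    """'Normal' systolic pressure under the old 100-plus-your-age
    rule of thumb, with female values running ~10 mm Hg below male."""
    base = 100 + age
    return base - 10 if sex == "female" else base

# The examples from the text: 120 at 20, 140 at 40, 160 at 60.
print(old_rule_systolic(20))  # 120
print(old_rule_systolic(40))  # 140
print(old_rule_systolic(60))  # 160
```

By this rule, the octogenarian with a pressure of 180 who wouldn’t have worried Dr. Piette is exactly “normal” – whereas under a fixed 120/80 standard, nearly every older patient becomes hypertensive.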

Meanwhile, the scientists involved with our famous Framingham study had made the astounding correlation that the higher the systolic blood pressures of their Framingham community study patients, the higher their all-cause death rates. The data they reported to the medical and scientific community would soon radically change medical philosophy with respect to the treatment of high blood pressure.

Their thinking went something like this: Since blood pressure elevation is associated with increased all-cause death rates, lowering of blood pressure by whatever means could only be good for humanity and the race was on. The pharmaceutical industry loved it – this focus was to make them billions. The medical community loved it – it was good for business and seemed ethically correct. The patients sort of loved it – they were stealing time from the grim reaper.

So now the question arises, just what is high blood pressure? Despite our past many decades of accepting 100 plus your age, more or less, today’s medical textbooks are warring over exact values, and new national standards come out every few years, placing the desirable blood pressure target values ever lower.

Is medicine fueling this war or might it just be the multi-billion dollar pharmaceutical industry? For to lower the target value 5 points on the blood pressure scale can mean billions.

A recent study by a group of UCLA researchers came perilously close to corroborating Dr. Piette’s guiding rule of 100 plus your age for men, subtracting 10 for women – and this after the “golden rule” had been in use for five or more decades. Goodman and Gilman, without considering age, advise that robust evidence supports the treatment of diastolic hypertension of 95 or above, yet only 15% of men are in this category.

The Public Citizen’s Health Research Group strongly advises consideration of age in determining whether or not treatment is justified, suggesting that in the elderly only pressures equal to or above 180/100 might be treated beneficially with drugs. In the 1970s the target limits for initiating drug treatment was 160/95. This has now become 140/90 with a large number of organizations listed as in agreement.

Obviously there are no hard and fast guidelines for the practicing physician – fully 30% of his patient population will require drug treatment for high blood pressure by today’s standards. And many of his patients will be 70- and 80-year-olds whose systolic pressures of 170 and 180 are completely normal for their age.

There is much food for thought here. Who is really benefiting from this trend? Certainly the pharmaceutical industry is reaping ever-growing rewards. I wish I could be certain that the patients are benefiting at all.

When one considers the side effect profile of most of the anti-hypertensive medications, even if we add a few months of life what are we as a society really doing? And on this subject of thwarting the grim reaper, the results of long-term studies are far from reassuring. Why am I not surprised?

Duane Graveline MD MPH
Former USAF Flight Surgeon
Former NASA Astronaut
Retired Family Doctor

Updated February 2016

Poor at 20, Poor for Life


That’s the conclusion of a new paper by Michael D. Carr and Emily E. Wiemers, two economists at the University of Massachusetts Boston. In the paper, Carr and Wiemers used earnings data to measure how fluidly people move up and down the income ladder over the course of their careers. “It is increasingly the case that no matter what your educational background is, where you start has become increasingly important for where you end,” Carr told me. “The general amount of movement around the distribution has decreased by a statistically significant amount.”

Carr and Wiemers used data from the Census Bureau’s Survey of Income and Program Participation, which tracks individual workers’ earnings, to examine how earnings mobility changed between 1981 and 2008. They ranked people into deciles, meaning that one group fell below the 10th percentile of earnings, another between the 10th and 20th, and so on; then they measured someone’s chances of moving from one decile to another. But the researchers wanted to see not just the probability of moving to a different income bracket over the course of a career, but also how that probability has changed over time. So they measured a given worker’s chances of moving between deciles during two periods, one from 1981 to 1996 and another from 1993 to 2008.
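The ranking-and-transition method the researchers describe can be sketched in a few lines. This toy version uses simulated earnings rather than the SIPP data, and every name and parameter in it is illustrative, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated log-earnings at the start and end of a career window;
# the positive correlation stands in for persistence in rank.
start = rng.normal(size=n)
end = 0.8 * start + 0.6 * rng.normal(size=n)

def to_deciles(x):
    """Rank each worker into earnings deciles 0..9."""
    cuts = np.quantile(x, np.linspace(0.1, 0.9, 9))
    return np.searchsorted(cuts, x)

d_start, d_end = to_deciles(start), to_deciles(end)

# 10x10 transition matrix: rows are starting deciles, columns are
# ending deciles, each row normalized into probabilities.
counts = np.zeros((10, 10))
np.add.at(counts, (d_start, d_end), 1)
transition = counts / counts.sum(axis=1, keepdims=True)

# Probability of ending where you start, by starting decile --
# the quantity the paper reports rising over time.
stay_put = np.diag(transition)
print(stay_put.round(2))
```

Comparing such a matrix computed for 1981–1996 against one for 1993–2008, as the paper does, is what reveals whether the diagonal (staying put) has grown and the upward off-diagonal entries have shrunk.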

They found quite a disparity. “The probability of ending where you start has gone up, and the probability of moving up from where you start has gone down,” Carr said. For instance, the chance that someone starting in the bottom 10 percent would move above the 40th percentile decreased by 16 percent. The chance that someone starting in the middle of the earnings distribution would reach one of the top two earnings deciles decreased by 20 percent. Yet people who started in the seventh decile are 12 percent more likely to end up in the fifth or sixth decile—a drop in earnings—than they used to be.

Overall, the probability of someone starting and ending their career in the same decile has gone up for every income rank. “For whatever reason, there was a path upward in the earnings distribution that has been blocked for some people, or is not as steep as it used to be,” Carr said.

Carr and Wiemers’ findings highlight a defining aspect of being middle class today, says Elisabeth Jacobs, the senior director for policy and academic programs at the Washington Center for Equitable Growth, the left-leaning think tank that published Carr and Wiemers’ paper. “If you’re in the middle, you’re stuck in the middle, which means there’s less space for others to move into the middle,” she said. “That suggests there’s just a whole bunch of insecurity going on in terms of what it means to be a worker. You can’t educate your way up.”

This lack of mobility holds even for people with a college degree, the researchers found. Many college-educated workers started their careers at higher earnings deciles than those before them did, but also tended to end their careers in a lower decile than their predecessors. Women with college degrees also started off their careers earning at a higher decile than they used to, and the presence of more college-educated women in the workforce could be making it harder for men to move up the ranks.

Carr and Wiemers aren’t sure exactly why the American economy has become less conducive to economic mobility. The decline in unions may play a role: Organized labor was once better able to negotiate pay raises for its members, whatever their career stage. Carr and Wiemers also cite the work of the economist David Autor, who has found that the number of jobs at the bottom and the top of the pay scale is increasing, while the number of jobs in the middle isn’t. If there were more employment growth in the middle, those who start out at the bottom might have a better shot at moving up.

Increasing income inequality may play a role, too. Carr and Wiemers found that the earnings of the people in the top decile are much higher than they used to be, compared to the overall population. That means it is increasingly harder to reach those top ranks. “In the presence of increasing inequality,” they conclude, “falling mobility implies that as the rungs of the ladder have moved farther apart, moving between them has become more difficult.”


  • ALANA SEMUELS is a staff writer at The Atlantic. She was previously a national correspondent for the Los Angeles Times.


Only a new economic system can save us.

Barrow in Furness: coastal erosion reveals old landfill rubbish buried in sea cliffs.



Friday 15 July 2016

It’s time to pour our creative energies into imagining a new global economy. Infinite growth is a dangerous illusion

Earlier this year media outlets around the world announced that February had broken global temperature records by a shocking amount. March broke all the records, too. In June our screens were covered with surreal images of Paris flooding, the Seine bursting its banks and flowing into the streets. In London, the floods sent water pouring into the tube system right in the heart of Covent Garden. Roads in south-east London became rivers two metres deep.

With such extreme events becoming more commonplace, few deny climate change any longer. Finally, a consensus is crystallising around one all-important fact: fossil fuels are killing us. We need to switch to clean energy, and fast.

But while this growing awareness about the dangers of fossil fuels represents a crucial shift in our consciousness, I can’t help but fear we’ve missed the point. As important as clean energy might be, the science is clear: it won’t save us from climate change.

Let’s imagine, just for argument’s sake, that we are able to get off fossil fuels and switch to 100% clean energy. There is no question this would be a vital step in the right direction, but even this best-case scenario wouldn’t be enough to avert climate catastrophe.

Why? Well, first, the burning of fossil fuels only accounts for about 70% of all anthropogenic greenhouse gas emissions. The other 30% comes from a number of causes, including deforestation, and industrial livestock farming, which produces 90m tonnes of methane per year and most of the world’s anthropogenic nitrous oxide. Both of these gases are vastly more potent than CO2 when it comes to global warming. Livestock farming alone contributes more to global warming than all the cars, trains, planes and ships in the world. There are also a number of industrial processes that contribute significantly, and then there are our landfills, which pump out huge amounts of methane – 16% of the world’s total.

Jeffrey’s Bay wind farm in South Africa. Photograph: Nic Bothma/EPA

But when it comes to climate change, the problem is not just the type of energy we are using, it’s what we’re doing with it. What would we do with 100% clean energy? Exactly what we are doing with fossil fuels: raze more forests, build more meat farms, expand industrial agriculture, produce more cement, and fill more landfill sites, all of which will pump deadly amounts of greenhouse gas into the air. We will do these things because our economic system demands endless compound growth, and for some reason we have not thought to question this.

Think of it this way. That 30% chunk of greenhouse gases that comes from non-fossil fuel sources isn’t static. It is adding more to the atmosphere each year. Scientists project that our tropical forests will be completely destroyed by 2050, releasing a 200bn tonne carbon bomb into the air. The world’s topsoils could be depleted within just 60 years, releasing more still. Emissions from the cement industry are growing at more than 9% per year. And our landfills are multiplying at an eye-watering pace: by 2100 we will be producing 11m tonnes of solid waste per day, three times more than we do now. Switching to clean energy will do nothing to slow this down.

The climate movement made an enormous mistake. We focused all our attention on fossil fuels, when we should have been pointing to something much deeper: the basic logic of our economic operating system. After all, we’re only using fossil fuels in the first place to fuel the broader imperative of GDP growth.

The root problem is the fact that our economic system demands ever-increasing levels of extraction, production and consumption. Our politicians tell us that we need to keep the global economy growing at more than 3% each year – the minimum necessary for large firms to make aggregate profits. That means every 20 years we need to double the size of the global economy – double the cars, double the fishing, double the mining, double the McFlurries and double the iPads. And then double them again over the next 20 years from their already doubled state.
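The doubling claim above is standard compound-growth arithmetic, not something specific to the article’s sources: an economy growing at a constant annual rate r doubles in ln(2)/ln(1+r) years, which comes to roughly 20 years at 3.5% growth and closer to 23 at 3%. A quick check:

```python
import math

def doubling_time(growth_rate):
    # Years for a quantity growing at `growth_rate` per year to double.
    return math.log(2) / math.log(1 + growth_rate)

for rate in (0.03, 0.035):
    print(f"{rate:.1%} growth -> doubles in {doubling_time(rate):.1f} years")
```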


Toy car factory in China. Photograph: Feature China/Barcroft Images


Our more optimistic pundits claim that technological innovations will help us to decouple economic growth from material throughput. But sadly there is no evidence that this is happening. Global material extraction and consumption have grown by 94% since 1980, and are still going up. Current projections show that by 2040 we will more than double the world’s shipping miles, air miles, and trucking miles – along with all the material stuff that those vehicles transport – almost exactly in keeping with the rate of GDP growth.

Clean energy, important as it is, won’t save us from this nightmare. But rethinking our economic system might. GDP growth has been sold to us as the only way to create a better world. But we now have robust evidence that it doesn’t make us any happier, it doesn’t reduce poverty, and its “externalities” produce all sorts of social ills: debt, overwork, inequality, and climate change. We need to abandon GDP growth as our primary measure of progress, and we need to do this immediately – as part and parcel of the climate agreement that will be ratified in Morocco later this year.

It’s time to pour our creative power into imagining a new global economy – one that maximises human well-being while actively shrinking our ecological footprint. This is not an impossible task. A number of countries have already managed to achieve high levels of human development with very low levels of consumption. And Daniel O’Neill, an economist at the University of Leeds, has demonstrated that even material de-growth is not incompatible with high levels of human well-being.

Our focus on fossil fuels has lulled us into thinking we can continue with the status quo so long as we switch to clean energy, but this is a dangerously simplistic assumption. If we want to stave off disaster, we need to confront its underlying cause.




An Old Drug Might Treat Zika




By Jeanette Kazmierczak, July 11, 2016

Few drugs are known to be safe for pregnant women and their unborn children. So there are limited options available for scientists trying to protect pregnant women from the Zika virus — a disease that can lead to microcephaly, or an underdeveloped brain, in fetuses. “The chemical space we’re allowed to play in is crazy small,” said Joseph DeRisi, a biochemist at the University of California, San Francisco, in our recent article “New Insights Into How Zika Harms the Brain.”
DeRisi recently co-authored a paper that highlights a promising option. He and his colleagues introduced the Zika virus into dish-grown cells and then treated them with a variety of drugs. They found that the antibiotic azithromycin stopped the virus from multiplying. This drug has the added bonus of being widely considered safe for pregnant women. The paper was posted on a preprint server and has not yet been peer-reviewed.

Many other groups are taking the same approach — testing drugs known to be safe to see if they work against Zika. Any drug that shows promise will need to be further tested in animals, where many will fail. A recent study showed that a certain malaria drug limited the growth of Zika in cells, but subsequent unpublished tests in mice have dashed researchers’ hopes.

Biologists working to understand Zika’s basic biology are keenly aware that there’s a desperate need for a treatment. “This team worked around the clock 24/7, practically, no weekends off, nothing,” DeRisi said in the article. “Just flat-out pedal to the metal.”

The Second Amendment Was Ratified to Preserve Slavery

By Thom Hartmann, Truthout | News Analysis

(Photo: Birmingham Museum and Art Gallery)

The real reason the Second Amendment was ratified, and why it says “State” instead of “Country” (the framers knew the difference — see the 10th Amendment), was to preserve the slave patrol militias in the southern states, which was necessary to get Virginia’s vote. Founders Patrick Henry, George Mason and James Madison were totally clear on that… and we all should be too.

In the beginning, there were the militias. In the South, they were also called the “slave patrols,” and they were regulated by the states.

In Georgia, for example, a generation before the American Revolution, laws were passed in 1755 and 1757 that required all plantation owners or their male white employees to be members of the Georgia Militia, and for those armed militia members to make monthly inspections of the quarters of all slaves in the state. The law defined which counties had which armed militias and even required armed militia members to keep a keen eye out for slaves who may be planning uprisings.


As Dr. Carl T. Bogus wrote for the University of California at Davis Law Review in 1998, “The Georgia statutes required patrols, under the direction of commissioned militia officers, to examine every plantation each month and authorized them to search ‘all Negro Houses for offensive Weapons and Ammunition’ and to apprehend and give twenty lashes to any slave found outside plantation grounds.”

It’s the answer to the question raised by the character played by Leonardo DiCaprio in Django Unchained when he asks, “Why don’t they just rise up and kill the whites?” If the movie were real, it would have been a purely rhetorical question, because every southerner of the era knew the simple answer: Well regulated militias kept the slaves in chains.

Sally E. Hadden, in her book Slave Patrols: Law and Violence in Virginia and the Carolinas, notes that “Although eligibility for the Militia seemed all-encompassing, not every middle-aged white male Virginian or Carolinian became a slave patroller.” There were exemptions so “men in critical professions” like judges, legislators and students could stay at their work. Generally, though, she documents how most southern men between ages 18 and 45 — including physicians and ministers — had to serve on slave patrol in the militia at one time or another in their lives.

And slave rebellions were keeping the slave patrols busy.

By the time the US Constitution was ratified, hundreds of substantial slave uprisings had occurred across the South. Blacks outnumbered whites in large areas, and the state militias were used both to prevent and to put down slave uprisings. As Dr. Bogus points out, slavery can only exist in the context of a police state, and the enforcement of that police state was the explicit job of the militias.

If the anti-slavery folks in the North had figured out a way to disband — or even move out of the state — those southern militias, the police state of the South would collapse. And, similarly, if the North were to invite into military service the slaves of the South, then they could be emancipated, which would collapse the institution of slavery, and the southern economic and social systems, altogether.

These two possibilities worried southerners like James Monroe, George Mason (who owned more than 300 slaves) and the southern Christian evangelical, Patrick Henry (who opposed slavery on principle, but also opposed freeing slaves).

Their main concern was that Article 1, Section 8 of the newly-proposed US Constitution, which gave the federal government the power to raise and supervise a militia, could also allow that federal militia to subsume their state militias and change them from slavery-enforcing institutions into something that could even, one day, free the slaves.

This was not an imagined threat. Famously, 12 years earlier, during the lead-up to the Revolutionary War, Lord Dunmore offered freedom to slaves who could escape and join his forces. “Liberty to Slaves” was stitched onto their jacket pocket flaps. During the war, British General Henry Clinton extended the practice in 1779. Numerous freed slaves served in General Washington’s army.

Thus, southern legislators and plantation owners lived not just in fear of their own slaves rebelling, but also in fear that their slaves could be emancipated through military service.

At the ratifying convention in Virginia in 1788, Henry laid it out:

“Let me here call your attention to that part [Article 1, Section 8 of the proposed Constitution] which gives the Congress power to provide for organizing, arming, and disciplining the militia, and for governing such part of them as may be employed in the service of the United States….

“By this, sir, you see that their control over our last and best defence is unlimited. If they neglect or refuse to discipline or arm our militia, they will be useless: the states can do neither … this power being exclusively given to Congress. The power of appointing officers over men not disciplined or armed is ridiculous; so that this pretended little remains of power left to the states may, at the pleasure of Congress, be rendered nugatory.”

George Mason expressed a similar fear:

“The militia may be here destroyed by that method which has been practised in other parts of the world before; that is, by rendering them useless, by disarming them. Under various pretences, Congress may neglect to provide for arming and disciplining the militia; and the state governments cannot do it, for Congress has an exclusive right to arm them [under this proposed Constitution]…. “

Henry then bluntly laid it out:

“If the country be invaded, a state may go to war, but cannot suppress [slave] insurrections [under this new Constitution]. If there should happen an insurrection of slaves, the country cannot be said to be invaded. They cannot, therefore, suppress it without the interposition of Congress…. Congress, and Congress only [under this new Constitution], can call forth the militia.”

And why was that such a concern for Patrick Henry?

“In this state,” he said, “there are two hundred and thirty-six thousand blacks, and there are many in several other states. But there are few or none in the Northern States…. May Congress not say, that every black man must fight? Did we not see a little of this last war? We were not so hard pushed as to make emancipation general; but acts of Assembly passed that every slave who would go to the army should be free.”

Patrick Henry was also convinced that the power over the various state militias given the federal government in the new Constitution could be used to strip the slave states of their slave-patrol militias. He knew the majority attitude in the North opposed slavery, and he worried they’d use the Constitution to free the South’s slaves (a process then called “Manumission”).

The abolitionists would, he was certain, use that power (and, ironically, this is pretty much what Abraham Lincoln ended up doing):

“[T]hey will search that paper [the Constitution], and see if they have power of manumission,” said Henry. “And have they not, sir? Have they not power to provide for the general defence and welfare? May they not think that these call for the abolition of slavery? May they not pronounce all slaves free, and will they not be warranted by that power?

“This is no ambiguous implication or logical deduction. The paper speaks to the point: they have the power in clear, unequivocal terms, and will clearly and certainly exercise it.”

He added: “This is a local matter, and I can see no propriety in subjecting it to Congress.”

James Madison, the “Father of the Constitution” and a slaveholder himself, basically called Patrick Henry paranoid.

“I was struck with surprise,” Madison said, “when I heard him express himself alarmed with respect to the emancipation of slaves…. There is no power to warrant it, in that paper [the Constitution]. If there be, I know it not.”

But the southern fears wouldn’t go away.

Patrick Henry even argued that southerners’ “property” (slaves) would be lost under the new US Constitution, and the resulting slave uprising would be less than peaceful or tranquil:

“In this situation,” Henry said to Madison, “I see a great deal of the property of the people of Virginia in jeopardy, and their peace and tranquility gone.”

So Madison, who had (at Jefferson’s insistence) already begun to prepare proposed amendments to the US Constitution, changed his first draft of one that addressed the militia issue to make sure it was unambiguous that the southern states could maintain their slave patrol militias.

His first draft for what became the Second Amendment had said: “The right of the people to keep and bear arms shall not be infringed; a well armed, and well regulated militia being the best security of a free country [emphasis mine]: but no person religiously scrupulous of bearing arms, shall be compelled to render military service in person.”

But Henry, Mason and others wanted southern states to preserve their slave-patrol militias independent of the federal government. So Madison changed the word “country” to the word “state,” and redrafted the Second Amendment into today’s form:

“A well regulated Militia, being necessary to the security of a free State[emphasis mine], the right of the people to keep and bear Arms, shall not be infringed.”

Little did Madison realize that one day in the future weapons-manufacturing corporations, newly defined as “persons” by a Supreme Court some have called dysfunctional, would use his slave patrol militia amendment to protect their “right” to manufacture and sell assault weapons used to murder schoolchildren.

Copyright, Truthout. May not be reprinted without permission of the author.


Thom Hartmann is a New York Times-bestselling, Project Censored-award-winning author and host of a nationally syndicated progressive radio talk show. You can learn more about Thom Hartmann at his website and find out what stations broadcast his radio program. He also now has a daily independent television program, “The Big Picture,” syndicated by FreeSpeech TV, RT TV, and 200 community TV stations. You can also listen or watch Thom on the internet.