ALISAN PORTER WINS TENTH SEASON OF THE VOICE


Alisan Porter, a former child star, beat out Adam Wakefield, Hannah Huston and Laith Al-Saadi, who placed second, third and fourth. She performed “Somewhere” from “West Side Story” for her finale. Porter (born June 20, 1981) is an American singer, blogger, and former actress and dancer, known for her lead role in the 1991 film Curly Sue. In 2016, she won season 10 of NBC’s The Voice as a member of Team Christina.

Born to Jewish parents in Worcester, Massachusetts, Porter is the great-granddaughter of the late, prominent Worcester rabbi Joseph Klein. Her maternal grandmother ran the Charlotte Klein Dance Center in Worcester. Porter’s mother, Laura Klein, also taught dance, coaching Diane and Elaine Klimaszewski, who appeared on Star Search in 1987 in the junior dance category before becoming better known as the Coors Light Twins. Her father, Ric Porter, was co-founder, lead singer, and songwriter of the Worcester band Zonkaraz. While Porter was in Los Angeles for the twins’ appearance, the show’s producer heard her singing in the hotel lobby and put her on the very next episode. Porter has been singing and performing since the age of three; at five, she became the youngest Star Search participant ever to win the competition.

As an actress, she is known for her lead role as Curly Sue in the 1991 movie Curly Sue starring Jim Belushi.

Porter continued acting at Staples High School in Westport, Connecticut, where she joined a theatre group and participated in various shows. When she was eighteen, she moved to New York to audition for Broadway shows. She was cast as Urleen in the show Footloose.

Porter is based in Los Angeles, where she has been working on her music career. She starred as Miriam in The Ten Commandments: The Musical at the Kodak Theater in Los Angeles alongside Val Kilmer, Adam Lambert, and Broadway star Lauren Kennedy. In 2003 she formed the band The Raz, serving as its vocalist and main songwriter; the group split up in 2004. In March 2005, she announced the formation of a new band, The Alisan Porter Project. In 2006, she performed as Bebe Benzenheimer in the Broadway revival of A Chorus Line at the Gerald Schoenfeld Theatre in New York City.

On October 9, 2009, Porter’s independent debut self-titled album was released.

The Voice (2016)
On February 29, 2016, she became a contestant on season 10 of The Voice, singing “Blue Bayou” for her blind audition. All four coaches – Christina Aguilera, Adam Levine, Blake Shelton, and Pharrell Williams – turned their chairs for her, and she chose Aguilera as her coach. In the battle rounds, Porter was paired against Lacy Mandigo, performing a rock arrangement of “California Dreamin’” by The Mamas & the Papas. In the knockout rounds, she beat Daniel Passino. Porter’s distinctive vocal style proved popular with voting audiences throughout the season, and on May 24, 2016, she was named the winner of season 10.

https://en.wikipedia.org/wiki/Alisan_Porter


What The Amish Can Teach Us About Modern Medicine

by Sara Talpos

When healthcare is expensive, the Amish culture of autonomy and thrift may offer a way to balance communal support and individual responsibility. Sara Talpos finds out more.


The Allegheny Plateau, sprawling across northern Pennsylvania and beyond, is an ecosystem of forested hills, with land that supports black bears, bald eagles and wandering turkeys, as well as a patchwork of wild herbs: burdock, jewelweed, chamomile and sheep sorrel. Cellphone reception is spotty and gas stations are few and far between. Tucked away among the streams branching from the Cowanesque river is a cluster of small white and tan buildings, including the office of John Keim, an Amish elder and community healer.

In the 1980s, Keim’s young son was scalded by a pot of boiling water, burning off his skin from collarbone to waist. Hospital care was out of the question. Previously, two of Keim’s cousins had been burned in a fire and spent three months in an Indiana hospital. Every week, relatives had sent letters describing how the children screamed as their wounds were cleaned and their bandages changed. Reflecting on that, Keim says, “I just felt it was so inhumane. I would not ever take a child to a burn unit.” He wanted to be autonomous of what he viewed as a brutal system.

Keim and his wife treated their son at home. Initially, they applied a salve of herbs and wrapped the wounds with gauze, but the gauze sank into the boy’s flesh. They needed a dressing that wouldn’t stick.

In his book Comfort for the Burned and Wounded Keim writes, “I thought of how God created the Earth. I honestly felt He kept the poor in mind while Earth was being created.” He tried to think of things in nature that might help a poor person treat burns. Hitting upon waxy plantain leaves, he gathered a hatful from a nearby field, scalded them so they would be pliable, and used them to wrap his son’s wounds with a layer of herbal salve. Within five days, new skin covered the boy’s body. He had survived.

Photo: Flickr / Shinya Suzuki

When you think of the Amish, you don’t necessarily think solar panels, but here they are — six of them — on the roof of a horse barn in Holmes County, Ohio, home to the world’s largest Amish settlement. The barn, and the office above it, belong to Marvin Wengerd, who is Amish and serves as a liaison between his community and their non-Amish healthcare providers.

“If you ask the average Amishman on the street, ‘Why don’t you have electricity?’” says Wengerd, “he would say something like, ‘It connects me to the larger world and makes me dependent on the larger world in ways that I find troubling.’” Many further object to television and the internet because they promote vanity and sexual impurities, rather than Biblical values. For his part, Wengerd uses electricity in a limited capacity — for example, to power his office lights and phone. But thanks to the solar panels, which feed a battery, he’s off the grid, not dependent on the government or the oil industry for power.

The Amish and other groups such as Old Order Mennonites refer to themselves as “Plain” because they choose to live a modest lifestyle centred on their faith and separated from the rest of the world. There is some diversity between Plain groups, as each community creates its own rules for everything from clothing to technology use. In general, though, Plain people complete formal education in eighth grade (aged 14), use horse and carriage for daily travel, reject mains electricity, and interact with outsiders in a limited capacity. In most Plain communities, individual families and businesses sell furniture, produce or handmade quilts to the wider population, whom they turn to for services such as banking and emergency taxi rides.

The biggest and most complicated cultural intersection is the modern healthcare system. Plain people often advocate for more freedom in deciding when to go to a hospital, how to get there, and what interventions will be used. In short, they want greater autonomy.

“Patient autonomy” is a relatively new concept in Western medicine, and its significance depends on your perspective. On the one hand, patients report feeling lost in the system, stripped down to a gown and underwear and pressured to follow doctors’ orders. On the other hand, doctors can face demands for unwarranted treatments. With their unique cultural traditions, Plain communities might point the way towards a better concept of autonomy, one that balances patient choice with patient responsibility. One that we might all learn from.

Photo: Flickr / Shinya Suzuki

For nearly two-and-a-half millennia, the doctor–patient relationship in Western medicine was defined by doctors’ ethical obligation to act on behalf of their patients. The Hippocratic tradition established what came to be called the “beneficence model,” in which doctors are expected to seek to prevent and treat injury and illness while “doing no harm” to their patients. This tradition provides the ethical basis for everything from prescribing vaccinations to advising patients to wear a helmet while riding a motorcycle.

Following World War II, Western medicine began to shift toward an “autonomy model” of care. In 1966, the New England Journal of Medicine published an article outlining nearly two dozen instances of experiments that had been conducted on humans without their informed consent. This was followed by news of the Tuskegee Syphilis Study, a 40-year research project conducted by the US Public Health Service, in which treatment was withheld from poor African-American men with syphilis. In the 1970s, advances in medical technology also raised a host of new ethical questions. Increasingly, the public wanted a say in matters that were once the purview of doctors and researchers alone.

In 1979, a federal commission released the influential Belmont Report, which put forth three foundational principles for experimentation on human subjects. These were incorporated into subsequent guidelines for clinical practice: autonomy (including respect for the individual’s right to make informed choices), beneficence, and justice (the fair treatment of all). Notably, the Belmont Report did not specify how these principles should be weighed and prioritised against one another.

If a patient wants to decline standard care or use an untested remedy, should a doctor grant this autonomy? And in the case of sick or injured children, who gets to decide: parents or health professionals?

For Plain communities, autonomy in healthcare — and in life more broadly — is deeply tied to personal responsibility. This is perhaps best exemplified by their choice not to have insurance. Rather, when someone gets sick, the church collects alms to help the patient cover expenses. Marvin Wengerd estimates that, collectively, the 30,000 Amish in Holmes County spend $20–30 million a year on healthcare.

“Personal responsibility is still huge among us,” he says, adding that Plain people “think there’s a lot of harm in divorcing the cost from the patient”. He describes communities in which individuals are beholden to their brothers and sisters in the church to make wise healthcare decisions that don’t cost the community more money than necessary. As a result, Plain communities are highly interested in health education and disease prevention.

Photo: Flickr / Ralf Peter Reimann

“Welcome to the clinic,” says Susan Jones, a “twice-retired” nurse with short blond hair and cobalt-blue glasses, who has worked with a community of Old Order Mennonites in southern Kentucky for 20 years. Our van has stopped at the top of a dirt trail adjacent to an unadorned two-story home with grey siding. Just a few feet from us, a horse stands idly, hitched to a black carriage. This particular group is conservative even by Plain standards, and before my visit Jones gently instructed me to leave my voice recorder in the vehicle.

Health Promotion Day, the reason I’m visiting, includes a one-hour talk on a topic chosen by the Mennonites. Today’s theme: heart arrhythmias. Several health professionals are in attendance, including Steven House, a doctor who treats Plain patients in his primary care clinic in rural Glasgow, Kentucky. They, and roughly a dozen Mennonites, sit in the living room, listening intently while a medical student describes the intricacies of heart anatomy.

Since 2001, Health Promotion Day has been held once a month in the home of a local Mennonite family. The community actively shapes the program by deciding what kind of information and services they want. The aim is to improve the community’s health by providing a one-hour educational session, followed by a primary care clinic where people can receive tests including ear exams and blood pressure readings that might determine whether they need to visit a hospital. Following the talk, House and the medical student field questions. “What percentage of people have a skipped or delayed heartbeat?” asks one Mennonite woman seated on a chair by the home’s wood-burning stove. The second question addresses blood clots and fibrillation. Before long, my notes are a muddle: defibrillators, warfarin, hawthorn berry (which the Mennonites use to regulate heart rate), and pacemakers. I’m lost, but the Mennonites press on. Among the final questions is, “Where is the line when you know you need to see a doctor?”

In a long navy dress and a white bonnet, a Mennonite mother sits on the bed in a small room off the kitchen, describing her family’s encounters with the healthcare system. She describes how once she visited a gastroenterologist seeking a diagnosis, but not treatment. Depending on the case, the community might prefer to spend its money on a farm for a young married couple, rather than on medication or testing, she explains. “We give doctors headaches,” she says, apologetically. “I feel compassion for them.”

House says that non-Plain Americans “are finally figuring out that in our healthcare system resources are finite and everything costs somebody something”. Plain communities, he says, understand that because they pay for their care. In his experience, autonomy to the general American public means, “I get whatever interventions I want or need, and I get however much I want or need, regardless of the cost.” Plain communities on the other hand “are very independent, which is part of their autonomy”. They want to know how diseases develop and what they can do themselves to prevent a disease or its progression.

“They’re like dream diabetic patients,” says House, “because they want to do whatever they can” – whether it’s eating better or exercising more – to improve their condition and lessen their reliance upon medication.

Photo: Flickr / Alonso Javier Torres

After successfully treating his son’s burns, John Keim wanted to help his people. He went on to refine his therapy, eventually creating his own honey-based ointment called Burns and Wounds (B&W), which incorporates plant-based ingredients such as wheat germ oil, aloe vera and myrrh. He settled on wild burdock leaves as his preferred dressing, observing that they help relieve pain.

As word spread, Keim went on to care for hundreds of burn victims over the course of 25 years, eventually training other Plain people so they could work within their communities.

Today, Amish stores sell four-ounce jars of B&W for $7, and community healers collect and store boxes of dried burdock leaves. For non-Plain people accustomed to high medical bills, this low-cost approach to burn care may come as a revelation.

But health professionals have looked askance at this do-it-yourself approach, arguing, for example, that scalding the burdock leaves doesn’t fully sterilise them, theoretically putting the patient at risk of infection. Further, they maintain, in some cases skin grafting is absolutely necessary to save a patient’s life. When Plain families started coming to hospitals requesting treatment for dehydration and shock yet refusing skin grafting, conflict arose.

“There were five doctors who promised I would be behind bars,” says Keim. Roughly 15 years ago, he says, private detectives came to his home to talk with him and “it got into the prosecutor’s office”. Ultimately, the prosecutor decided not to make a case against him.

It wasn’t the first time that Plain communities had come under legal scrutiny. Over the years, some Amish parents have been challenged over the care of their children and even faced criminal charges for their choices. In some of these cases, the medical system has been wrong. In 2013, for example, an Amish family decided to halt their daughter’s chemotherapy, which they believed was killing her. Hospital doctors believed the girl would die without the treatment, so the hospital went to court. When the parents lost their power to make decisions about their daughter’s care, the family fled to Mexico. Two years later, they were all back in Ohio, where the daughter appeared active and healthy, according to a judge who visited the family farm.

Recently, a two-year-old boy was treated with B&W and died at home. His parents received probation after pleading no contest to charges of child endangerment. Wengerd, who was familiar with this case from newspaper reports, suggests that the parents — who had left the Amish and worked without the support of Amish burn dressers — likely didn’t recognize that the situation was “over their head.”

Wengerd and Keim both know that Plain people, like all people, are fallible. This is why they want to coordinate with hospitals. “We don’t want a casualty that puts B&W into a bad light just because we’re ignorant,” says Wengerd. “That’s one of the prime reasons for Pomerene [the local hospital] and their involvement. We need that medical oversight. We’re not opposed to them.”

Keim even acknowledges a role for skin grafting within the B&W protocol, saying, “I would be so happy if we could get together and discuss this. I know, when you’re highly educated, it’s hard to step down. I know pride has something to do with it. And, of course, finances also. That’s a block we’re not able to remove and we’ll have to deal with it.”

Photo: Flickr / Shinya Suzuki

“A lot of folks think genetic testing is very expensive and can’t be done,” says Erik Puffenberger. “We’ve shown just the opposite.” He’s the lab director of the Clinic for Special Children in Pennsylvania. In a 2012 report in a scientific journal, Puffenberger and colleagues estimated that the pioneering genetics work at the clinic saves local Plain communities $20–25 million a year in medical costs.

The clinic was established as a non-profit in 1989 by Caroline and Holmes Morton. Holmes had graduated from Harvard Medical School and then completed a fellowship at the Children’s Hospital of Philadelphia, where he had helped identify 16 Amish children with a genetic disorder known as GA1, short for glutaric aciduria type 1 (one of the metabolic diseases tested for in newborns using a heel prick).

At the time, GA1 was thought to be extremely rare; however, thanks to Holmes’s work, we now know that while only 1 in 40,000 people in the general Caucasian population has it, it affects 1 in 400 Amish people. Holmes also soon learned that the Mennonite community had high rates of a different genetic disorder, maple syrup urine disease (MSUD, named after the sweet-smelling urine of affected people).

Because Plain communities descend from relatively small founder populations, they experience high rates of certain diseases not often seen in the wider population. (Conversely, certain diseases that are present in the wider population are virtually nonexistent in Plain communities.)

Against the advice of colleagues and mentors, Holmes and Caroline (whose background was in educational administration) decided to move to Lancaster County, Pennsylvania — home to the world’s oldest Amish settlement — and start a clinic devoted to diagnosing and treating Plain patients with genetic disorders. Holmes insisted on having an on-site lab, where patients could be tested quickly and affordably.

Babies with GA1 and MSUD are unable to break down certain amino acids, the building blocks of proteins. If these amino acids and their by-products build up in the body they can prove fatal. In the past, babies and children with GA1 and MSUD would become sick, and many died. Along the way, Plain communities incurred incredible hospital expenses. Now, thanks to early genetic testing harnessed by the clinic, babies can be screened at birth for the genes that cause these disorders. Once identified, they’re fed a special baby formula that restricts particular amino acids. As these babies develop into children and adults, they must follow a special diet, which allows them to remain healthy.

The clinic’s average patient bill is just $140, and often includes genetic testing that would cost Plain families hundreds if not thousands of dollars elsewhere. This is made possible, in part, by private donations and collaborative projects connecting the clinic with nearby hospitals and universities. Perhaps most surprising is that over a third of the clinic’s yearly $2.8 million operating budget comes from benefit auctions organised and supplied by Plain communities, where everything from quilts to wooden clocks to buggies complete with LED lights is sold.

The clinic itself is located in a field on a piece of land donated by an Amish farmer. The structure was built by Plain people in the traditional way: by hand, using hooks and pulleys. This pine and timber structure houses advanced genetics equipment. It’s a unique mix of old and new, low-tech and high-tech, Plain and non-Plain.

With their big families, good genealogical records, and small founder populations, Plain communities are ideal subjects for identifying genetic variants for common diseases. Researchers at the clinic discover 10–15 new disease-causing variants each year, and they expect this rate to increase. One of their recent discoveries is a rare variant that’s strongly associated with bipolar disorder. Says Puffenberger: “What’s really important here is if you find one gene, then you learn a pathway, and you know that gene interacts with 10 other things, so those other 10 genes also become potential targets” for therapy.

Despite the clinic’s success, there hasn’t been the same degree of uptake of its methods in non-Plain healthcare. “It’s actually a hard sell to the medical–industrial complex in this country that we should be investing all our effort in preventive technology,” says the clinic’s medical director, Kevin Strauss. But he believes that the US healthcare system can’t afford not to put genomic medicine to work in a preventive, cost-effective way.

The clinic has estimated that its costs per outpatient are about a tenth of those for government-backed Medicare and Medicaid (which cover adults as well as children). This is achieved through an innovative medical model that prioritizes affordability, prevention and research designed to close the implementation gap — what clinic professionals describe as the gap between the “avalanche” of data acquired through projects like the Human Genome Project and the many patients who have yet to benefit from that data.

Photo: Flickr / Robin Monks

Despite their focus on prevention and use of community healers, Plain patients do spend large sums on healthcare. The Mennonite woman I met at Health Promotion Day told me that her ten-year-old daughter was recently treated for appendicitis with complications. The community paid just under $10,000, which she described as “fair.” I met another family nearby with a young child who was recently diagnosed and treated for colorectal cancer. The girl spent 15 days in the hospital. The hospital bill alone was $19,000, negotiated down from an original $172,000. The child’s mother praised God for the discount.

Plain communities often negotiate discounts, which hospitals are willing to offer in exchange for payment in full at the time of service. “I will tell you, they are very conscientious about cost. They are very business-savvy and will shop around,” says Eric Hagan, the administrator for the Medical Center at Scottsville, Kentucky. Hagan and Susan Jones have worked to strengthen the hospital’s relationship with the local Mennonites, offering, among other things, a prompt-pay discount.

For Americans with health insurance, it may come as a surprise that hospital costs are negotiable. Indeed, pricing is so murky that most of us don’t know the actual cost of our care. Prompt-pay discounts are rarely advertised, but according to Plain people, they’re quite common. One rural Kentucky hospital offers a 25% discount. In Holmes County, Ohio, Pomerene Hospital offers package deals for self-pay patients. Anyone – Plain or non-Plain – can contact the hospital’s Amish advocate for details.

“We negotiate our bills because we have to fight the cost,” says Wengerd. He and others in the Plain community worry that healthcare prices will escalate so dramatically that they will be forced to abandon their self-pay tradition and instead rely on Medicaid or Obamacare.

In all their talk of personal responsibility, there’s a distinct echo of Republican rhetoric. The Amish don’t vote, says Wengerd, who describes himself as “politically illiterate”. But, he says, “If we voted, we would be Republican.” Because of their faith, Plain people are against abortion and, often, against contraception. They don’t believe in evolution. Men and women are expected to adhere to traditional gender roles. Wengerd recalls that during the 2004 presidential campaign, George W Bush met with Amish from Pennsylvania and Ohio, the two states with the largest Amish populations. He says Bush explained that they were living in swing states and that they could, he paraphrases, “save the nation from the strength of the liberal Democrats who would ruin it”. As a result, some Amish voted for the first and only time in their lives.

But some Plain beliefs differ markedly from those of conservative Republicans. Because of their faith, Plain people believe in “non-resistance,” which is why they don’t support war or bear arms. And in some of their practices — buying and building property for young couples, pooling resources to cover health expenses — an outsider might even call their approach to communal living socialist. After all, no Plain community would expect a family whose child had cancer to face that burden alone.

Long before Obamacare, Plain communities achieved what the rest of America had not: universal healthcare coverage.

Photo: Arnie Papp

Coming from an ethic of thriftiness, many Plain people distrust the motives of hospital administrators and even doctors themselves. They believe a profit motive can influence courses of treatment. They are also keenly attuned to unnecessary expenditures within the system. (One Plain woman I spoke with questioned the need for fancy carpets at a nearby clinic.)

“In the Amish world, healthcare is seen as a ministry,” says Wengerd, “which is exactly what healthcare in the [non-Plain] world used to be.” Remember apprenticeships and house calls? The doctor used to be viewed like a minister who sacrificed his life for the patient, but there has been a shift. “The patient now sacrifices his livelihood for the doctor’s wellbeing.”

And yet, increasingly, hospitals have been allowing Plain burn teams to treat their own patients with the B&W burns treatment. They are motivated partly by a desire to reach out to Plain communities so they don’t forgo hospital care. But they are also motivated by results. “We were intrigued by the outcomes,” says Hagan, whose hospital has allowed local Mennonites to use B&W there for about five years.

Pomerene Hospital also allows B&W, having first run a small five-person study to document the healing process. Their findings lent support to what Plain communities had been sharing anecdotally: in patients with first- or second-degree burns, the burdock leaf dressing changes caused little to no pain; none of the burns became infected; and healing time averaged less than 14 days. More recently, the University of Michigan laid the groundwork for a study of how safe and effective B&W is, though results are not expected for several years.

Pomerene does not have a burn unit, so patients with severe burns are transferred to larger centers. Staff at some of these centers have come into conflict with Plain patients and their caregivers, but others have been willing to work with them. For instance, Holmes County patients currently seek care from Anjay Khandelwal, co-director of the MetroHealth Comprehensive Burn Center in Cleveland, Ohio. The center doesn’t allow patients to use B&W in the hospital because it’s “not an approved drug on formulary,” but it will release a patient to the care of Plain burn teams once stabilized.

Khandelwal and colleagues travelled to Holmes County to meet with Amish elders, including Wengerd, who spent several years as a volunteer burn dresser and worked with Pomerene Hospital on its B&W study.

It was here that Khandelwal learned that Plain people don’t sue. When the Amish told him they understand doctors are human and make mistakes, he had to pause to let that sink in. To them, he was not simply a member of the medical establishment, but an autonomous individual doing his best, given the choices and information before him. Khandelwal was profoundly moved: “No one says that to us. No one accepts that.”

Lawsuits aside, allowing B&W to be used can be emotionally difficult for healthcare professionals who have been trained to save lives at all costs. Steven A Kahn, a burns specialist at the University of South Alabama, co-authored a 2013 case report, published in the journal Burns, describing the following encounter:

A 25-year-old Amish man was brought to the hospital after gasoline vapors combusted during a farming accident. The man’s clothing ignited, causing third-degree burns across much of his body. With surgery, his chances of survival were estimated to be 50%. Without surgery, zero. The man’s family insisted he would only want B&W for treatment, though if he were to go into cardiac arrest, he would accept CPR. An ethics consultant determined that the family had provided ample evidence to support their claims. So the hospital team consented to B&W only, and the man died 38 hours after his injury.

“When we have the tools to make someone well but are unable to use them for reasons beyond our control,” says Kahn, “it can make us feel ‘helpless’” — a word used by one of the burn nurses on his team. Still, he believes they made the right choice in allowing the family to be the patient’s voice.

Back in Holmes County, Marvin Wengerd talks about the future of Amish healthcare: “I don’t want to push the medical world beyond their comfort zone,” he says. “We’re not asking them to understand our religious beliefs, but we’re asking for intelligent compromise that says their way of looking at it is not the only way of looking at it.

“We have our own set of values and worldviews that are distinct and just as valid. We don’t always win our cases, but enough of them to make it worth the work.”

This article was first published on mosaicscience.com. It is republished here under a Creative Commons license.

MEDICAL POT FOR MILITARY PASSES CONGRESS

http://www.marijuana.com/blog/news/2016/05/congress-oks-medical-marijuana-for-military-veterans/

 

Congress OKs Medical Marijuana for Military Veterans

 

The U.S. Senate and House of Representatives both took action to increase military veterans’ access to medical marijuana on Thursday.

By a vote of 89-8, senators approved a bill containing language preventing the Department of Veterans Affairs (V.A.) from spending money to enforce a current policy that prohibits its government doctors from filling out medical marijuana recommendation forms in states where the drug is legal.

The House approved an amendment to accomplish the same goal by a vote of 233-189 earlier in the day.

“We are pleased that both the House and Senate have made it clear that the Veterans Administration should not punish doctors for recommending medical cannabis to their veteran patients,” Mike Liszewski, government affairs director for Americans for Safe Access (ASA), told Marijuana.com. “Combat veterans are disproportionately affected by several conditions that medical cannabis can effectively treat, including chronic pain, PTSD and traumatic brain injury. We anticipate this amendment will reach the president, and once signed, it will give V.A. physicians another tool in their toolbox to treat the healthcare needs of America’s veterans.”

The provisions are now part of larger bills to fund the V.A. and other government agencies through next year. The medical cannabis language was attached to the Senate legislation last month in a bipartisan 20-10 vote in the Appropriations Committee, and did not require a separate vote on the floor.

Last year the Senate approved the Fiscal Year 2016 version of the V.A. spending bill, with similar medical cannabis protections for veterans attached, but the House narrowly defeated a move to add the amendment to its version of the legislation by a vote of 213-210. As a result, the provision was not included in the final omnibus appropriations package signed into law by President Obama in December.

Since then, momentum on medical cannabis and broader marijuana law reform issues has continued to increase. Last month, for example, Pennsylvania became the 24th U.S. state with a comprehensive medical marijuana program. This month the Ohio House of Representatives approved medical cannabis and, on Thursday, Louisiana Gov. John Bel Edwards (D) signed a medical marijuana bill into law.

U.S. House and Senate negotiators will meet in a conference committee to iron out the discrepancies in funding levels and other differing provisions between each chamber’s version of the spending legislation. But since both now include medical marijuana protections for veterans, it is likely that they will make it into the final package sent to President Obama for enactment into law.

“I commend my colleagues for showing compassion and supporting our wounded warriors,” Rep. Earl Blumenauer (D-OR), who sponsored the amendment on the House floor, said in a press release. “Today’s vote is a win for these men and women who have done so much for us and deserve equal treatment in being able to consult with, and seek a recommendation from, their personal V.A. physician about medical marijuana.”

The V.A. policy disallowing its doctors from recommending medical marijuana in states where it is legal actually expired on January 31 but, under the department’s procedures, the ban technically remains in effect until a new policy is enacted.

Advocates expect a new policy soon, but aren’t sure what it will say. In February 2015, a top V.A. official testified before a House committee that the department is undertaking “active discussions” about how to address the growing number of veterans who are seeking cannabis treatments.

The language of the House and Senate veterans’ medical marijuana protections differs somewhat.

The Senate bill reads:

None of the funds appropriated or otherwise made available to the Department of Veterans Affairs in this Act may be used in a manner that would—

(1) interfere with the ability of a veteran to participate in a medicinal marijuana program approved by a State;

(2) deny any services from the Department to a veteran who is participating in such a program; or

(3) limit or interfere with the ability of a health care provider of the Department to make appropriate recommendations, fill out forms, or take steps to comply with such a program.

Whereas the House bill says:

None of the funds made available by this Act may be used to implement, administer, or enforce Veterans Health Administration directive 2011-004 (or directive of the same substance) with respect to the prohibition on “VA providers from completing forms seeking recommendations or opinions regarding a Veteran’s participation in a State marijuana program”.

While the Senate language seems more all-encompassing, advocates believe that whichever approach is included in the final enacted legislation will be sufficient to give veterans greatly expanded access to medical cannabis.

“The Senate language clearly states that interference is forbidden conduct, while the House language prohibits the V.A. from adopting or enforcing rules to interfere,” Liszewski, of ASA, said in an interview. “Once signed by the president, we look forward to working with the Veterans Administration to make sure that V.A. physicians are aware of their ability to recommend medical cannabis to veterans who could benefit from it.”

A trio of Democratic senators submitted an additional amendment this week intended to spur medical cannabis research by the V.A., but it did not receive a vote on the Senate floor.

Attention now turns to separate Congressional appropriations bills that fund other parts of the government, most importantly legislation that covers the Justice Department. For the past two years, advocates have succeeded in attaching language to the bill preventing the Drug Enforcement Administration and other Justice Department agencies from spending money to interfere with state medical marijuana and industrial hemp laws. A move to broaden the protections to cover all state marijuana laws, including full legalization, narrowly failed on the House floor last year but could now have enough increased support to pass if voted on again.

Advocates will also continue pushing to add protections for banks that work with legal marijuana businesses to legislation covering the Treasury Department. And, there could be votes concerning the District of Columbia’s ability to spend its own money taxing and regulating marijuana sales.

“This is an historic moment and further proof there is real movement and bipartisan support in reforming outdated federal marijuana policies,” said Blumenauer, of the victory for veterans. “There is more to be done, and I will build on today’s momentum and continue my efforts in catching federal policy up to reflect the views held by a majority of Americans.”

Photo Courtesy of Paket.


There’s No Such Thing as Free Will

http://www.theatlantic.com/magazine/archive/2016/06/theres-no-such-thing-as-free-will/480750/


by Stephen Cave


But we’re better off believing in it anyway.

For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will—and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty”—the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness.

Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream—the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.”

 

So what happens if this faith erodes?

The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties—which some people have to a greater degree than others—to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance.

Galton launched a debate that raged throughout the 20th century over nature versus nurture. Are our actions the unfolding effect of our genetics? Or the outcome of what has been imprinted on us by the environment? Impressive evidence accumulated for the importance of each factor. Whether scientists supported one, the other, or a mix of both, they increasingly assumed that our deeds must be determined by something.

In recent decades, research on the inner workings of the brain has helped to resolve the nature-nurture debate—and has dealt a further blow to the idea of free will. Brain scanners have enabled us to peer inside a living person’s skull, revealing intricate networks of neurons and allowing scientists to reach broad agreement that these networks are shaped by both genes and environment. But there is also agreement in the scientific community that the firing of neurons determines not just some or most but all of our thoughts, hopes, memories, and dreams.

We know that changes to brain chemistry can alter behavior—otherwise neither alcohol nor antipsychotics would have their desired effects. The same holds true for brain structure: Cases of ordinary adults becoming murderers or pedophiles after developing a brain tumor demonstrate how dependent we are on the physical properties of our gray stuff.

 

Many scientists say that the American physiologist Benjamin Libet demonstrated in the 1980s that we have no free will. It was already known that electrical activity builds up in a person’s brain before she, for example, moves her hand; Libet showed that this buildup occurs before the person consciously makes a decision to move. The conscious experience of deciding to act, which we usually associate with free will, appears to be an add-on, a post hoc reconstruction of events that occurs after the brain has already set the act in motion.

The 20th-century nature-nurture debate prepared us to think of ourselves as shaped by influences beyond our control. But it left some room, at least in the popular imagination, for the possibility that we could overcome our circumstances or our genes to become the author of our own destiny. The challenge posed by neuroscience is more radical: It describes the brain as a physical system like any other, and suggests that we no more will it to operate in a particular way than we will our heart to beat. The contemporary scientific image of human behavior is one of neurons firing, causing other neurons to fire, causing our thoughts and deeds, in an unbroken chain that stretches back to our birth and beyond. In principle, we are therefore completely predictable. If we could understand any individual’s brain architecture and chemistry well enough, we could, in theory, predict that individual’s response to any given stimulus with 100 percent accuracy.

This research and its implications are not new. What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. The number of court cases, for example, that use evidence from neuroscience has more than doubled in the past decade—mostly in the context of defendants arguing that their brain made them do it. And many people are absorbing this message in other contexts, too, at least judging by the number of books and articles purporting to explain “your brain on” everything from music to magic. Determinism, to one degree or another, is gaining popular currency. The skeptics are in ascendance.

 

This development raises uncomfortable—and increasingly nontheoretical—questions: If moral responsibility depends on faith in our own agency, then as belief in determinism spreads, will we become morally irresponsible? And if we increasingly see belief in free will as a delusion, what will happen to all those institutions that are based on it?

In 2002, two psychologists had a simple but brilliant idea: Instead of speculating about what might happen if people lost belief in their capacity to choose, they could run an experiment to find out. Kathleen Vohs, then at the University of Utah, and Jonathan Schooler, of the University of Pittsburgh, asked one group of participants to read a passage arguing that free will was an illusion, and another group to read a passage that was neutral on the topic. Then they subjected the members of each group to a variety of temptations and observed their behavior. Would differences in abstract philosophical beliefs influence people’s decisions?

Yes, indeed. When asked to take a math test, with cheating made easy, the group primed to see free will as illusory proved more likely to take an illicit peek at the answers. When given an opportunity to steal—to take more money than they were due from an envelope of $1 coins—those whose belief in free will had been undermined pilfered more. On a range of measures, Vohs told me, she and Schooler found that “people who are induced to believe less in free will are more likely to behave immorally.”

It seems that when people stop believing they are free agents, they stop seeing themselves as blameworthy for their actions. Consequently, they act less responsibly and give in to their baser instincts. Vohs emphasized that this result is not limited to the contrived conditions of a lab experiment. “You see the same effects with people who naturally believe more or less in free will,” she said.

Illustration: Edmon de Haro

In another study, for instance, Vohs and colleagues measured the extent to which a group of day laborers believed in free will, then examined their performance on the job by looking at their supervisor’s ratings. Those who believed more strongly that they were in control of their own actions showed up on time for work more frequently and were rated by supervisors as more capable. In fact, belief in free will turned out to be a better predictor of job performance than established measures such as self-professed work ethic.

 

Another pioneer of research into the psychology of free will, Roy Baumeister of Florida State University, has extended these findings. For example, he and colleagues found that students with a weaker belief in free will were less likely to volunteer their time to help a classmate than were those whose belief in free will was stronger. Likewise, those primed to hold a deterministic view by reading statements like “Science has demonstrated that free will is an illusion” were less likely to give money to a homeless person or lend someone a cellphone.

Further studies by Baumeister and colleagues have linked a diminished belief in free will to stress, unhappiness, and a lesser commitment to relationships. They found that when subjects were induced to believe that “all human actions follow from prior events and ultimately can be understood in terms of the movement of molecules,” those subjects came away with a lower sense of life’s meaningfulness. Early this year, other researchers published a study showing that a weaker belief in free will correlates with poor academic performance.

The list goes on: Believing that free will is an illusion has been shown to make people less creative, more likely to conform, less willing to learn from their mistakes, and less grateful toward one another. In every regard, it seems, when we embrace determinism, we indulge our dark side.

Few scholars are comfortable suggesting that people ought to believe an outright lie. Advocating the perpetuation of untruths would breach their integrity and violate a principle that philosophers have long held dear: the Platonic hope that the true and the good go hand in hand. Saul Smilansky, a philosophy professor at the University of Haifa, in Israel, has wrestled with this dilemma throughout his career and come to a painful conclusion: “We cannot afford for people to internalize the truth” about free will.

Smilansky is convinced that free will does not exist in the traditional sense—and that it would be very bad if most people realized this. “Imagine,” he told me, “that I’m deliberating whether to do my duty, such as to parachute into enemy territory, or something more mundane like to risk my job by reporting on some wrongdoing. If everyone accepts that there is no free will, then I’ll know that people will say, ‘Whatever he did, he had no choice—we can’t blame him.’ So I know I’m not going to be condemned for taking the selfish option.” This, he believes, is very dangerous for society, and “the more people accept the determinist picture, the worse things will get.”

 

Determinism not only undermines blame, Smilansky argues; it also undermines praise. Imagine I do risk my life by jumping into enemy territory to perform a daring mission. Afterward, people will say that I had no choice, that my feats were merely, in Smilansky’s phrase, “an unfolding of the given,” and therefore hardly praiseworthy. And just as undermining blame would remove an obstacle to acting wickedly, so undermining praise would remove an incentive to do good. Our heroes would seem less inspiring, he argues, our achievements less noteworthy, and soon we would sink into decadence and despondency.

Smilansky advocates a view he calls illusionism—the belief that free will is indeed an illusion, but one that society must defend. The idea of determinism, and the facts supporting it, must be kept confined within the ivory tower. Only the initiated, behind those walls, should dare to, as he put it to me, “look the dark truth in the face.” Smilansky says he realizes that there is something drastic, even terrible, about this idea—but if the choice is between the true and the good, then for the sake of society, the true must go.


Smilansky’s arguments may sound odd at first, given his contention that the world is devoid of free will: If we are not really deciding anything, who cares what information is let loose? But new information, of course, is a sensory input like any other; it can change our behavior, even if we are not the conscious agents of that change. In the language of cause and effect, a belief in free will may not inspire us to make the best of ourselves, but it does stimulate us to do so.

Illusionism is a minority position among academic philosophers, most of whom still hope that the good and the true can be reconciled. But it represents an ancient strand of thought among intellectual elites. Nietzsche called free will “a theologians’ artifice” that permits us to “judge and punish.” And many thinkers have believed, as Smilansky does, that institutions of judgment and punishment are necessary if we are to avoid a fall into barbarism.

 

Smilansky is not advocating policies of Orwellian thought control. Luckily, he argues, we don’t need them. Belief in free will comes naturally to us. Scientists and commentators merely need to exercise some self-restraint, instead of gleefully disabusing people of the illusions that undergird all they hold dear. Most scientists “don’t realize what effect these ideas can have,” Smilansky told me. “Promoting determinism is complacent and dangerous.”

Yet not all scholars who argue publicly against free will are blind to the social and psychological consequences. Some simply don’t agree that these consequences might include the collapse of civilization. One of the most prominent is the neuroscientist and writer Sam Harris, who, in his 2012 book, Free Will, set out to bring down the fantasy of conscious choice. Like Smilansky, he believes that there is no such thing as free will. But Harris thinks we are better off without the whole notion of it.

“We need our beliefs to track what is true,” Harris told me. Illusions, no matter how well intentioned, will always hold us back. For example, we currently use the threat of imprisonment as a crude tool to persuade people not to do bad things. But if we instead accept that “human behavior arises from neurophysiology,” he argued, then we can better understand what is really causing people to do bad things despite this threat of punishment—and how to stop them. “We need,” Harris told me, “to know what are the levers we can pull as a society to encourage people to be the best version of themselves they can be.”

According to Harris, we should acknowledge that even the worst criminals—murderous psychopaths, for example—are in a sense unlucky. “They didn’t pick their genes. They didn’t pick their parents. They didn’t make their brains, yet their brains are the source of their intentions and actions.” In a deep sense, their crimes are not their fault. Recognizing this, we can dispassionately consider how to manage offenders in order to rehabilitate them, protect society, and reduce future offending. Harris thinks that, in time, “it might be possible to cure something like psychopathy,” but only if we accept that the brain, and not some airy-fairy free will, is the source of the deviancy.

 

Accepting this would also free us from hatred. Holding people responsible for their actions might sound like a keystone of civilized life, but we pay a high price for it: Blaming people makes us angry and vengeful, and that clouds our judgment.

“Compare the response to Hurricane Katrina,” Harris suggested, with “the response to the 9/11 act of terrorism.” For many Americans, the men who hijacked those planes are the embodiment of criminals who freely choose to do evil. But if we give up our notion of free will, then their behavior must be viewed like any other natural phenomenon—and this, Harris believes, would make us much more rational in our response.

Although the scale of the two catastrophes was similar, the reactions were wildly different. Nobody was striving to exact revenge on tropical storms or declare a War on Weather, so responses to Katrina could simply focus on rebuilding and preventing future disasters. The response to 9/11, Harris argues, was clouded by outrage and the desire for vengeance, and has led to the unnecessary loss of countless more lives. Harris is not saying that we shouldn’t have reacted at all to 9/11, only that a coolheaded response would have looked very different and likely been much less wasteful. “Hatred is toxic,” he told me, “and can destabilize individual lives and whole societies. Losing belief in free will undercuts the rationale for ever hating anyone.”

Whereas the evidence from Kathleen Vohs and her colleagues suggests that social problems may arise from seeing our own actions as determined by forces beyond our control—weakening our morals, our motivation, and our sense of the meaningfulness of life—Harris thinks that social benefits will result from seeing other people’s behavior in the very same light. From that vantage point, the moral implications of determinism look very different, and quite a lot better.

What’s more, Harris argues, as ordinary people come to better understand how their brains work, many of the problems documented by Vohs and others will dissipate. Determinism, he writes in his book, does not mean “that conscious awareness and deliberative thinking serve no purpose.” Certain kinds of action require us to become conscious of a choice—to weigh arguments and appraise evidence. True, if we were put in exactly the same situation again, then 100 times out of 100 we would make the same decision, “just like rewinding a movie and playing it again.” But the act of deliberation—the wrestling with facts and emotions that we feel is essential to our nature—is nonetheless real.

 

The big problem, in Harris’s view, is that people often confuse determinism with fatalism. Determinism is the belief that our decisions are part of an unbreakable chain of cause and effect. Fatalism, on the other hand, is the belief that our decisions don’t really matter, because whatever is destined to happen will happen—like Oedipus’s marriage to his mother, despite his efforts to avoid that fate.


When people hear there is no free will, they wrongly become fatalistic; they think their efforts will make no difference. But this is a mistake. People are not moving toward an inevitable destiny; given a different stimulus (like a different idea about free will), they will behave differently and so have different lives. If people better understood these fine distinctions, Harris believes, the consequences of losing faith in free will would be much less negative than Vohs’s and Baumeister’s experiments suggest.

Can one go further still? Is there a way forward that preserves both the inspiring power of belief in free will and the compassionate understanding that comes with determinism?

Philosophers and theologians are used to talking about free will as if it is either on or off; as if our consciousness floats, like a ghost, entirely above the causal chain, or as if we roll through life like a rock down a hill. But there might be another way of looking at human agency.

Some scholars argue that we should think about freedom of choice in terms of our very real and sophisticated abilities to map out multiple potential responses to a particular situation. One of these is Bruce Waller, a philosophy professor at Youngstown State University. In his new book, Restorative Free Will, he writes that we should focus on our ability, in any given setting, to generate a wide range of options for ourselves, and to decide among them without external constraint.

For Waller, it simply doesn’t matter that these processes are underpinned by a causal chain of firing neurons. In his view, free will and determinism are not the opposites they are often taken to be; they simply describe our behavior at different levels.

 

Waller believes his account fits with a scientific understanding of how we evolved: Foraging animals—humans, but also mice, or bears, or crows—need to be able to generate options for themselves and make decisions in a complex and changing environment. Humans, with our massive brains, are much better at thinking up and weighing options than other animals are. Our range of options is much wider, and we are, in a meaningful way, freer as a result.

Waller’s definition of free will is in keeping with how a lot of ordinary people see it. One 2010 study found that people mostly thought of free will in terms of following their desires, free of coercion (such as someone holding a gun to your head). As long as we continue to believe in this kind of practical free will, that should be enough to preserve the sorts of ideals and ethical standards examined by Vohs and Baumeister.

Yet Waller’s account of free will still leads to a very different view of justice and responsibility than most people hold today. No one has caused himself: No one chose his genes or the environment into which he was born. Therefore no one bears ultimate responsibility for who he is and what he does. Waller told me he supported the sentiment of Barack Obama’s 2012 “You didn’t build that” speech, in which the president called attention to the external factors that help bring about success. He was also not surprised that it drew such a sharp reaction from those who want to believe that they were the sole architects of their achievements. But he argues that we must accept that life outcomes are determined by disparities in nature and nurture, “so we can take practical measures to remedy misfortune and help everyone to fulfill their potential.”

Understanding how to do that will be the work of decades, as we slowly unravel the nature of our own minds. In many areas, that work will likely yield more compassion: offering more (and more precise) help to those who find themselves in a bad place. And when the threat of punishment is necessary as a deterrent, it will in many cases be balanced with efforts to strengthen, rather than undermine, the capacities for autonomy that are essential for anyone to lead a decent life. The kind of will that leads to success—seeing positive options for oneself, making good decisions and sticking to them—can be cultivated, and those at the bottom of society are most in need of that cultivation.

To some people, this may sound like a gratuitous attempt to have one’s cake and eat it too. And in a way it is. It is an attempt to retain the best parts of the free-will belief system while ditching the worst. President Obama—who has both defended “a faith in free will” and argued that we are not the sole architects of our fortune—has had to learn what a fine line this is to tread. Yet it might be what we need to rescue the American dream—and indeed, many of our ideas about civilization, the world over—in the scientific age.

BROWN V. BOARD OF EDUCATION

 

A segregated school (photos of Liberty Hill and Summerton)

The Supreme Court ruled that school segregation violated the Fourteenth Amendment on this date in 1954 (5/17/1954).

An eight-year-old girl named Linda Brown in Topeka, Kansas, had to travel 21 blocks every day to an all-black elementary school, even though she lived just seven blocks from another elementary school for white children. Her father, Oliver Brown, asked that his daughter be allowed to attend the nearby white school, and when the white school’s principal refused, Brown sued. The court had five school segregation cases from different states on its docket, so the justices combined them under one name: Oliver Brown et al. v. the Board of Education of Topeka. The Supreme Court justices decided to list Brown’s case first because it originated in Kansas, and they didn’t want to give the impression that segregation was purely a Southern problem.

The legal basis for segregation came from the 1896 Supreme Court case Plessy v. Ferguson, which had established that separate facilities for black and white students were constitutional as long as those separate facilities were equal. When Brown v. Board of Education first came before the Supreme Court in 1952, most of the justices were personally opposed to segregation, but only four of them openly supported overturning such a long-established precedent. The tide shifted in September of 1953 when Chief Justice Fred M. Vinson died of a sudden heart attack, and President Eisenhower chose Earl Warren as the new chief justice. As governor of California, Earl Warren had overseen the internment of many Japanese Americans during World War II, and regretted it. Since the war, he had devoted himself to the cause of civil rights.

Warren’s vote alone made the decision 5 to 4 in favor of overturning segregation, but Warren wanted a unanimous decision for such a controversial case. Once he had all the votes, Warren announced the decision to a crowd at the court on this day in 1954. Justice Stanley Reed, a justice from Kentucky who had been the final holdout, wept as the decision was read.

Even though the nation’s highest court had weighed in, it took many more years and several more Supreme Court cases before most Southern schools were fully integrated, and de facto segregation still exists in some communities.


 

The Writer’s Almanac is produced by Prairie Home Productions and presented by American Public Media.

How should the Ancient One’s story end?

The epic saga of the skeleton known as the Ancient One (by Native Americans) and Kennewick Man (by scientists) has taken a wild turn.

 

The story’s beginning is now near myth: In 1996, human remains were accidentally discovered along the Columbia River, near Kennewick, Wash. Scientists determined the remains to be a 9,300-year-old man, calling the find “the most important human skeleton ever found in North America.”

A coalition of related tribes in the Northwest, however, insisted “The Ancient One” was an ancestor who should be returned for respectful reburial under federal law. When the U.S. Army Corps of Engineers accepted the tribes’ claim, eight scientists sued. A long court battle ended in 2004, when a federal judge ruled in favor of the scientists.

In 2014, this story appeared to have reached its end when 48 scholars published a 680-page tome on the skeleton. Significantly, the researchers suggested that Kennewick Man had no connections to any living populations. Based on the shape of his skull and skeletal structure, they deduced his closest living relatives are the Moriori, who live on an archipelago 420 miles southeast of New Zealand, and the Ainu of Japan.

These scientific conclusions seemed to be aligned with the court’s conclusion that Kennewick Man should not be considered “Native American.” The Army Corps had originally inferred that the 9,300-year-old individual was Native American because he predated European arrival, and so should be returned under the 1990 Native American Graves Protection and Repatriation Act.

The judge, however, focused on NAGPRA’s definition of “Native American”: “of, or relating to a tribe, people, or culture that is indigenous to the United States.” The use of the present tense “is” led the judge to conclude that Congress intentionally restricted NAGPRA to human remains that “bear some relationship to a presently existing tribe, people, or culture to be considered Native American.” Since he felt the remains did not bear a relationship to a presently existing tribe, they were not legally those of a “Native American” and thus not subject to NAGPRA.

But then last year, it turned out there were still a few more chapters in the Ancient One’s story. An article in the journal Nature showed that Kennewick Man is a genetic ancestor of today’s Native Americans. DNA provided a powerful line of evidence, affirming the oral traditions of the Umatilla, Yakima, Nez Perce, Wanapum, and Colville tribes, who claim a deep ancestry in the region. The geneticists even found that the tribes most closely related to the Ancient One include the Colville, situated just 200 miles from where the skeleton was unearthed.

The legal effect of these findings recently played out. On April 26, Sen. Patty Murray, a Democrat from Washington, pushed her short bill — Bring the Ancient One Home Act — through the Senate Environment and Public Works Committee. Then, two days later, the Army Corps announced that, with this new genetic evidence, the skeleton is in fact related to modern Native American tribes, which opens the door again to its return through NAGPRA.

It is right to honor the wishes of the Ancient One’s descendants and allow them to rebury their ancestor with respect and dignity. However, a law directed at one set of remains offers a small bandage for a deep hemorrhage; it is just a matter of time before another ancient skeleton is found. And then we will again suffer through years of unnecessary antagonism between Native Americans and archaeologists, who have far more to gain by working together than by fighting each other.

The best way to avoid future conflicts is to allow NAGPRA to do its work. This would require the smallest of amendments to the law — simply adding two words, “or was,” to NAGPRA’s definition of Native American. With the broad support of both scientific organizations and tribes, several times Congress has come close, but always failed, to making this necessary adjustment.

The story of the Ancient One appears close to a conclusion. But until Congress fixes NAGPRA, the repatriation wars will continue without end.


 

Chip Colwell is senior curator of anthropology at the Denver Museum of Nature & Science, and author of the upcoming “Plundered Skulls and Stolen Spirits: Inside the Fight to Reclaim Native America’s Culture” (University of Chicago Press).

Universal basic income may be next big thing

Paula Dwyer writes editorials on economics, finance and politics for Bloomberg View.

Now and then a worthy economic proposal comes along that seems as politically unattainable as it is sensible. Then, on closer inspection, you see that it’s more than a policy wonk’s fantasy. And you wonder whether it could actually prevail.

This may be happening with the concept of a universal basic income. The notion that government should guarantee every citizen an annual stipend of, say, $10,000 — no strings attached, no questions asked — is being studied by politicians, economists and policy experts worldwide.

Think of it as Social Security for all. In the social democracies of Europe, Canada and South America, experiments are planned or underway. In the U.S., it’s still little more than a concept — one that appears to have more conservative backers than liberal ones.

Bernie Sanders says he’s “sympathetic” to the theory behind a universal basic income but stops well short of advocating it. Hillary Clinton seems even less enthusiastic. By contrast, conservative economists, politicians and think-tank scholars are not as hesitant. Marco Rubio, for example, proposed the beginnings of a basic income in his 2015 tax plan.

The rest of the world is taking the lead.

Switzerland will hold a June 5 referendum on whether to give every adult citizen 2,500 Swiss francs (about $2,600) a month. Ontario, Canada, will conduct an experiment with a basic income later this year. The city of Utrecht in the Netherlands is conducting a pilot program, and Finland is planning a two-year trial. A British proposal is gathering interest. In May, a nonprofit group will start giving 6,000 Kenyans a guaranteed income for at least a decade and follow the results.

Basic-income proposals come in many varieties and have myriad rationales.

Some progressives see it as the ultimate expression of what a developed economy can achieve: a way to lessen poverty and inequality, and ease the pain of job loss and economic stagnation. But in the U.S., many liberals see it as naive and a distraction from more practical priorities, such as a $15 minimum wage and paid family leave.

For conservatives, the attraction is smaller government. Dozens of social-welfare programs now costing U.S. taxpayers about $1 trillion a year could be folded into a basic-income program, they argue.

With no eligibility criteria or enforcement needed, administrative costs would be bare-bones. Waste, fraud and abuse would be greatly reduced, the argument goes, if not close to zero.

In the 1960s, a basic income was part of the mainstream political discussion. President Richard Nixon even proposed an income floor, based on ideas developed by Daniel Patrick Moynihan, then a domestic-policy adviser. The proposal died in part because of liberal opposition to a work requirement and obstruction by a well-organized welfare lobby, Moynihan would later write.

The earned-income tax credit, a form of basic income, took its place, but only to supplement the earnings of the working poor. Its forerunner, the negative income tax, was proposed in 1962 by conservative economist Milton Friedman. One of his aims was to end the “earnings cliff,” in which government aid disappears once income exceeds a cap. Such a limit discourages recipients from working, a consequence that keeps them poor and dependent.

The tax credit is still around and widely considered an effective anti-poverty program, but the earnings-cliff issue has only gotten worse: The U.S. now has 80-plus low-income programs, each with its own eligibility rules and earnings caps.

The idea of a universal basic income is enjoying a renaissance today, not only in Washington think tanks but in Silicon Valley, as my Bloomberg View colleague, Justin Fox, has written. Y Combinator, a venture-capital firm, is launching a five-year research project, for example. The goal is to give a randomly selected group of people a monthly check to see if they sit around and play video games or create economic value.

Why does Silicon Valley care? It can see the role of technology in accelerating job losses in the U.S. Two Oxford professors wrote recently that about 47 percent of U.S. jobs are at risk of being replaced by automation. If that happened, the economy would shrink.

The fear that people with a guaranteed basic income would become slackers may be unfounded. One economist who studied trials conducted in the 1970s in Canada found the opposite: Recipients were healthier and finished high school at higher rates. Adults with full-time jobs worked the same number of hours with one exception: Women took off more time after having a baby, an utterly reasonable outcome.

Yes, the costs of guaranteeing 322 million Americans $10,000 a year would be prohibitive — a whopping $3.2 trillion a year.

But by excluding 45 million retirees who already receive a basic income through Social Security, the cost falls to $2.7 trillion. And if the benefit is phased out for households earning more than $100,000 (that would be 20 percent of the U.S.’s 115 million households, or about 70 million people, assuming three to a household), the cost declines to about $2 trillion.

Now we’re getting close to the $1 trillion cost of all those unemployment checks, tax credits, food stamps, housing vouchers and myriad other means-tested benefits that a basic income could supplant.
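The column’s back-of-the-envelope numbers are easy to check. Here is a minimal sketch in Python that reproduces the arithmetic above; the population, retiree, and household figures are the article’s own round numbers, not precise census data.

```python
# Reproduce the column's back-of-the-envelope basic-income arithmetic.
# All inputs are the article's round numbers, not precise census figures.

STIPEND = 10_000            # annual payment per person, in dollars
POPULATION = 322_000_000    # U.S. residents
RETIREES = 45_000_000       # people already receiving Social Security
HOUSEHOLDS = 115_000_000    # U.S. households
HIGH_INCOME_SHARE = 0.20    # share of households earning over $100,000
PEOPLE_PER_HOUSEHOLD = 3    # the article's assumption

gross_cost = POPULATION * STIPEND
cost_ex_retirees = (POPULATION - RETIREES) * STIPEND
phased_out_people = HOUSEHOLDS * HIGH_INCOME_SHARE * PEOPLE_PER_HOUSEHOLD
cost_with_phase_out = (POPULATION - RETIREES - phased_out_people) * STIPEND

print(f"Everyone:             ${gross_cost / 1e12:.2f} trillion")           # 3.22
print(f"Excluding retirees:   ${cost_ex_retirees / 1e12:.2f} trillion")     # 2.77
print(f"With $100k phase-out: ${cost_with_phase_out / 1e12:.2f} trillion")  # 2.08
```

The script lands on $3.22 trillion, $2.77 trillion and $2.08 trillion, which the column reports as $3.2 trillion, $2.7 trillion and about $2 trillion.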

Here is where liberals start to get queasy. They don’t like that a basic income would replace the safety net, even when assured that some programs, including education, job training and entitlements like Medicare, would be maintained. They worry that the civil servants who now run programs would be laid off. And they fear that a basic income would, in the end, be less than what many people get when all the federal government’s cash and social-service programs are combined.

Those are valid concerns. But as other countries test the idea and seek improvements in their social-welfare systems, will it make sense for the U.S. to maintain an expensive crazy-quilt of programs, many of which have not lifted people out of poverty and dependence? A Social-Security-for-all approach might not seem like such a fantasy after all.

Honeybees beat the heat

Insects fan wings to manage deadly temperatures

Honeybees gather at the entrance of their new home after beekeepers transferred 25,000 honeybees to two hives installed on the lawn of the governor’s mansion in Olympia, Wash. Ted S. Warren, Associated Press

 

Honeybees have developed a way to survive sharp temperature spikes by fanning their wings, cooling hives to keep bee larvae from baking, University of Colorado scientists found.

But honeybees adjusted their behavior to the extent necessary only when they were in groups of 10 or more — the insect equivalent of flash mobs relying on decentralized collection of information on temperature, the scientists concluded in a peer-reviewed study published this week in the British science journal Animal Behaviour.

“How do large, decentralized societies deal with traumatic changes in order to stay alive? We humans know our society could potentially be at risk as well. And we can respond as well,” said biologist Chelsea Cook, lead author of the study done at CU-Boulder’s Department of Ecology and Evolutionary Biology.

“Bees collect the most relevant information at the time and use that information to respond. Groups of 10 were better at responding because they had more individuals gathering more information and sharing that information,” Cook said.

Solo bees in the study failed to fan their wings as temperatures increased. Those bees cooked along with their eggs.

Previous research had established honeybees’ ability to fan their wings rapidly to try to maintain a stable temperature for their vulnerable larvae inside hives — ideally below 96.8 degrees Fahrenheit (36 degrees Celsius). Otherwise, bee larvae cannot survive heat spikes.

While queen bees lay all the eggs in a colony, no authority figure exists controlling bee behavior. Scientists regard honeybee colonies as models for self-organizing and decentralized information-gathering aimed at survival.

CU researchers said they used hot plates to heat bees. They observed that, when temperatures spiked by 3.6 degrees Fahrenheit within one minute, bees began fanning their wings with an intensity that cooled hives enough for larvae to live. A more gradual increase in temperature brought a lesser response.

Solo bees stayed dormant, and clusters of three bees were relatively slow to fan wings.

The scientists concluded that larger decentralized groups are better than individuals at assessing and reacting to abrupt changes. They said that’s because, in fast-changing environments, information quickly becomes outdated.
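One rough way to see why group size matters is a toy probability sketch (my own illustration in Python, not the researchers’ model): if each bee independently notices a sudden spike and starts fanning with some modest probability, the chance that at least one member of a ten-bee group responds, and can then be joined by the rest, is far higher than the chance that a lone bee responds. The detection probability below is invented for illustration.

```python
import random

def group_responds(group_size: int, p_detect: float = 0.3) -> bool:
    """True if at least one bee in the group notices the spike and fans."""
    return any(random.random() < p_detect for _ in range(group_size))

def response_rate(group_size: int, trials: int = 100_000) -> float:
    """Estimate how often a group of this size mounts a fanning response."""
    return sum(group_responds(group_size) for _ in range(trials)) / trials

for size in (1, 3, 10):
    print(f"group of {size:2d}: responds in ~{response_rate(size):.0%} of trials")

# With p_detect = 0.3, a lone bee responds about 30% of the time, a trio about
# 66%, and a group of ten about 97%, the same qualitative pattern the study
# reports, though the real mechanism involves shared information, not just
# independent chance detections.
```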

Bruce Finley: 303-954-1700, bfinley@denverpost.com or @finleybruce