Alarm after vomiting passenger dies on flight from Nigeria to JFK | New York Post

A plane from Nigeria landed at JFK Airport Thursday with a male passenger aboard who had died during the flight after a fit of vomiting — and CDC officials conducted a “cursory” exam before announcing there was no Ebola and turning the corpse over to Port Authority cops to remove, Rep. Peter King said on Thursday.

via New York Post.

Just how much dishonesty is going on here?

We are told that it would be “counterproductive” to ban flights from infected areas into the US.

We are told that it is not airborne, even though it apparently is: http://www.nature.com/srep/2012/121115/srep00811/full/srep00811.html

This is crazy.

 

“Golden Rice Opponents Should Be Held Accountable for Health Problems Linked to Vitamin A Deficiency”

Except for the regulatory approval process, Golden Rice was ready to start saving millions of lives and preventing tens of millions of cases of blindness in people around the world who suffer from Vitamin A deficiency.

It’s still not in use anywhere, however, because of the opposition to GM technology.
via Scientific American

Sure – we can arrest people for interfering with scientists’ right to save the world, because scientists are so expert that they know what they’re doing and can absolutely guarantee that everything will work as promised…so that means if they’re wrong, they can be held liable for manslaughter if anyone dies, right?

Right?

Whaddya mean “that’s not how science works”?

Baby born to a mother who had taken thalidomide while pregnant. Image via Wikipedia.

Authority means accountability. If scientists want the one, they should be ready to accept the other.

And that doesn’t even touch the issue of whether they are taking too much license with the environment we all share. I hate saying that, because I am not at all a fan of environmentalists, and I hate sounding like them. To me the question is not environment vs. science, but rather the correct way to handle risk. The history of science is full of projects that crashed first and only learned to fly after examining what went wrong the first three or seven or fifty times. Scientists don’t own the environment. We all do. That is why the correct way to win debates over whether there is such a thing as “genetic pollution,” or whether cross-pollination is a legitimate concern, is to persuade the voters – not to punish thoughtcrimes, as this writer advocates, by making people criminally liable for invented offenses just because those people and their hard-to-rebut arguments happen to be politically inconvenient.

…or just because the scientific community doesn’t know how to effectively rebut a valid point?

…or just because the scientific community doesn’t want to even try, because they think people should just obey?

Maybe if scientists want to go back to the good old days – when people still trusted them – they could start with an apology for their own past lack of accountability (which is why people stopped trusting them, after all). Blind obedience hasn’t worked out very well for too many of us.

Milgram Experiment advertisement. Volunteers were treated unethically. Image via Wikipedia.

The history of science is littered with experiments that were supposed to be safe but went wrong. A disturbing number of these science-gone-wrong stories have occurred in the Third World. Scientists have a long and ugly history of using developing-world populations as their personal guinea pigs. For example, most people have heard of the Tuskegee syphilis experiment – but how many people know that after it was exposed and shut down, the scientists moved it overseas?

The Commission confirms that despite knowledge that it was unethical, US government medical scientists PURPOSELY infected “at least 1,300 who were exposed to the sexually transmitted diseases syphilis, gonorrhea and chancroid” to study the effects of penicillin. At least 83 subjects died.

Reading this article, one gets the sense that wanting to experiment on Third World populations is what this is all about. Poverty isn’t caused by lack of resources. It’s caused by corruption and other political problems. We already have more than enough food to feed the world. So don’t fall for the guy using Third World poverty-stricken people as meat shields: this is not about solving the problems of the poor. It’s about the question of whether scientists promising awesome things have the right to bypass that part of the political process where they have to prove their awesome products are safe and worthwhile – to our satisfaction, not their own.

In other words, it’s about self-governance (as opposed to top-down experts telling us what to want, think, feel, need, desire, use, and not use).

And the people who want the right to override our political processes – because they are quote-unquote ‘experts’ – have a history of being ethically stunted people who view the developing world as their own personal sandbox for exploitative experimentation.

But medical ethicists say that even if today’s research is not as egregious as the Guatemala experiment, American companies are still testing drugs on poor, sometimes unknowing populations in the developing world.

Many, like Markel, note that experimenting with AIDS drugs in Africa and other pharmaceutical trials in Third World countries, “goes on every day.”

“It’s not good enough, in my opinion, to protect only people who live in the developed world — but all human beings,” he said.

via ABC

Scientists have relied on bullying to artificially manipulate outcomes – in the case of GMO foods, they have pressured people into falsely equating GMO foods with lower-risk conventional foods. Yes, lower risk. There is risk in GMO foods, and the scientists want us to behave as if there isn’t. That’s the heart of the matter right there – that is what they want, but they are not willing to do what they would have to do to earn that outcome; they want to manipulate the outcome dishonestly. They want to deny the existence of real issues. They want to skip the part where they have to persuade us, and their preferred method for doing this is to replace self-governance with top-down bullying – using the three-step “impending doom” song-and-dance beloved of “progressives” everywhere:

  1. Make optimistic promises about how great the results of the proposed policy will be, then treat those promises as if they’re fact. (How could you be against ending world hunger?)
  2. Make dire predictions of impending doom if the policy is not implemented, and act as if criticizing (or even evaluating) the policy equals wanting that horrible doom to fall. (You don’t just want to end world hunger, but you want everyone to starve and die!)
  3. Ignore or, if necessary, deny the consequences when these grossly exaggerated and highly improbable predictions turn out to be wrong.

There is always risk in science – that is why we don’t hold scientists accountable for the deaths their mistakes cause, even though science has caused a steady stream of death and mutilation. We know that science is frequently wrong. The flip side of this is acknowledging that scientists don’t really know, and aren’t honestly in a position to guarantee safety or certainty. Some of the worst atrocities in the history of science come from scientists losing their objectivity – forgetting that they don’t really know. Getting carried away.

It is accurate and correct to perceive GMO products as risky – potentially very risky – to both health and the environment. It isn’t “anti-science” to point out that risk warrants caution. We don’t actually know they’re safe. Note that the people insisting that we should accept they are safe are people who want all the profits while we are stuck with all the risk. (Normally risk and reward go together, but of course it’s always nicer if you can keep the reward and give some other poor slob the risk.)

The honest way to handle it would be to admit that consumers have good reason to prefer non-manipulated foods – and to price GMO foods lower accordingly. But they don’t want to do that. They want to make it so that you can’t tell whether a food is GMO or not. They want to replace non-GMO foods with GMO foods. They want to own the food supply.

And, no, the fact that they’re willing to forgo profits doesn’t mean anything – not when you’re talking about a product with the power to foster dependency and create market dominance. Remember when Nestlé gave away baby formula? WHOOPS!

If their real goal were to prevent vitamin A deficiency, it wouldn’t be hard to dispense vitamin A to all at-risk populations without forcing farmers into accepting crops that may be wonderful or may cause serious problems.

“Should government force businesses to hire felons?”

Obama’s Equal Employment Opportunity Commission has ruled that the use of background checks in hiring is racially discriminatory. In 2012, the EEOC issued “guidance” to the nation’s businesses, citing statistics showing blacks and Hispanics are convicted of crimes at significantly higher rates than whites. Therefore, the EEOC ruled, excluding job applicants based on their criminal records would have “a disparate impact based on race and national origin.”

The EEOC did not say past felonies could never be considered in job applications. But the guidance made clear that an employer who chooses not to hire a felon could have to present a detailed defense to the EEOC. “The employer needs to … effectively link specific criminal conduct, and its dangers, with the risks inherent in the duties of a particular position,” the guidance said. Employers who cannot prove to the EEOC’s satisfaction that excluding a felon from a particular job is a “business necessity” could be in trouble. And whatever the outcome, the company could have its hands full with a costly lawsuit from the government.

“One bright-line policy you should not adopt is having a no-felons policy,” EEOC commissioner Victoria Lipnic told the U.S. Chamber of Commerce in a March 2012 speech. “If you have that policy, that’s going to be a problem if you’re subject to an EEOC investigation.”

Hearing that, many employers might say: This is crazy. There are companies that will reject a job candidate because he posted something embarrassing on his Facebook page, and the Obama administration is warning businesses they’ll be in trouble if they don’t hire convicted felons?

Of course a business, after a background check, might well choose to hire a felon. But that is the employer’s decision — not the Obama administration’s.

via Washington Examiner

This was an op-ed about an Obama nominee (whose nomination is now squelched), but the policy itself seems to be a real policy (http://www.eeoc.gov/laws/guidance/arrest_conviction.cfm).
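Incidentally, the statistics behind a “disparate impact” claim are simple to sketch. Here is a minimal toy illustration of the kind of selection-rate comparison involved, using the EEOC’s long-standing “four-fifths” rule of thumb for adverse impact; all of the numbers, group labels, and function names below are invented for illustration and are not taken from the guidance itself:

```python
# Toy disparate-impact check for a hiring screen (invented numbers).
# Under the EEOC's "four-fifths" rule of thumb, a selection rate for any
# group below 80% of the highest group's rate is treated as evidence of
# adverse impact.

def selection_rate(passed: int, applied: int) -> float:
    """Fraction of applicants who survive the screen."""
    return passed / applied

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag each group whose rate falls below 4/5 of the highest rate."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical applicant pools run through a no-felons screen.
    rates = {
        "group A": selection_rate(passed=90, applied=100),
        "group B": selection_rate(passed=64, applied=100),
    }
    for group, flagged in four_fifths_flags(rates).items():
        status = "adverse impact indicated" if flagged else "ok"
        print(f"{group}: selection rate {rates[group]:.2f} -> {status}")
```

On these invented numbers, group B’s rate (0.64) is about 71% of group A’s (0.90), below the four-fifths threshold – which is the statistical shape of the argument the EEOC is making about criminal-record screens.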

Screen for felony convictions, and you may be sued.  That’s an actual warning to American employers, courtesy of the Obama EEOC.

via CFIF

What a horrible burden to put on businesses. It’s almost as if Obama wants to drive everyone out of business.

The Texas suit alleges that the EEOC’s guidelines on employers’ use of criminal history effectively intrude on the State of Texas’ “sovereign right to impose categorical bans on the hiring of criminals.”

Specifically, the complaint alleges that the EEOC’s 2012 guidelines would serve to force Texas employers to hire felons under threat of disparate impact investigations and suits prompted by the EEOC.

Currently the Texas Department of Public Safety (DPS) has an absolute ban on hiring felons to become law enforcement agents, one that is supported by Texas law, but may violate the EEOC’s guidelines.

The parade of horribles caused by following the EEOC’s authority includes hiring felons as “[t]roopers, jailers, and school teachers.” In the alternative, ignoring the EEOC’s guidance would risk a hellstorm of Title VII disparate impact investigations, a significant burden even if the investigation is found to be frivolous.

via blogs.findlaw.com

“An Age of Seriousness Returns”

Until Friday, we lived in a world where the West had grown comfortable that Francis Fukuyama was right and history had ended. Events would still happen, but the world would inevitably evolve toward liberal democracy. We all learned in college that liberal democracies were the most stable and the least prone to violence of all forms of government. Barack Obama, David Cameron, the French buffoon with his mistress du jour, and the rest of the West could sit around tables and worry about the environment, income inequality, unisex bathrooms, and other issues. The West had concluded there were no longer national interests, but global interests where we would all win or all lose together.

Hell, just last week the Ninth Circuit ruled that an American public school could ban displays of the American flag lest Mexican nationalist-oriented students be offended.

It is the foreign policy view of the naive, the rube, and the elite in comfortable times.

via RedState

If he’s right, it’s a turning point in our history – that is, the sort of marker historians look for when trying to establish usable boundaries demarcating one historical moment from another (and the sort of thing grandparents want to tell their grandchildren about – and grandchildren later wish they’d listened to more).

Launch code for US nukes was 00000000 for 20 years | Ars Technica

Remember all those cold war movies where nuclear missile crews are frantically dialing in the secret codes sent by the White House to launch nuclear-tipped intercontinental ballistic missiles? Well, for two decades, all the Minuteman nuclear missiles in the US used the same eight-digit numeric passcode to enable their warheads: 00000000.

via Ars Technica.

Attempts To Terraform Mars Could Fail – With No Chance To Try Again

Most science fiction and news stories describe Mars terraforming as a long term but simple process. You warm up the planet first, with greenhouse gases, giant mirrors, impacting comets or some such. You land humans on the surface right away and they introduce lifeforms designed to live on Mars. Over a period of a thousand years or so, life spreads over the planet and transforms it, and Mars becomes a second Earth.

However, no one has yet terraformed a planet. There are many theoretical reasons for supposing it wouldn’t be as easy as that. What’s more, if it goes wrong, this process could leave Mars worse for humans than it is now. It could so alter the planet that it can never be terraformed again in such a simple way.

What happens if you make a mistake with a planet?

Our only attempt so far at making a closed Earth-like ecosystem, Biosphere 2, failed. There, the cause was a chemical reaction with the concrete in the building, which indirectly removed oxygen from the habitat. Nobody predicted this, and it was only detected after the experiment was over. The idea itself doesn’t seem to be fundamentally flawed; it was just a mistake of detail.

In the future perhaps we will try a Biosphere 3 or 4, and eventually get it right. When we build self-enclosed settlements in space such as the Stanford Torus, they will surely go wrong too from time to time in the early stages. But again, you can purge poisonous gases from the atmosphere, and replenish its oxygen. In the worst case, you can evacuate the colonists from the space settlement, vent all the atmosphere, sterilize the soil, and start again.

It is a similar situation with Mars: there are many interactions that could go wrong, and we are sure to make a few mistakes to start with. The difference is that if you make a mistake when you terraform a planet, you likely can’t “turn back the clock” and undo it.

With Mars, we can’t try again with a Mars 2, Mars 3, Mars 4 etc. until we get it right.

via science20.com.

“Doctors on social media share embarrassing photos, details of patients”

Some doctors have misgivings about employing social media in the service of patient care: “What if one finds something that is not warm and fuzzy?” frets resident physician Haider Javed Warraich in a post this week on the New York Times’ Well blog. Despite his reservations, Warraich defends the practice, pointing out that doctors have used online intel to gauge suicide risk, discover relevant undisclosed criminal histories, and contact the families of unresponsive patients.

Social networking was also helpful on the day of the Boston Marathon bombing. Doctors near the finish line tweeted accounts of the attack to local emergency personnel six minutes before official announcements were made, giving staff critical time to prepare for the arrival of victims.

But until the utility of online sharing in health care contexts becomes obvious to hospital operatives, they’ll continue to view it the way the rest of us regard twerking—if we ignore it long enough, surely it will just go away. Nearly 60 percent of the health care professionals surveyed by InCrowd report having no social media access in clinical settings at work.

The American Nurses Association, American Medical Association, and other trade groups have tried to soften administrators’ hard line by setting standards for social media use in the workplace. They’ve published guidelines packed with nuggets like “Pause before you post” and “Be aware that any information [you] post on a social networking site may be disseminated (whether intended or not) to a larger audience.”

via Slate

This really isn’t as difficult as Slate makes it seem.

Social media involving any potentially identifying patient information should be permissible if and only if there is a clear benefit to the patient and privacy precautions are taken.

It’s really that simple.

There’s no reason why doctors need to be digging around or worrying about patients’ undisclosed criminal history, and there’s certainly no reason why we ought to view privacy violations as inevitable.

The life-saving nature of certain types of tweet (for example, the doctors who seek help in assessing suicide risk) suggests that some privacy violations may be justifiable. But there is no reason professionals should not be held to roughly the same standards as other life-saving professions’ ethical codes with regard to judgment calls, and full privacy protections should be waived only when adhering to them would cause serious harm.

Professionals who don’t take privacy seriously should lose their license and face criminal charges. If the profession won’t police itself, the entire profession will suffer a loss of credibility – patients will rightfully lose faith and trust in doctors.

The issue seems somehow more complicated than this in the Slate article because they use examples that border on dishonesty: why would they even include the Boston Marathon bombing incident? What possible reason could they have for treating that situation as if it were somehow in the same category as the incident with the nurses who posted private patient photos on their Facebook pages? The Boston Marathon case could not have involved privacy violations, since the tweeters were writing about what they’d observed in a public situation.

Under no circumstances should patient information be uploaded to any site for reasons that are not beneficial to the patient. Nobody should be afraid to seek medical help for fear that he will end up on a Facebook page, ridiculed by the so-called professionals.

A good rule of thumb might go like this:  if you would be embarrassed, ashamed, or afraid of what people might think if the person whose information you posted found out what you did, you are probably committing a crime.

In 1999 the California HealthCare Foundation issued a report titled “The Future of the Internet in Health Care: Five-Year Forecast,” by Robert Mittman and Mary Cain of the Institute for the Future… overall, the forecast proved remarkably prescient. Its conclusions about online privacy foreshadow the equilibrium most contemporary patients and providers have reached: “[T]here will inevitably be several well-publicized incidents of people being harmed by public releases of their health care information—those exceptional cases will shape the debate,” the report predicts. “[I]n the end, people and organizations will have to learn to live with a less-than-perfect combination of technologies and policies.”

There’s “less than perfect”, and then there’s just professionals who aren’t behaving according to professional standards.

“Our Final Invention: How the Human Race Goes and Gets Itself Killed”

Hardly a day goes by where we’re not reminded about how robots are taking our jobs and hollowing out the middle class. The worry is so acute that economists are busy devising new social contracts to cope with a potentially enormous class of obsolete humans.

Documentarian James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, is worried about robots too. Only he’s not worried about them taking our jobs. He’s worried about them exterminating the human race.

Wait, What?

I’ll grant you that this premise sounds a bit… dramatic, the product of one too many Terminator screenings. But after approaching the topic with some skepticism, it became increasingly clear to me that Barrat has written an extremely important book with a thesis that is worrisomely plausible. It deserves to be read widely. And to be clear, Barrat’s is not a lone voice — the book is rife with interviews of numerous computer scientists and AI researchers who share his concerns about the potentially devastating consequences of advanced AI. There are even think tanks devoted to exploring and mitigating the risks. But to date, this worry has been obscure.

In Barrat’s telling, we are on the brink of creating machines that will be as intelligent as humans….[O]nce we have achieved AGI [artificial general intelligence], the AGI will go on to achieve something called artificial superintelligence (ASI) — that is, an intelligence that exceeds — vastly exceeds — human-level intelligence.

Barrat devotes a substantial portion of the book explaining how AI will advance to AGI and how AGI inevitably leads to ASI. Much of it hinges on how we are developing AGI itself. To reach AGI, we are teaching machines to learn….

… Once a machine built this way reaches human-level intelligence, it won’t stop there. It will keep learning and improving. It will, Barrat claims, reach a point that other computer scientists have dubbed an “intelligence explosion” — an onrushing feedback loop where an intelligence makes itself smarter thereby getting even better at making itself smarter. This is, to be sure, a theoretical concept, but it is one that many AI researchers see as plausible, if not inevitable. Through a relentless process of debugging and rewriting its code, our self-learning, self-programming AGI experiences a “hard take off” and rockets past what mere flesh and blood brains are capable of.

And here’s where things get interesting. And by interesting I mean terrible.

via RealClearTechnology

Feedback loop (Photo credit: Wikipedia)
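The “intelligence explosion” Barrat describes is, at bottom, a compounding feedback loop, and its shape is easy to see in a toy model. Here is a minimal sketch – the growth rule, the gain parameter, and every number below are invented for illustration and are not from Barrat’s book:

```python
# Toy model of a recursive self-improvement feedback loop (hypothetical
# numbers). Each cycle, the system's capacity to improve itself scales
# with its current intelligence, so gains compound: better optimizers
# build better optimizers.

def intelligence_explosion(cycles: int, gain: float = 0.3) -> list[float]:
    """Return intelligence per cycle, starting at human baseline 1.0."""
    level = 1.0
    history = [level]
    for _ in range(cycles):
        # The self-reference is here: the size of the improvement step
        # itself grows with the current level. With a fixed step instead
        # (level += gain), growth would be merely linear.
        level += gain * level * level
        history.append(level)
    return history

if __name__ == "__main__":
    for cycle, level in enumerate(intelligence_explosion(8)):
        print(f"cycle {cycle}: intelligence = {level:,.2f}")
```

For the first few cycles almost nothing seems to happen; then the curve bends sharply upward – the “hard take off” in Barrat’s telling.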

The problem: the people doing this are powerful, and don’t care what we think. We have no say. They worship science and technology. To them, “reproducing” themselves by creating a new race is better than just leaving a son or daughter – and who cares if the rest of the human race is exterminated along the way? That just proves the new race – their child – is ‘better’, right?

Because, it turns out, ethics really is the line that separates a nice place to live from a total nightmare…and reciprocity (the Golden Rule, aka doing unto others as you would have them do unto you) is the key to ethical behavior.