If the Internet becomes a public utility, you’ll pay more. Here’s why.


The Federal Communications Commission is in the middle of a high-stakes decision that could raise taxes for close to 90 percent of Americans. The commission is considering whether to reclassify broadband as a telecommunications service; in doing so, Washington would trigger new taxes and fees at the state and local level. Continue reading

Half a billion SIM cards could be bugged or have information stolen from them because of ‘serious security flaws’


(Daily Mail UK) An eighth of all SIM cards used around the world could be at risk of fraud, theft, or being bugged, a German security expert has claimed.

Karsten Nohl, a cryptographer, discovered the serious security flaw that could let hackers send hidden text messages to affected handsets and infect them with a virus - regardless of what operating system the phone runs on. Continue reading

Maine Enacts Pioneering Law Prohibiting Warrantless Cellphone Tracking


On Tuesday, Maine became the second state in recent weeks to enact a law that will force authorities to obtain a warrant from a judge before acquiring either historic or real-time location data about a person’s movements. Maine’s legislature voted to pass the tracking law after it sailed through both houses in the state in May. According to the ACLU of Maine, the legislature took the decisive step of overriding a veto by Gov. Paul LePage, who had declined to sign off on the bill. Continue reading

AT&T joins Verizon, Facebook in selling customer data


RT News

AT&T has announced that it will begin selling customers’ smartphone data to the highest bidder, putting the telecommunications giant in line with Verizon, Facebook and other competitors that quietly use a consumer’s history for marketing purposes. Continue reading

NSA admits listening to U.S. phone calls without warrants


National Security Agency discloses in secret Capitol Hill briefing that thousands of analysts can listen to domestic phone calls. That authorization appears to extend to e-mail and text messages too.

NSA Director Keith Alexander says his agency’s analysts, who until recently included Edward Snowden among their ranks, take protecting “civil liberties and privacy and the security of this nation to their heart every day.” Continue reading

IRS Buying Spying Equipment: Covert Cameras in Coffee Trays, Plants


The IRS, currently in the midst of scandals involving the targeting of conservative groups and lavish taxpayer-funded conferences, is ordering surveillance equipment that includes hidden cameras in coffee trays, plants and clock radios.

The IRS wants to secure the surveillance equipment quickly – it posted a solicitation on June 6 and is looking to close the deal by Monday, June 10.  The agency already has a company lined up for the order but is not commenting on the details.

“The Internal Revenue Service intends to award a Purchase Order to an undisclosed Corporation,” reads the solicitation.

“The following descriptions are vague due to the use and nature of the items,” it says.

“If you feel that you can provide the following equipment, please respond to this email no later than 4 days after the solicitation date,” the IRS said.

Among the items the agency will purchase are four “Covert Coffee tray(s) with Camera concealment,” and four “Remote surveillance system(s)” with “Built-in DVD Burner and 2 Internal HDDs, cameras.”

The IRS also is buying four cameras to hide in plants: “(QTY 4) Plant Concealment Color 700 Lines Color IP Camera Concealment with Single Channel Network Server, supports dual video stream, Poe [Power over Ethernet], software included, case included, router included.”

Finishing out the order are four “Color IP Camera Concealment with single channel network server, supports dual video stream, poe, webviewer and cms software included, audio,” and two “Concealed clock radio.”

“Responses to this notice must be received by this office within 3 business days of the date of this synopsis by 2:00 P.M. EST, June 10, 2013,” the IRS said.  Interested vendors are to contact Ricardo Carter, a Contract Specialist at the IRS.

“If no compelling responses are received, award will be made to the original solicited corporation,” the IRS said.

The original solicitation was only available to private companies for bids for 19 business hours.

The notice was posted at 11:07 a.m. on June 6 and had a deadline of 2:00 p.m. on Monday. Taking a normal 9-to-5 work week, the solicitation was open for bids for six hours on Thursday, eight hours on Friday, and five hours on Monday, for a total of 19 hours.

The response date was changed on Monday, pushed back to 2:00 p.m. on Tuesday, June 11.

The location listed for the solicitation is the IRS’s National Office of Procurement, in Oxon Hill, Md.

“The Procurement Office acquires the products and services required to support the IRS mission,” according to its website.

In recent weeks the IRS has been at the center of multiple scandals, admitting to targeting Tea Party groups and subjecting them to greater scrutiny when applying for non-profit status during the 2010 and 2012 elections.

A report by the Treasury Inspector General for Tax Administration revealed that groups with names like “patriot” in their titles were singled out, required to complete lengthy personal questionnaires (often multiple times) and had their nonprofit status delayed, sometimes for more than three years.

Last week a second Inspector General report detailed nearly $50 million in wasteful spending by the agency on conferences, in which employees stayed at luxurious Las Vegas hotels, paid a keynote speaker $17,000 to paint a picture of U2 singer Bono, and spent $50,000 on parody videos of “Star Trek.”

Requests for comment from the IRS and Mr. Carter were not returned before this story was posted.

CNSNews.com asked IRS spokesmen Dean Patterson and Anthony Burke to explain the reasoning behind the solicitation, where the surveillance equipment will be used, why the request was so urgent, and whether the request has anything to do with the recent scandals at the IRS.

http://cnsnews.com/news/article/irs-buying-spying-equipment-covert-cameras-coffee-trays-plants

The Facebook Home disaster


The reviews are in: Facebook Home, Mark Zuckerberg’s grandiose stab at totally controlling our mobile experience, is an unmitigated disaster.

On Wednesday, AT&T announced that it was dropping the price of the HTC First smartphone, which comes with Facebook Home built in, from $99 to 99 cents. Think about that: a new smartphone, priced to jump off the shelves at Dollar General. It’s a great deal, but it is also hugely embarrassing for Zuckerberg.

A little over a month ago, I wrote that the only way I could see a Facebook phone making sense was if Facebook planned to cut deals with the phone carriers to give the phone away for free. But such a strategy doesn’t seem to be what’s in play here. Best guess, no one wants to buy a Facebook phone.

For confirmation we need only look at the Google Play store, where the Facebook Home app, which can be installed on select Android phones, has now fallen to the No. 338 ranking in the category of free apps. That’s 200 spots lower than it ranked just two weeks ago.

Even worse: More than half of Facebook Home’s 15,000 user reviews give the app just one star. A typical review:

Uninstalled after 1 min
Just takes a nice phone and ruins the interface. Waste of time.

The numbers represent a remarkable rejection of an initiative that Facebook pushed with a high-profile national advertising campaign and a dog-and-pony rollout at its Menlo Park headquarters. Smartphone users are announcing, loud and clear, that they do not want Facebook in charge of their interface with the mobile universe.

http://www.salon.com/2013/05/09/the_facebook_home_disaster/

The U.S. Government Is Monitoring All Phone Calls, All Emails And All Internet Activity


Big Brother is watching everything that you do on the Internet and listening to everything that you say on your phone.  Every single day in America, the U.S. government intercepts and stores nearly 2 billion emails, phone calls and other forms of electronic communication.  Former NSA employees have come forward and have described exactly what is taking place, and this surveillance activity has been reported on by prominent news organizations such as the Washington Post, Fox News and CNN, but nobody really seems to get too upset about it.  Either most Americans are not aware of what is really going on or they have just accepted it as part of modern life.  But where will this end?  Do we really want to live in a dystopian “Big Brother society” where the government literally reads every single thing that we write and listens to every single thing that we say?  Is that what the future of America is going to look like?  If so, what do you think our founding fathers would have said about that?

Many Americans may not realize this, but nothing that you do on your cell phone or on the Internet will ever be private again.  According to the Washington Post, the NSA intercepts and stores an astounding amount of information every single day…

Every day, collection systems at the National Security Agency intercept and store 1.7 billion e-mails, phone calls and other types of communications. The NSA sorts a fraction of those into 70 separate databases.

But even the Washington Post may not have been aware of the full scope of the surveillance.  In fact, National Security Agency whistleblower William Binney claims that the NSA has collected “20 trillion transactions” involving U.S. citizens…

In fact, I would suggest that they’ve assembled on the order of 20 trillion transactions about U.S. citizens with other U.S. citizens.

And NSA whistleblowers have also told us that the agency “has the capability to do individualized searches, similar to Google, for particular electronic communications in real time through such criteria as target addresses, locations, countries and phone numbers, as well as watch-listed names, keywords, and phrases in email.”

So the NSA must have tremendous data storage needs.  That must be why they are building such a mammoth data storage center out in Utah.  According to Fox News, it will have the capability of storing 5 zettabytes of data…

The NSA says the Utah Data Center is a facility for the intelligence community that will have a major focus on cyber security. The agency will neither confirm nor deny specifics. Some published reports suggest it could hold 5 zettabytes of data. (Just one zettabyte is the equivalent of about 62 billion stacked iPhone 5s – a stack that stretches past the moon.)
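
For anyone who wants to sanity-check that comparison, the back-of-the-envelope arithmetic below works it out. It assumes a 16 GB iPhone 5 about 7.6 mm thick and an average distance to the moon of roughly 384,400 km; those figures are my assumptions, not something stated in the Fox News report.

```python
# Rough sanity check of the "62 billion iPhones stacked past the moon" comparison.
# The iPhone capacity/thickness and the Earth-moon distance are assumed values.

ZETTABYTE_BYTES = 10 ** 21
IPHONE_CAPACITY_BYTES = 16 * 10 ** 9   # assumed 16 GB iPhone 5
IPHONE_THICKNESS_M = 0.0076            # assumed 7.6 mm per phone
MOON_DISTANCE_M = 384_400 * 1000       # average Earth-moon distance, ~384,400 km

phones_per_zettabyte = ZETTABYTE_BYTES / IPHONE_CAPACITY_BYTES
stack_height_m = phones_per_zettabyte * IPHONE_THICKNESS_M

print(f"{phones_per_zettabyte / 1e9:.1f} billion phones per zettabyte")  # ~62.5 billion
print(f"stack height: {stack_height_m / 1000:,.0f} km")                  # ~475,000 km
print("reaches past the moon:", stack_height_m > MOON_DISTANCE_M)        # True
```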

Are you outraged by all of this?

You should be.

The U.S. government is spying on the American people and yet they continue to publicly deny that they are actually doing it.

Last week, this government spying program was once again confirmed by another insider.  What former FBI counterterrorism agent Tim Clemente told Erin Burnett of CNN is absolutely astounding

BURNETT: “Tim, is there any way, obviously, there is a voice mail they can try to get the phone companies to give that up at this point. It’s not a voice mail. It’s just a conversation. There’s no way they actually can find out what happened, right, unless she tells them?”

CLEMENTE: “No, there is a way. We certainly have ways in national security investigations to find out exactly what was said in that conversation. It’s not necessarily something that the FBI is going to want to present in court, but it may help lead the investigation and/or lead to questioning of her. We certainly can find that out.”

BURNETT: “So they can actually get that? People are saying, look, that is incredible.”

CLEMENTE: “No, welcome to America. All of that stuff is being captured as we speak whether we know it or like it or not.”

Yes, “all of that stuff” is most definitely being “captured” and it is time for the Obama administration to be honest with the American people about what is actually going on.

Meanwhile, the recent bombing in Boston has many of our politicians calling for even tighter surveillance.

For example, New York City Mayor Michael Bloomberg recently said that our interpretation of the U.S. Constitution will “have to change” to deal with the new threats that we are facing.  More “smart cameras” are going up in New York, and Bloomberg says that we are “never going to know where all of our cameras are”.  The following is from a recent RT article

New York City police officials intend to expand the already extensive use of surveillance cameras throughout town. The plan, unveiled Thursday, comes as part of a drive for increased security around the US following the Boston Marathon attack.

New York City Police Department Commissioner Ray Kelly announced the plan during a press conference with Mayor Michael Bloomberg, in which the two announced that the suspected Boston Marathon bombers were planning to attack New York next. The pair said they hope to discourage criminals by using so-called “smart cameras” that will aggregate data from 911 alerts, arrest records, mapped crime patterns, surveillance cameras and radiation detectors, among other tools, according to The Verge.

“You’re never going to know where all of our cameras are,” Bloomberg told reporters gathered outside City Hall. “And that’s one of the ways you deter people; they just don’t know whether the person sitting next to you is somebody sitting there or a detective watching.”

Will you feel safer if the government is watching you 100% of the time?

Do you want them to see what you are doing 100% of the time?

You might want to think about that, because that is where all of this is headed.

In fact, the truth is that spy cameras are not just going up all over New York City.  Most Americans may not realize this, but a network of spy cameras is now going up all over the nation.  The following is an excerpt from one of my previous articles

“You are being watched.  The government has a secret system – a machine – that spies on you every hour of every day.”  That is how each episode of “Person of Interest” on CBS begins.  Most Americans that have watched the show just assume that such a surveillance network is completely fictional and that the government would never watch us like that.  Sadly, most Americans are wrong.  Shocking new details have emerged this week which prove that a creepy nationwide network of spy cameras is being rolled out across the United States.  Reportedly, these new spy cameras are “more accurate than modern facial recognition technology”, and every few seconds they send back data from cities and major landmarks all over the United States to a centralized processing center where it is analyzed.  The authorities believe that the world has become such a dangerous place that the only way to keep us all safe is to watch what everyone does all the time.  But the truth is that instead of “saving America”, all of these repressive surveillance technologies are slowly killing our liberties and our freedoms.  America is being transformed into an Orwellian prison camp right in front of our eyes, and very few people are even objecting to it.

For many more examples of how the emerging Big Brother surveillance grid is tightening all around us, please see my previous article entitled “19 Signs That America Is Being Systematically Transformed Into A Giant Surveillance Grid”.

Meanwhile, Barack Obama is telling us to reject those that are warning us about government tyranny.  The following is what he told the graduating class of The Ohio State University on May 5th, 2013

Unfortunately, you’ve grown up hearing voices that incessantly warn of government as nothing more than some separate, sinister entity that’s at the root of all our problems. Some of these same voices also do their best to gum up the works. They’ll warn that tyranny is always lurking just around the corner. You should reject these voices.

So what do you think?

Should we just ignore all of the violations of our privacy that are happening?

Should we just ignore what the U.S. Constitution says about privacy and let the government monitor us however it wants to?

http://investmentwatchblog.com/the-u-s-government-is-monitoring-all-phone-calls-all-emails-and-all-internet-activity/

Police want ‘kill switch’ for smartphones


An epidemic of cell phone swiping has prompted calls for a ‘kill switch’ that would render mobile devices inoperable if they end up in the wrong hands.

In San Francisco, California, half of the robberies reported in the last year involved the loss of a cell phone. George Gascon is the city’s district attorney and says things don’t have to stay that way.

“Unlike other types of crimes, this is a crime that could be easily fixed with a technological solution,” Gascon told the New York Times recently.

Apple’s iPhone and other smartphone models retail new in stores for hundreds of dollars apiece, and they are far from worthless on the black market. According to the Times, smartphones sold on the street in San Francisco can fetch upwards of $500, substantially less than a brand new iPhone 5 will set a customer back. But would that be any different if, say, stolen cell phones couldn’t be used again? Gascon and others think so and are calling on cell phone companies like Apple and others to implement a new technology that could remotely turn off a stolen phone for good.

“We know that the technology can be developed to prevent this. This is more about social responsibility than economic gain,” he added to the Associated Press.

But that economic gain is indeed something that cell phone makers don’t want to miss out on. Last year almost 174 million new cellphones were sold in the United States, totaling roughly $69 billion in sales. In cities such as San Francisco or Washington, DC — where 40 percent of last year’s robberies involved cell phones — victims of mobile phone theft are stuck spending hundreds, sometimes thousands of dollars a year on replacement phones. If swiped smartphones couldn’t be used, Gascon said the number of incidents would likely go down.

“What I’m talking about is creating a kill switch so that when the phone gets reported stolen, it can be rendered inoperable in any configuration or carrier,” Gascon told the newspaper.
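
To make the idea concrete, here is a minimal sketch of what Gascon is describing, under my own assumptions: a handset (or its carrier) looks the device’s IMEI up in a shared stolen-phone list and refuses to work if it finds a match. Every name in the snippet is hypothetical; it is not any carrier’s or manufacturer’s actual system.

```python
# Illustrative sketch only: a "kill switch" as a lookup against a shared
# stolen-device blocklist keyed by IMEI. Names and data are hypothetical.

STOLEN_IMEIS = {"356938035643809"}  # stand-in for an industry-wide stolen-phone database


def kill_switch_check(device_imei: str) -> str:
    """Return the action a handset would take after consulting the blocklist."""
    if device_imei in STOLEN_IMEIS:
        # A real implementation would wipe credentials and lock the radio,
        # making the handset worthless on the resale market.
        return "device disabled"
    return "device operating normally"


print(kill_switch_check("356938035643809"))  # device disabled
print(kill_switch_check("490154203237518"))  # device operating normally
```

The voluntary stolen-phone database that Chief Lanier mentions further down works along these lines, but, as she notes, nothing currently forces carriers to subscribe to it.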

Chuck Wexler is the executive director of the Police Executive Research Forum, and explained to the Times that the cell phone industry could have started taking these steps years ago.

“If you look at auto theft, it has really plummeted in this country because technology has advanced so much and the manufacturers recognize the importance of it,” he said. “The cellphone industry has for the most part been in denial. For whatever reasons, it has been slow to move.”

It will likely take a whole lot more than asking politely to have Apple, Google and others change the way their phones work, though. Gascon met with Apple’s government liaison officer Michael Foulkes last year to discuss the ‘kill switch’ option and described the encounter to the Times as “disappointing.”

“For me, a technical solution is probably better than just a criminal solution,” Gascon said. “We can always create more laws, but look at how long it already takes to prosecute somebody at the expense of the taxpayers?”

Washington, DC Police Chief Cathy Lanier emailed the Associated Press to say she advocates new federal laws that will require wireless service providers to participate in a national stolen phone database that, while in place today, doesn’t mandate that carriers subscribe to the system.

“This is a voluntary agreement, and the decision makers, heads of these (wireless) companies, may transition over time and may not be in the same position five years from now,” she said. “Something needs to be put in place to protect consumers.”

http://rt.com/usa/kill-switch-cell-phone-892/

Obama administration bypasses CISPA by secretly allowing Internet surveillance


(RT) Scared that CISPA might pass? The federal government is already using a secretive cybersecurity program to monitor online traffic and enforce CISPA-like data sharing between Internet service providers and the Department of Defense.

The Electronic Privacy Information Center has obtained over 1,000 pages of documents pertaining to the United States government’s use of a cybersecurity program after filing a Freedom of Information Act request, and CNET reporter Declan McCullagh says those pages show how the Pentagon has secretly helped push for increased Internet surveillance.

“Senior Obama administration officials have secretly authorized the interception of communications carried on portions of networks operated by AT&T and other Internet service providers, a practice that might otherwise be illegal under federal wiretapping laws,” McCullagh writes.

That practice, McCullagh recalls, was first revealed when Deputy Secretary of Defense William Lynn disclosed the existence of the Defense Industrial Base (DIB) Cyber Pilot in June 2011. At the time, the Pentagon said the program would allow the government to help the defense industry safeguard the information on their computer systems by sharing classified threat information between the Department of Defense, the Department of Homeland Security and the Internet service providers (ISP) that keep government contractors online.

“Our defense industrial base is critical to our military effectiveness. Their networks hold valuable information about our weapons systems and their capabilities,” Lynn said. “The theft of design data and engineering information from within these networks greatly undermines the technological edge we hold over potential adversaries.”

Just last week the US House of Representatives voted in favor of the Cyber Intelligence Sharing and Protection Act, or CISPA — legislation that, if signed into law, would allow ISPs and private Internet companies across the country, like Facebook and Google, to share similar threat data with the federal government without being held liable for violating their customers’ privacy. As it turns out, however, the DIB Cyber Pilot has expanded exponentially in recent months, suggesting that a significant chunk of Internet traffic is already subjected to governmental monitoring.

In May 2012, less than a year after the pilot was first unveiled, the Defense Department announced the expansion of the DIB program. Then this past January, McCullagh says it was renamed the Enhanced Cybersecurity Services (ECS) and opened up to a larger number of companies — not just DoD contractors. An executive order signed by US President Barack Obama earlier this year will let all critical infrastructure companies sign on to ECS starting this June, likely in turn bringing on board entities in energy, healthcare, communication and finance.

Although the 1,000-plus pages obtained in the FOIA request haven’t been posted in full on the Web just yet, a sampling of that trove published by EPIC on Wednesday begins to show just exactly how severe the Pentagon’s efforts to eavesdrop on Web traffic have been.

In one document, a December 2011 slide show on the legal policies and practices regarding the monitoring of Web traffic on DIB-linked systems, the Pentagon instructs the administrators of those third-party computer networks on how to implement the program and, as a result, erode their customers’ expectation of privacy.

In one slide, the Pentagon explains to ISPs and other system administrators how to be clear in letting their customers know that their traffic was being fed to the government. Key elements to keep in mind, wrote the Defense Department, were that DIB “expressly covers monitoring of data and communications in transit rather than just accessing data at rest.”

“[T]hat information transiting or stored on the system may be disclosed for any purpose, including to the government,” it continued. Companies participating in the pilot program were told to let users know that monitoring would exist “for any purpose,” and that users have no expectation of privacy regarding communications or data stored on the system.

According to the 2011 press release on the DIB Cyber Pilot, “the government will not monitor, intercept or store any private-sector communications through the program.” In a privacy impact assessment of the ECS program that was published in January by the DHS though, it’s revealed that not only is information monitored, but among the data collected by investigators could be personally identifiable information, including the header info from suspicious emails. That would mean the government sees and stores who you communicate with and what kind of subject lines are used during correspondence.

The DHS says that personally identifiable information could be retained if “analytically relevant to understanding the cyber threat” in question.

Meanwhile, the lawmakers in Congress that overwhelmingly approved CISPA just last week could arguably use a refresher in what constitutes a cyber threat. Rep. Michael McCaul (R-Texas) told his colleagues on the Hill that “Recent events in Boston demonstrate that we have to come together as Republicans and Democrats to get this done,” and Rep. Dan Maffei (D-New York) made unfounded claims during Thursday’s debate that the whistleblowing website WikiLeaks is pursuing efforts to “hack into our nation’s power grid.”

Should CISPA be signed into law, telecommunication companies will be encouraged to share Internet data with the DHS and Department of Justice for so-called national security purposes. But even if the president pursues a veto as his advisers have suggested, McCullagh says few will be safe from this secretive cybersecurity operation already in place.

The tome of FOIA pages, McCullagh says, shows that the Justice Department has actively assisted telecoms as of late by letting them off the hook for Wiretap Act violations. Since the sharing of data between ISPs and the government under the DIB program and now ECS violates federal statute, the Justice Department has reportedly issued an undeterminable number of “2511 letters” to telecoms: essentially written approval to ignore provisions of the Wiretap Act in exchange for immunity.

“The Justice Department is helping private companies evade federal wiretap laws,” EPIC Executive Director Marc Rotenberg tells CNET. “Alarm bells should be going off.”

In an internal Justice Department email cited by McCullagh, Associate Deputy Attorney General James Baker is alleged to write that ISPs will likely request 2511 letters and that the ECS-participating companies “would be required to change their banners to reference government monitoring.”

“These agencies are clearly seeking authority to receive a large amount of information, including personal information, from private Internet networks,” EPIC staff attorney Amie Stepanovich adds to CNET. “If this program was broadly deployed, it would raise serious questions about government cybersecurity practices.”

Department of Justice secretly gave Internet service providers immunity when conducting surveillance


Activist Post

According to documents obtained by the Electronic Privacy Information Center (EPIC), the Department of Justice secretly authorized the interception of electronic communications on certain parts of AT&T and other Internet service providers’ networks.

Previously, EPIC obtained documents on the National Security Agency’s Perfect Citizen program which involves monitoring private computer networks. This latest revelation deals with an entirely different program first called Defense Industrial Base Cyber Pilot, or DIB Cyber Pilot, though it is now operating as Enhanced Cybersecurity Services.

While this type of activity might be illegal under federal wiretapping legislation, the Obama administration gave the companies immunity when monitoring networks under a cybersecurity pilot program.

“The Justice Department is helping private companies evade federal wiretap laws,” said Marc Rotenberg, executive director of EPIC. “Alarm bells should be going off.”

The alarm bells should get louder when one realizes that while this collaboration between the Department of Defense (DoD), the Department of Homeland Security (DHS) and the private sector began focusing only on defense contractors, the program was massively expanded.

Thanks to an executive order dated February 12, 2013 entitled, “Improving Critical Infrastructure Cybersecurity,” the program was widened significantly.

The order expanded it to cover other “critical infrastructure industries” which includes “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.”

Declan McCullagh, writing for CNET, points out that this includes “all critical infrastructure sectors including energy, healthcare, and finance starting June 12.”

The documents reveal that the National Security Agency (NSA) and Defense Department were directly involved in pushing for this secret legal authorization.

NSA director Keith Alexander participated in some of the discussions personally, according to the documents.

Attorneys from the Justice Department signed off on the immunity despite the Department of Justice’s and industry participants’ initial reservations, according to CNET.

The legal immunity was given to participating internet service providers in the form of “2511 letters,” as the participants in the confidential discussions refer to them.

A 2511 letter is named after the Wiretap Act, 18 USC 2511, the provision that the Department of Justice agrees not to enforce against the participants.

According to CNET, “the 2511 letters provided legal immunity to the providers by agreeing not to prosecute for criminal violations of the Wiretap Act. It’s not clear how many 2511 letters were issued by the Justice Department.”

DIB Cyber Pilot was first publicly disclosed in 2011 by then Deputy Secretary of Defense William Lynn but in 2012, the pilot program expanded into an ongoing program dubbed Joint Cybersecurity Services Pilot. As of January it was renamed yet again, this time to Enhanced Cybersecurity Services program.

The same model used under the DIB pilot will be used under the new program, which means that participating companies “would be required to change their banners to reference government monitoring.”

The DHS privacy office stated that users on participating company networks will see “an electronic login banner [stating] information and data on the network may be monitored or disclosed to third parties, and/or that the network users’ communications on the network are not private.”

It is not clear how the banner will be worded exactly, but a 2011 Department of Defense Office of General Counsel PowerPoint presentation obtained by EPIC reveals eight of the elements that should be part of the banner:

1. It expressly covers monitoring of data and communications in transit rather than just accessing data at rest.
2. It provides that information transiting or stored on the system may be disclosed for any purpose, including to the Government.
3. It states that monitoring will be for any purpose.
4. It states that monitoring may be done by the Company/Agency or any person or entity authorized by Company/Agency.
5. It explains to users that they have “no [reasonable] expectation of privacy” regarding communications or data transiting or stored on the system.
6. It clarifies that this consent covers personal use of the system (such as personal emails or websites, or use on breaks or after hours) as well as official or work-related use.
7. It is definitive about the fact of monitoring, rather than conditional or speculative.
8. It expressly obtains consent from the user and does not merely provide notification.

“EPIC staff attorney Amie Stepanovich says the banner the government proposed is so broad and vague that it would allow ISPs not only to monitor the content of all communication, including private correspondence, but also potentially hand over the monitoring activity itself to the government,” Threat Level reports.

Similarly troubling is that it would only be seen by employees of participating companies, meaning that outsiders who communicate with those employees would have no clue that their communication was under surveillance.

“One of the big issues is the very broad notice and consent that they’re requiring, which far outpaces the description of the program that we’ve been given so far of not only the extent of the DIB pilot program but also the extent of the program that expands this to all critical infrastructure,” Stepanovich said, according to Threat Level.

“The concern is that information and communications between employees will be sent to the government, and they’re preparing employees to consent to this,” she added.

Both the NSA and Justice Department declined to comment to CNET, but Sy Lee, a DHS spokesman, sent a statement to CNET saying:

DHS is committed to supporting the public’s privacy, civil rights, and civil liberties. Accordingly, the department has implemented strong privacy and civil rights and civil liberties standards into all its cybersecurity programs and initiatives from the outset, including the Enhanced Cybersecurity Services program. In order to protect privacy while safeguarding and securing cyberspace, DHS institutes layered privacy responsibilities throughout the department, embeds fair practice principles into cybersecurity programs and privacy compliance efforts, and fosters collaboration with cybersecurity partners.

However, even individuals in the Justice Department “expressed misgivings that the program would ‘run afoul of privacy laws forbidding government surveillance of private Internet traffic,’” according to EPIC.

Furthermore, the Department of Homeland Security has no problem lying to Congress about their privacy breaches. Why anyone should believe that they would be honest now isn’t quite clear.

While the NSA claims they “will not directly filter the traffic or receive the malicious code captured by Internet providers,” EPIC points out that it is unclear how they can detect malicious code and prevent its execution without actually “captur[ing]” it in violation of federal law.

Former Homeland Security official Paul Rosenzweig likened the NSA and Defense Department asking the Justice Department for 2511 letters to “the CIA asking the Justice Department for the so-called torture memos a decade ago,” according to CNET.

“If you think of it poorly, it’s a CYA function,” Rosenzweig said. “If you think well of it, it’s an effort to secure advance authorization for an action that may not be clearly legal.”

This perspective was reinforced by a Congressional Research Service report published last month.

The report states it is likely the case that the executive branch does not actually have the legal authority to authorize additional widespread monitoring of communications unless Congress rewrites the law to give that authority.

“Such an executive action would contravene current federal laws protecting electronic communications,” the non-partisan report states.

However, CISPA – which the House passed last week – would actually give formal authorization to the program without resorting to workarounds like 2511 letters.

Since CISPA simply overrides any and all privacy laws at the state and federal level, any program like this would be given the legal green light.

Even more troubling is that the internal documents show that in late 2011, NSA, DoD and DHS officials actively met with aides on the House Intelligence committee who actually drafted the legislation.

“The purpose of the meeting, one e-mail shows, was to brief committee aides on ‘cyber defense efforts,’” as CNET put it.

Ryan Gillis, a director in the DHS Office of Legislative Affairs, also sent an e-mail to Sen. Dianne Feinstein, a California Democrat and chairman of the Senate Intelligence Committee, discussing the pilot program during the same period.

It is hardly surprising that at least one of the same companies getting immunity under the 2511 letters has expressed support for CISPA, since both give network providers immunity from prosecution.

AT&T and CenturyLink are the only two providers publicly announcing their participation in the program thus far.

However, an unnamed government official cited by CNET said that other unnamed companies have signed a memorandum of agreement with DHS to join the program and are undergoing security certification.

“These agencies are clearly seeking authority to receive a large amount of information, including personal information, from private Internet networks,” Stepanovich said to CNET. “If this program was broadly deployed, it would raise serious questions about government cybersecurity practices.”

Rosenzweig points out that the expansion into the many sectors outlined in the executive order above could potentially even include the monitoring of meat packing plants.

Indeed, the language is broad enough to include just about anything at this point.

House “wouldn’t even allow debate” on CISPA amendment requiring warrant before database search


Madison Ruppert
Activist Post

The U.S. House of Representatives passed CISPA today with the majority of the major problems intact. When Rep. Alan Grayson proposed an amendment that would require the National Security Agency, FBI, Department of Homeland Security and others to obtain a warrant before searching a database, it was shot down without debate.

Grayson, a Florida Democrat, proposed a simple amendment only a sentence long that would require “a warrant obtained in accordance with the fourth amendment to the Constitution of the United States” when a government agency sought to search a database of private information obtained from e-mail and internet service providers for evidence of criminal activity.

Grayson complained earlier today on Twitter saying, “The Rules Committee wouldn’t even allow debate on requiring a warrant before a search.”

As CNET points out, “That’s a reference to a vote this week by the House Rules committee that rejected a series of privacy-protective amendments, meaning they could not be proposed and debated during today’s floor proceedings.”

Without that amendment, a wide range of agencies within the federal government can search the database without a warrant so long as the search is supposedly related to any crime involving protecting someone from “serious bodily harm.”

While some of the crimes are quite serious, like child pornography, kidnapping and “serious threats to the physical safety of minors,” the database can also be searched for “cybersecurity purposes” and for the “investigation and prosecution of cybersecurity crimes.” Obviously that leaves a lot of leeway.

According to Rep. Jared Polis, a Colorado Democrat and former Internet entrepreneur, the “serious bodily harm” standard is dangerous because it is ambiguous enough to allow federal agencies to “go on fishing expeditions for electronic evidence,” according to CNET.

“The government could use this information to investigate gun shows” and even football games, simply because there is a threat of serious bodily harm if an accident were to occur, Polis said.

“What do these things even have to do with cybersecurity? … From football to gun show organizing, you’re really far afield,” he said.

To make matters even worse, there is nothing under CISPA requiring the anonymization of records as incredibly sensitive and personal as health records or banking information before they are shared and searched by the government according to ZDNet.

Another amendment similarly shot down was proposed by Rep. Justin Amash, a Michigan Republican. This would have ensured the privacy policies and terms of use of companies remained both valid and legally enforceable in the future.

As CNET rightly notes, one of the most controversial aspects of CISPA is that it “overrules all existing federal and state laws by saying ‘notwithstanding any other provision of law,’ including privacy policies and wiretap laws, companies may share cybersecurity-related information ‘with any other entity, including the federal government.’”

While CISPA would not require companies to share the information, it is hard to think of a large corporation these days that wouldn’t play ball if pressured by the federal government.

The problems with CISPA are so glaring that 34 advocacy groups as diverse as the American Library Association, the Electronic Frontier Foundation, the American Civil Liberties Union, Reporters Without Borders and many more wrote a letter in opposition to the legislation.

“CISPA’s information sharing regime allows the transfer of vast amounts of data, including sensitive information like Internet records or the content of emails to any agency in the government including military and intelligence agencies like the National Security Agency or the Department of Defense Cyber Command,” the groups pointed out in their letter.

The ACLU opposes the bill because “there’s a disconnect here between what they say is going to happen and what the legislation says,” thanks to the amendments that were blocked, according to legislative counsel Michelle Richardson.

Unfortunately, this time around CISPA is not receiving the widespread opposition that legislation like the Stop Online Piracy Act (SOPA) received, opposition which eventually led to SOPA’s failure.

CNET chalks this up to the lack of alliances between ordinary Internet users, civil liberties groups and technology companies this time around.

While no broad-based coalition exists fighting CISPA, massive corporations have gathered together to support it.

“Companies including AT&T, Comcast, EMC, IBM, Intel, McAfee, Oracle, Time Warner Cable, and Verizon have instead signed on as CISPA supporters,” Declan McCullagh writes.

Some, including Alexis Ohanian, co-founder of Reddit, are calling on Internet companies to take a stand in opposing CISPA.

In a video posted online, Ohanian calls on Google, Facebook and Twitter to stand up for the privacy rights of the American people.

There is still time to fight CISPA in the Senate.

“We’re committed to taking this fight to the Senate and fighting to ensure no law which would be so detrimental to online privacy is passed on our watch,” said Rainey Reitman, EFF Activism Director.

Contacting your senators is incredibly easy. If you care about the principles of the Constitution and your privacy, you will be doing yourself and others a favor by taking a few moments to contact your senators to instruct them to uphold their oath of office by defending the Constitution and the rights of the American people.

This article first appeared at End the Lie.

BitCoin Down 50% In Massive Sell Off: Over $1 Billion Vaporized In a Few Hours


Just a few months ago, the total value of all Bitcoins, a popular encrypted digital currency, was about $140 million. The non-tangible exchange mechanism is used by people all over the world to purchase everything from traditional goods and services to illicit trade that may include drugs and stolen credit card numbers. The coins became a go-to digital store of wealth around the world after the meltdown of the Cypriot financial system, and were pushed as a ‘safe’ way to preserve wealth out of view of prying government eyes. All of the excitement surrounding Bitcoin has driven the price of a single unit to in excess of $250, giving the total Bitcoins in global circulation a market capitalization of over $2.5 billion in just a few months’ time.
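
Those two valuations are consistent with the price move if you assume roughly 11 million bitcoins in circulation at the time; that supply figure is my own assumption, since the article only quotes the price and market-cap endpoints.

```python
# Rough check of the market-cap figures above, assuming ~11 million BTC in circulation.

CIRCULATING_BTC = 11_000_000          # assumed supply in early 2013
price_then, price_peak = 13.0, 250.0  # roughly $13 a few months earlier vs. the $250+ peak

print(f"cap at ${price_then:.0f}:  ${CIRCULATING_BTC * price_then / 1e6:.0f} million")  # ~$143 million
print(f"cap at ${price_peak:.0f}: ${CIRCULATING_BTC * price_peak / 1e9:.2f} billion")   # ~$2.75 billion
print(f"price increase: {(price_peak / price_then - 1) * 100:.0f}%")  # ~1,800%, near the "2000%" Adams cites below
```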

Earlier this morning, Mike Adams of Natural News penned a warning to investors and those seeking privacy and wealth protection by utilizing the digitally encrypted BitCoin currency unit:

Bitcoin has become a casino. It is almost a perfect reflection of the tulip bulb mania of 1637 in these two ways: 1) Most people buying bitcoins have no use for bitcoins (just like tulip bulbs), and 2) The rapid increase in bitcoin valuations cannot be substantiated in any way that reflects reality.

In other words, there is no fundamental reason why bitcoins should be 2000% more valuable today than four months ago. Nothing has changed other than the craze / mania of people buying in.

When bitcoins were in the sub-$20 range, I was not concerned about any of this. I actually encouraged people to buy bitcoins and support the bitcoin movement. But alarm bells went off in my mind when it skyrocketed past $150 and headed to $200+ virtually overnight. These are not the signs of rational markets. These are warning signs of bad things yet to occur. (Via Infowars)

A few hours after Adams’ dire warning was posted, the crash he warned about has become a reality.

This morning, without warning, and moments after Bitcoin achieved its all time highs, the currency collapsed over 50%, essentially vaporizing upwards of one billion dollars in value.

This is what panic selling looks like – in real time:

(Chart courtesy of Bitcoinbullbear.com)

And given that there are no protective mechanisms for the alternative free market Bitcoin trade, the crash may not yet be over.

Will it stage an amazing recovery? Alas, for this particular bubble, there are no NYSE circuit breakers nor is there a Federal Reserve-mandated “plunge protection team.” And why should there be? The central banks hate all currency alternatives. Firehats: on, especially since the volume is still relatively lite. (Zero Hedge)

The momentum for Bitcoin has now turned to the downside, much like it did in previous crashes where the currency achieved new highs, and was promptly sold off by those who bought into the bubble early at rock-bottom prices.

While BitCoin may be a preferred method of keeping payments for services and products private through its crypto-mechanisms, it is still a non-tangible asset, and it requires brokers and the internet to function properly.

Touted by Forbes as a safe haven store of wealth and a “gold standard of the internet age,” Bitcoin drew tens of thousands of investors who bought into the hype.

Today they are paying the price.

During times of financial and economic stability BitCoin may function just fine as a suitable mechanism of exchange. But these are not ordinary times. Interesting, yes. Stable, no. And thus, exchanging one’s assets and turning them into digital Bitcoins may not be the best choice of asset protection during periods of financial, economic and political turmoil and uncertainty.

Only physical assets – the kind we can hold in our hand – can truly be called safe havens.

Food in your pantry that you can consume at anytime.

Skills and labor you can barter for other goods.

Precious metals, which have stood the test of time over thousands of years.

Land on which you can produce food and alternative power.

These are the assets that provide a realistic level of safety and security.

These are money when the system crashes and confidence in the paper ponzi schemes around the world is lost.

Bitcoin is fine for certain types of transactions. But having funds in Bitcoin is, obviously, no different than a deposit account at a bank which can go under or a stock market prone to manipulation.

Get physical. It’s the only way to ensure your assets will really be there when you need them.

http://www.shtfplan.com/headline-news/bitcrash-down-50-in-massive-sell-off-over-1-billion-vaporized-in-a-few-hours_04102013

Bitcoin crashes, losing nearly half of its value in six hours


(arstechnica.com) On Wednesday afternoon, the Bitcoin bubble appears to have burst. As of this writing, its current value is around $160—down from a high of $260. (It fell as low as $130 today.) There is no obvious explanation for why the digital currency has fallen so far and so fast, although a market correction after such a huge rise may be the most plausible explanation.
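
The percentages follow directly from the price points quoted in the article; a quick check:

```python
# Peak, intraday low and price at time of writing, as quoted in the article.
high, low, current = 260.0, 130.0, 160.0

print(f"peak-to-trough drop: {(high - low) / high:.0%}")           # 50%
print(f"still down from the peak: {(high - current) / high:.0%}")  # 38%
```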

Some redditors have taken solace in a comment thread entitled “Hold Spartans.”

“This is just the market venting some pressure after these huge gains,” wrote anotherblog. “To be honest I’m glad it’s happening now. If it recovers, it will demonstrate resilience in the market and give confidence to future buyers and current holders that they don’t need to panic sell, reduce the chances of a crash in the future.”

Coincidentally, the plunge came several hours after a reddit user by the name of “Bitcoinbillionaire” suddenly, spontaneously decided to give away around $12,000 (more than 63 BTC) worth of the digital currency. Bitcoinbillionaire rewarded 13 seemingly random redditors, then stopped the whirlwind spree after about eight hours. At the moment, no evidence links the currency’s plunge with this random reddit charity.

Bitcoinbillionaire took advantage of reddit’s Bitcointip mechanism, which allows users to send each other small amounts of cash (usually less than $5). The mysterious benefactor appears to have given away 20 BTC (now worth slightly less than $4,000) as his or her first gift to one Karelb. This gift happened under a comment titled: “I wish for the price to crash.” That comment now seems prophetic.

A look at the account transferring all this money shows that two hours before the giveaways began, Bitcoinbillionaire received 50 BTC (about $9,500) from another account without an IP address.

Business Insider reported that Bitcoinbillionaire has left hints that he or she was an “early adopter” and had forgotten he or she even had any bitcoins. Not much is known beyond that, as Bitcoinbillionaire vanished as suddenly as he or she appeared.

“You’ve made me change my mind about this whole thing,” Bitcoinbillionaire wrote. “I’m done.”

Don’t feel bad if you missed the action. Business Insider also notes that this pot of cash is now being “paid forward.”

Who wants a smart meter to track’n’tax your car? Hello, Israel


(theregister.co.uk) Israel is drafting a tender for smart meters to be mandated in every vehicle in the country, tracking drivers to allow for differential taxation, but only once the privacy issues have been resolved.

The plan is to vary vehicle tax based on usage, so drivers who don’t drive during peak times, or stay out of city centres, get discounted road tax, but the Ministry of Finance and the Ministry of Transport are adamant that any solution will have to protect the privacy of drivers who might not want every journey recorded and logged forever.

“Without a full solution to the privacy problem, we cannot even think about implementing the new tax method,” a source in the transportation department told local business site Globes. “We want a system which will not notify Big Brother about where a vehicle is located, but in which the device will make the calculations, and allow the car owner to delete data after use.”

It’s an alarmingly enlightened viewpoint, and not one that our Big-Data-Cloud-Analysis corporations would approve, but the rational approach doesn’t stop there - the idea isn’t just to tax users by the mile, as most systems would, but to reward them for reducing their existing mileage as demonstrated by the trial scheme which launched last month.

That scheme, called Going Green, will monitor 1,200 drivers over two years, and pay them up to 25 shekels (about a fiver) for every journey they don’t take. The first six months are used to work out “normal” driving, after which the volunteer can receive up to a maximum of around £1,000 over 18 months, calculated on the journeys they didn’t take, where they didn’t go, and the time at which they didn’t go there.

The formula is necessarily complicated, but laid out in full (in Hebrew) on the sign-up site. Globes reckons several hundred volunteers have already put their names down despite the privacy issues not yet being addressed, but the forthcoming tender will require a privacy-securing solution.
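
The article doesn’t reproduce that formula, so the sketch below is purely illustrative: a per-journey rebate against the six-month baseline, weighted toward peak-time and city-centre trips and capped at roughly the £1,000 figure mentioned above. Every rate, weighting and the shekel conversion in it is an assumption, not the ministry’s actual scheme.

```python
# Hypothetical sketch of a usage-based rebate like "Going Green".
# All rates, weightings and the cap below are assumptions for illustration only.

MAX_REWARD_PER_TRIP_NIS = 25   # "up to 25 shekels ... for every journey they don't take"
TOTAL_CAP_NIS = 5500           # roughly GBP 1,000 over 18 months (assumed conversion)


def reward(baseline_trips: int, actual_trips: int, peak_share: float) -> float:
    """Rebate for trips avoided relative to the six-month baseline.

    peak_share is the assumed fraction of avoided trips that would have been made
    at peak times or into city centres; those earn the full rate, the rest half.
    """
    avoided = max(baseline_trips - actual_trips, 0)
    per_trip = MAX_REWARD_PER_TRIP_NIS * (peak_share + 0.5 * (1 - peak_share))
    return min(avoided * per_trip, TOTAL_CAP_NIS)


print(reward(baseline_trips=600, actual_trips=450, peak_share=0.4))  # 2625.0 NIS
```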

The UK system of recording every numberplate which enters the city centre is much easier and has the added benefit of feeding an enormous database of our movements, and as long as you’ve nothing to hide then presumably you have nothing to fear. We’re told this is the way a congestion charge is run, so it will be interesting to see if the Israelis can come up with a better solution, and if such a thing would ever be acceptable to our own government.

Obama - Executive order on Cyber “Security” due to hacks. Yet, No proof of hacks. There goes our 1st amendment!


(www.sherriequestioningall.blogspot.com) Well, is it any wonder they want to wipe out every single right we have in the Bill of Rights?  We have had only two left: the 2nd Amendment (which they are working on) and the 1st Amendment (up to a point).

Obama is set to sign an Executive order on Wednesday for Cyber Security, due to all the hackings going on.

I have a question for everyone: have you ever seen any of the “top secret” documents from these hackings?  We heard about the Pentagon hacking last week.  Yet I never saw anything from it.  We heard about the Bush email hacks the other day.  The only thing released was a portrait that Bush Jr. painted of himself.  So, in other words… we are supposed to just take their word for it all.  Just like all the other events.  If they say it, “it must be true.”  Jeez… have they ever lied to us before?  The government and media wouldn’t do that now… would they?

Oh.. how convenient for the government that Aaron Swartz committed suicide a couple of weeks ago, considering he is the one who was instrumental in stopping earlier attempts at cyber security (control) of the internet.

With the “control” of the internet, we will not actually have our 1st amendment left.  They will begin taking down the sites they do not want.  They will stop the flow of truthful information and silence those questioning their actions.

Don’t think that won’t happen?  When the government takes control of anything…. has any good ever come from it?

We don’t know what is going to be in that Executive order, but I guess we will find out after it is signed.  OH… who was that person in 2008 who spoke about transparency and all bills/orders being on the net 2 days before being voted on or signed off on?  Oh… yeah, that Nobel Peace Prize winner who has murdered more innocent people with drones and begun more wars than any other President before him.

Portion from article:

The White House is poised to release a cybersecurity executive order on Wednesday, two people familiar with the matter told The Hill.
The highly anticipated directive from President Obama is expected to be released at a briefing Wednesday morning at the U.S. Department of Commerce, where senior administration officials will provide an update about cybersecurity policy.
The White House began crafting the executive order after Congress failed to pass cybersecurity legislation last year. Officials said the threat facing the United States was too great for the administration to ignore.

Seventy Years of Nuclear Fission, Thousands of Centuries of Nuclear Waste


(Truth-Out.org) On December 2, 1942, a small group of physicists under the direction of Enrico Fermi gathered on an old squash court beneath the stands of Stagg Field on the campus of the University of Chicago to make and witness history. Uranium pellets and graphite blocks had been stacked around cadmium-coated rods as part of an experiment crucial to the Manhattan Project - the program tasked with building an atom bomb for the Allied forces in World War II. The experiment was successful, and for 28 minutes, the scientists and dignitaries present witnessed the world’s first manmade, self-sustaining nuclear fission reaction. They called it an atomic pile - Chicago Pile 1 (CP-1), to be exact - but what Fermi and his team had actually done was build the world’s first nuclear reactor.

The Manhattan Project’s goal was a bomb, but soon after the end of the war, scientists, politicians, the military and private industry looked for ways to harness the power of the atom for civilian use, or, perhaps more to the point, for commercial profit. Fifteen years to the day after CP-1 achieved criticality, President Dwight Eisenhower threw a ceremonial switch to start the reactor at Shippingport, Pennsylvania, which was billed as the first full-scale nuclear power plant built expressly for civilian electrical generation.

Shippingport was, in reality, little more than a submarine engine on blocks, but the nuclear industry and its acolytes will say that it was the beginning of billions of kilowatts of power, promoted (without a hint of irony) as “clean, safe and too cheap to meter.” It was also, however, the beginning of what is now a weightier legacy: 72,000 tons of nuclear waste.

Atoms for Peace, Problems Forever

News of Fermi’s initial success was communicated by physicist Arthur Compton to the head of the National Defense Research Committee, James Conant, with artistically coded flair:

Compton: The Italian navigator has landed in the New World.

Conant: How were the natives?

Compton: Very friendly.

But soon after that initial success, CP-1 was disassembled and reassembled a short drive away, in Red Gate Woods. The optimism of the physicists notwithstanding, it was thought best to continue the experiments with better radiation shielding - and slightly removed from the center of a heavily populated campus. The move was perhaps the first necessitated by the uneasy relationship between fissile material and the health and safety of those around it, but if it was understood as a broader cautionary tale, no one let that get in the way of “progress.”

By the time the Shippingport reactor went critical, North America already had a nuclear waste problem. The detritus from manufacturing atomic weapons was poisoning surrounding communities at several sites around the continent (not that most civilians knew it at the time). Meltdowns at Chalk River in Canada and the Experimental Breeder Reactor in Idaho had required fevered cleanups, the former of which included the help of a young Navy officer named Jimmy Carter. And the dangers of errant radioisotopes were increasing with the acceleration of above-ground atomic weapons testing. But as President Eisenhower extolled “Atoms for Peace,” and the US Atomic Energy Commission promoted civilian nuclear power at home and abroad, a plan to deal with the spent fuel (as used nuclear fuel rods are termed) and other highly radioactive leftovers was not part of the program (beyond, of course, extracting some of the plutonium produced by the fission reaction for bomb production, and the promise that the waste generated by US-built reactors overseas could at some point be marked “return to sender” and repatriated to the United States for disposal).

Attempts at what was called reprocessing - the re-refining of used uranium into new reactor fuel - quickly proved expensive, inefficient and dangerous, and created as much radioactive waste as they hoped to reuse. Reprocessing also provided an obvious avenue for nuclear weapons proliferation because of the resulting production of plutonium. The threat of proliferation (made flesh by India’s test of an atomic bomb in 1974) led President Jimmy Carter to cancel the US reprocessing program in 1977. Attempts by the Department of Energy (DOE) to push mixed-oxide (MOX) fuel fabrication (combining uranium and plutonium) over the last dozen years have not produced any results, either, despite over $5 billion in government investments.

In fact, there was no official federal policy for the management of used but still highly radioactive nuclear fuel until passage of The Nuclear Waste Policy Act of 1982 (NWPA). And while that law acknowledged the problem of thousands of tons of spent fuel accumulating at US nuclear plants, it didn’t exactly solve it. Instead, the NWPA started a generation of political horse-trading, with goals and standards defined more by market exigencies than by science, that leaves America today with what amounts to over five dozen nominally temporary repositories for high-level radioactive waste - and no defined plan to change that situation anytime soon.

Lack of Permanent Spent Fuel Storage Looms Large

When a US Court of Appeals ruled in June that the Nuclear Regulatory Commission (NRC) acted improperly when it failed to consider all the risks of storing spent radioactive fuel onsite at the nation’s nuclear power facilities, it made specific reference to the lack of any real answers to the generations-old question of waste storage:

[The Nuclear Regulatory Commission] apparently has no long-term plan other than hoping for a geologic repository…. If the government continues to fail in its quest to establish one, then SNF (spent nuclear fuel) will seemingly be stored on site at nuclear plants on a permanent basis. The Commission can and must assess the potential environmental effects of such a failure.

The court concluded the current situation - in which spent fuel is stored across the country in what were supposed to be temporary configurations - “poses a dangerous long-term health and environmental risk.”

The decision also harshly criticized regulators for evaluating plant relicensing with the assumption that spent nuclear fuel would be moved to a central long-term waste repository.

A Mountain of Risks

The Nuclear Waste Policy Act set in motion an elaborate process that was supposed to give the US a number of possible waste sites, but, in the end, the only option seriously explored was the Yucca Mountain site in Nevada. After years of preliminary construction and tens of millions of dollars spent, Yucca was determined to be a bad choice for the waste. As I wrote in April of last year: “[Yucca Mountain’s] volcanic formation is more porous and less isolated than originally believed - there is evidence that water can seep in, there are seismic concerns, worries about the possibility of new volcanic activity, and a disturbing proximity to underground aquifers. In addition, Yucca mountain has deep spiritual significance for the Shoshone and Paiute peoples.”

Every major Nevada politician on both sides of the aisle has opposed the Yucca repository since its inception. Senate Majority Leader Harry Reid has worked most of his political life to block the facility. And with the previous NRC head, Gregory Jaczko (and now his replacement, Allison Macfarlane, as well), recommending against it, the Obama administration’s DOE moved to end the project.

Even if it were an active option, Yucca Mountain would still be many years and maybe as much as $100 million away from completion. And yet, the nuclear industry (through recipients of its largesse in Congress) has challenged the administration to spend any remaining money in a desperate attempt to keep alive the fantasy of a solution to their waste crisis.

Such fevered dreams, however, do not qualify as an actual plan, according to the courts.

The judges also chastised the NRC for its generic assessment of spent fuel pools, currently packed well beyond their projected capacity at nuclear plants across the United States. Rather than examine each facility and the potential risks specific to its particular storage situation, the NRC had only evaluated the safety risks of onsite storage by looking at a composite of past events. The court ruled that the NRC must appraise each plant individually and account for potential future dangers. Those dangers include leaks, loss of coolant, and failures in the cooling systems, any of which might result in contamination of surrounding areas, overheating and melting of stored rods, and the potential of burning radioactive fuel - risks heightened by the large amounts of fuel now densely packed in the storage pools and underscored by the ongoing disaster at Japan’s Fukushima Daiichi plant.

Indeed, plants were neither designed nor built to house nuclear waste long-term. The design life of most reactors in the United States was originally 40 years. Discussions of the spent fuel pools usually gave them a 60-year lifespan. That limit seemed to double almost magically as nuclear operators fought to postpone the expense of moving cooler fuel to dry casks and of the final decommissioning of retired reactors.

Everyone Out of the Pool

As disasters as far afield as the 2011 Tohoku earthquake and last October’s Hurricane Sandy have demonstrated, the storage of spent nuclear fuel in pools requires steady supplies of power and cool water. Any problem that prevents the active circulation of liquid through the spent fuel pools - be it a loss of electricity, the failure of a back-up pump, the clogging of a valve or a leak in the system - means the temperature in the pools will start to rise. If the cooling circuit is out long enough, the water in the pools will start to boil. If the water level dips (due to boiling or a leak) enough to expose hot fuel rods to the air, the metal cladding on the rods will start to burn, in turn heating the fuel even more, resulting in plumes of smoke carrying radioactive isotopes into the atmosphere.

And because these spent fuel pools are so full - containing as much as five times more fuel than they were originally designed to hold, and at densities that come close to those in reactor cores - they both heat stagnant water more quickly and reach volatile temperatures faster when exposed to air.
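A rough sense of the time scales involved can be had from a back-of-the-envelope heat balance. The sketch below (in Python) is illustrative only: the pool volume, decay heat and starting temperature are assumptions chosen to show the shape of the calculation, not figures for any actual plant.

```python
# Rough, illustrative estimate of how long a spent fuel pool takes to start
# boiling after cooling circulation is lost. Every parameter here is an
# assumption for the sake of the example, not data for a specific plant.

WATER_MASS_KG = 1.5e6      # roughly 1,500 cubic meters of water (assumed)
DECAY_HEAT_W = 4.0e6       # ~4 MW of decay heat from a densely packed pool (assumed)
SPECIFIC_HEAT = 4186.0     # J/(kg*K) for water
START_TEMP_C = 35.0        # assumed normal pool temperature
BOILING_C = 100.0

energy_to_boil = WATER_MASS_KG * SPECIFIC_HEAT * (BOILING_C - START_TEMP_C)  # joules
hours_to_boil = energy_to_boil / DECAY_HEAT_W / 3600.0

print(f"Roughly {hours_to_boil:.0f} hours until the pool water starts to boil")
# With these assumed numbers the answer is a little over a day; hotter, more
# densely packed fuel or a partial loss of water shortens that window sharply,
# which is the point the court and the NRC staff were wrestling with.
```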

After spent uranium has been in a pool for at least five years (considerably longer than most fuel is productive as an energy source inside the reactor), fuel rods are deemed cool enough to be moved to dry casks. Dry casks are sealed steel cylinders filled with spent fuel and inert gas, which are themselves encased in another layer of steel and concrete. These massive fuel “coffins” are then placed outside and spaced on concrete pads so that air can circulate and continue to disperse heat.

While the long-term safety of dry casks is still in question, the fact that they require no active cooling system gives them an advantage, in the eyes of many experts, over pool storage. As if to highlight that difference, spent fuel pools at Fukushima Daiichi have posed some of the greatest challenges since the March 2011 earthquake and tsunami, whereas, to date, no quake or flood-related problems have been reported with any of Japan’s dry casks. The disparity was so obvious that the NRC’s own staff review actually added a proposal to the post-Fukushima taskforce report, recommending that US plants take more fuel out of spent fuel pools and move it to dry casks. (A year and a half later, however, there is still no regulation - or even a draft - requiring such a move.)

But current dry cask storage poses its own set of problems. Moving fuel rods from pools to casks is slow and costly - about $1.5 million per cask, or roughly $7 billion to move all of the nation’s spent fuel (a process, it is estimated, that would take no less than five to ten years). That is expensive enough to have many nuclear plant operators lobbying overtime to avoid doing it.
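The figures in that paragraph also allow a quick arithmetic check on the scale of the transfer job; a minimal sketch, using only the numbers cited above (the implied per-cask capacity is a derived estimate, not an official figure):

```python
# Back-of-the-envelope check of the dry cask transfer figures cited above.
COST_PER_CASK = 1.5e6      # dollars per cask (from the article)
TOTAL_COST = 7.0e9         # dollars to move all of the nation's spent fuel (from the article)
SPENT_FUEL_TONS = 70_000   # approximate national inventory (from the article)

casks_needed = TOTAL_COST / COST_PER_CASK
tons_per_cask = SPENT_FUEL_TONS / casks_needed

print(f"About {casks_needed:,.0f} casks, holding roughly {tons_per_cask:.0f} tons of fuel each")
# Around 4,700 casks at about 15 tons apiece - consistent with the capacity
# of typical commercial dry casks, and a hint at why operators balk at the bill.
```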

Further, though not as seemingly vulnerable as fuel pools, dry casks are not impervious to natural disaster. In 2011, a moderate earthquake centered about 20 miles from the North Anna, Virginia, nuclear plant caused most of its vertical dry casks - each weighing 115 tons - to shift, some by more than four inches. The facility’s horizontal casks didn’t move, but some showed what was termed “cosmetic damage.”

Dry casks at Michigan’s Palisades plant sit on a pad atop a sand dune just 100 yards from Lake Michigan. An earthquake there could plunge the casks into the water. And the casks at Palisades are so poorly designed and maintained that submersion could result in water contacting the fuel, contaminating the lake and possibly triggering a nuclear chain reaction.

And though each cask contains far less fissile material than one spent fuel pool, casks are still considered possible targets for terrorism. A TOW anti-tank missile would breach even the best dry cask [PDF], and with 25 percent of the nation’s spent fuel now stored in hundreds of casks across the country, all above ground, the casks present a rich target environment.

Confidence Game

Two months after the Appeals Court found fault with the NRC’s imaginary waste mitigation scenario, the agency announced it would suspend the issuing of new reactor operating licenses, license renewals and construction licenses until the agency could craft a new plan for dealing with the nation’s growing spent nuclear fuel crisis. In drafting its new nuclear “Waste Confidence Decision” (NWCD) - the methodology used to assess the hazards of nuclear waste storage - the Commission said it would evaluate all possible options for resolving the issue.

At first, the NRC said this could include both generic and site-specific actions (remember, the court criticized the NRC’s generic appraisals of pool safety), but as the prescribed process now progresses, it appears any new rule will be designed to give the agency, and so, the industry, as much wiggle room as possible. At a public hearing in November 2012, and later at a pair of web conferences in early December, the regulator’s Waste Confidence Directorate (yes, that’s what it is called) outlined three scenarios [PDF] for any future rulemaking:

  • Storage until a repository becomes available at the middle of the century
  • Storage until a repository becomes available at the end of the century
  • Continued storage in the event a repository is not available.

And while, given the current state of affairs, the first option seems optimistic, the fact that their best scenario now projects a repository to be ready by about 2050 is a story in itself.

When the NWPA was signed into law by President Reagan early in 1983, it was expected the process it set in motion would present at least one (and preferably another) long-term repository by the late 1990s. But by the time the “Screw Nevada Bill” (as it is affectionately known in the Silver State) locked in Yucca Mountain as the only option for permanent nuclear waste storage, the projected opening was pushed back to 2007.

But Yucca encountered problems from its earliest days, so a mid-90s revision of the timeline postponed the official start, this time to 2010. By 2006, the DOE was pegging Yucca’s opening at 2017. And, when the NWPA was again revised in 2010 - after Yucca was deemed a non-option - it conveniently avoided setting a date for the opening of a national long-term waste repository altogether.

It was that 2010 revision that was thrown out by the courts in June.

“Interim Storage” and “Likely Reactors”

So, the waste panel now has three scenarios - but what are the underlying assumptions for those scenarios? Not, obviously, any particular site for a centralized, permanent home for the nation’s nuclear garbage - no new site has been chosen, and it can’t even be said there is an active process at work that will choose one.

There are the recommendations of a Blue Ribbon Commission (BRC) convened by the president after Yucca Mountain was off the table. Most notable there was a recommendation for interim waste storage, consolidated at a handful of locations across the country. But consolidated intermediate waste storage has its own difficulties, not the least of which is that no sites have yet been chosen for any such endeavor. (In fact, plans for the Skull Valley repository, thought to be the interim facility closest to approval, were abandoned by its sponsors just days before Christmas 2012.)

Just-retired Democratic senator from New Mexico Jeff Bingaman, the last chair of the Energy and Natural Resources Committee, tried to turn the BRC recommendations into law. When he introduced his bill in August, however, he had to do so without any cosponsors. Hearings on the Nuclear Waste Administration Act of 2012 were held in September, but the gavel came down on the 112th Congress without any further action.

In spite of the underdeveloped state of intermediate storage, however, when the waste confidence panel was questioned on the possibility, interim waste repositories seemed to emerge, almost on the fly, as an integral part of any revised waste policy rule.

“Will any of your scenarios include interim centralized above-ground storage?” we asked during the last public session. Paul Michalak, who heads the environmental impact statement branch of the Waste Confidence Directorate, first said temporary sites would be considered in the second and third options. Then, after a short pause, Mr. Michalak added [PDF, p40], “First one, too. All right. Right. That’s right. So we’re considering an interim consolidated storage facility [in] all three scenarios.”

The lack of certainty on any site or sites is, however, not the only fuzzy part of the picture. As mentioned earlier, the amount of high-level radioactive waste currently on hand in the United States and in need of a final resting place is upwards of 70,000 tons - already at the amount that was set as the initial top limit for the Yucca Mountain repository. Given that there are still over 100 domestic commercial nuclear reactors more or less in operation, producing something like an additional 2,000 tons of spent fuel every year, what happens to the Waste Confidence Directorate’s scenarios as the years and waste pile up? How much waste were regulators projecting they would have to deal with? How much spent fuel would a waste confidence decision assume the system could confidently handle?

There was initial confusion on what amount of waste - and at what point in time - was informing the process. Pressed for clarification on the last day of hearings, NRC officials finally posited that it was assumed there would be 150,000 metric tons of spent fuel - all deriving from the commercial reactor fleet - by 2050. By the end of the century, the NRC expects to face a mountain of waste weighing 270,000 metric tons [PDF, pp38-41] (though this figure was perplexingly termed both a “conservative number” and an “overestimate”).
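Those projections are easy to sanity-check against the figures already given in this article (roughly 70,000 tons on hand, about 2,000 tons added per year). The sketch below does just that; the assumption that annual output holds steady is, of course, the very question the NRC could not answer.

```python
# Simple projection of spent fuel accumulation using the figures cited above.
CURRENT_TONS = 70_000    # approximate inventory on hand (from the article)
TONS_PER_YEAR = 2_000    # approximate annual output of the reactor fleet (from the article)
BASE_YEAR = 2012

for year in (2050, 2100):
    projected = CURRENT_TONS + TONS_PER_YEAR * (year - BASE_YEAR)
    print(f"{year}: about {projected:,} metric tons")

# 2050: about 146,000 tons, close to the NRC's 150,000-ton figure.
# 2100: about 246,000 tons, noticeably below the NRC's 270,000 tons, which
# suggests the agency's "likely reactors" assumption includes new reactors
# coming online later in the century.
```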

How did the panel arrive at these numbers? Were they assuming all 104 (soon to be 103 - Wisconsin’s Kewaunee Power Station will shut down by mid-2013 for reasons its owner, Dominion Resources, says are based “purely on economics”) commercial reactors nominally in operation would continue to function for that entire timeframe - even though many are nearing the end of their design life and none are licensed to continue operation beyond the 2030s? Were they counting reactors like those at San Onofre, which have been offline for almost a year and are not expected to restart anytime soon? Or the troubled reactors at Ft. Calhoun in Nebraska and Florida’s Crystal River? Neither facility has been functional in recent years, and both have many hurdles to overcome if they are ever to produce power again. Were they factoring in the projected AP1000 reactors in the early stages of construction in Georgia, or the ones slated for South Carolina? Did the NRC expect more or fewer reactors generating waste over the course of the next 88 years?

The response: waste estimates include all existing facilities, plus “likely reactors” - but the NRC cannot say exactly how many reactors that is [PDF, p. 41].

Jamming It Through

Answers like those from the Waste Confidence Directorate do not inspire (pardon the expression) confidence for a country looking at a mountain of eternally toxic waste. Just what would the waste confidence decision (and the environmental impact survey that should result from it) actually cover? What would it mandate, and what would change as a result?

In past relicensing hearings - where the public could comment on proposed license extensions on plants already reaching the end of their 40-year design life - objections based on the mounting waste problem and already packed spent fuel pools were waved off by the NRC, which referenced the waste confidence decision as the basis of its rationale. Yet, when discussing the parameters of the process for the latest, court-ordered revision to the NWCD, Dr. Keith McConnell, director of the Waste Confidence Directorate, asserted that waste confidence was not connected to the site-specific licensed life of operations [PDF, p. 42], but only to a period defined as “post-licensed life storage” (which appears, if a chart in the directorate’s presentation [PDF, p. 12] is to be taken literally, to extend from 60 years after the initial creation of waste, to 120 years - at which point a phase labeled “disposal” begins). Issues of spent fuel pool and dry cask safety are the concerns of a specific plant’s relicensing process, said regulators in the latest hearings.

“It’s like dealing with the Mad Hatter,” commented Kevin Kamps, a radioactive waste specialist for industry watchdog Beyond Nuclear. “Jam yesterday, jam tomorrow, but never jam today.”

The edict originated with the White Queen in Lewis Carroll’s Through the Looking Glass, but it is all too appropriate - and no less maddening - when trying to motivate meaningful change at the NRC. The NRC has used the nuclear waste confidence decision in licensing inquiries, but in these latest scoping hearings, we are told the NWCD does not apply to on-site waste storage. The appeals court criticized the lack of site-specificity in the waste storage rules, but the directorate says they are now only working on a generic guideline. The court disapproved of the NRC’s continued relicensing of nuclear facilities based on the assumption of a long-term geologic repository that in reality did not exist - and the NRC said it was suspending licensing pending a new rule - but now regulators say they don’t anticipate the denial or even the delay of any reactor license application while they await the new waste confidence decision [PDF, pp. 49-50].

In fact, the NRC has continued the review process on pending applications, even though there is now no working NWCD - something deemed essential by the courts - against which to evaluate new licenses.

The period for public comment on the scope of the waste confidence decision ended January 2, and no more scoping hearings are planned. There will be other periods for civic involvement - during the environmental impact survey and rulemaking phases - but with each step, the areas open to input diminish. And the current schedule has the entire process greatly accelerated over previous revisions.

On January 3, a coalition of 24 grassroots environmental groups filed documents with the NRC [PDF] protesting “the ‘hurry up’ two-year timeframe” for this assessment, noting the time allotted for environmental review falls far short of the 2019 estimate set by the NRC’s own technical staff. The coalition observed that two years was also not enough time to integrate post-Fukushima recommendations, and that the NRC was narrowing the scope of the decision - ignoring specific instructions from the appeals court - in order to accelerate the drafting of a new waste-storage rule.

Speed might seem a valuable asset if the NRC were shepherding a Manhattan Project-style push for a solution to the ever-growing waste problem - the one that began with the original Manhattan Project - but that is not what is at work here. Instead, the NRC, under court order, is trying to set the rules for determining the risk of all that high-level radioactive waste if there is no new, feasible solution. The NRC is looking for a way to permit the continued operation of the US nuclear fleet - and so, the continued manufacture of nuclear waste - without an answer to the bigger, pressing question.

A Plan Called HOSS

While there is much to debate about what a true permanent solution to the nuclear waste problem might look like, there is little question that the status quo is unacceptable. Spent fuel pools were never intended to be used as they are now used - re-racked and densely packed with over a generation of fuel assemblies. Both the short- and long-term safety and security of the pools have now been questioned by the courts and laid bare by reality. Pools at numerous US facilities have leaked radioactive waste [PDF] into rivers, groundwater and soil. Sudden “drain downs” of water used for cooling have come perilously close to triggering major accidents in plants very near to major population centers. Recent hurricanes have knocked out power to cooling systems and flooded backup generators, and last fall’s superstorm came within inches of overwhelming the coolant intake structure at Oyster Creek in New Jersey.

The crisis at Japan’s Fukushima Daiichi facility was so dangerous, and remains dangerous to this day, in part because of the large amounts of spent fuel stored in pools next to the reactors but outside of containment - a design identical to that of 35 US nuclear reactors. A number of these GE Mark 1 Boiling Water Reactors - such as Oyster Creek and Vermont Yankee - have more spent fuel packed into their individual pools than all the waste in Fukushima Daiichi Units 1, 2, 3, and 4 combined.

Dry casks, the obvious next “less-bad” option for high-level radioactive waste, were also not supposed to be a permanent panacea. The design requirements and manufacturing regulations of casks - especially the earliest generations - do not guarantee their reliability anywhere near the 100 to 300 years now being casually tossed around by NRC officials. Some of the nation’s older dry casks (which in this case means 15 to 25 years) have already shown seal failures and structural wear [PDF]. Yet the government does not require direct monitoring of casks for excessive heat or radioactive leaks - only periodic “walkthroughs.”

Add in the reluctance of plant operators to spend money on dry cask transfer and the lack of any workable plan to quickly remove radioactive fuel from failed casks, and dry cask storage also appears to fail to attain any court-ordered level of confidence.

Interim plans, such as regional consolidated above-ground storage, remain just that - plans. There are no sites selected and no designs for such a facility up for public scrutiny. What is readily apparent, though, is that the frequent transport of nuclear waste increases the risk of nuclear accidents. There does not, as of now, exist a transfer container that is wholly leakproof, accident-proof and impervious to terrorist attack. Moving high-level radioactive waste across the nation’s highways, rail lines and waterways has raised fears of “Mobile Chernobyls” and “Floating Fukushimas.”

More troubling still, if past (and present) is prologue, is the tendency of options designed as “interim” to morph into a default “permanent.” Can the nation afford to kick the can once more, spending tens (if not hundreds) of millions of dollars on a “solution” that will only add a collection of new challenges to the existing roster of problems? What will the interim facilities become beyond the next problem, the next site for costly mountains of poorly stored, dangerous waste?

If there is an interim option favored by many nuclear experts, engineers and environmentalists [PDF], it is something called HOSS - Hardened On-Site Storage [PDF]. HOSS is a version of dry cask storage that is designed and manufactured to last longer, is better protected against leaks and better shielded from potential attacks. Proposals [PDF] involve steel, concrete and earthen barriers incorporating proper ventilation and direct monitoring for heat and radiation.

But not all reactor sites are good candidates for HOSS. Some are too close to rivers that regularly flood, some are vulnerable to the rising seas and increasingly severe storms brought on by climate change, and others are close to active geologic fault zones. For facilities where hardened on-site storage would be an option, nuclear operators will no doubt fight the requirements because of the increased costs above and beyond the price of standard dry cask storage, which most plant owners already try to avoid or delay.

The First Rule of Holes

In a wooded park just outside Chicago sits a dirt mound, near a bike path, which contains parts of the still-highly-radioactive remains of CP-1, the world’s first atomic pile. Seven decades after that nuclear fuel was first buried, many health experts would not recommend that spot [PDF] for a long, languorous picnic, nor would they recommend drinking from nearby water fountains. To look at it in terms Arthur Compton might favor, when it comes to the products of nuclear chain reactions, the natives are restless … and will remain so for millennia to come.

One can perhaps forgive those working in the pressure cooker of the Manhattan Project and in the middle of a world war for ignoring the forest for the trees - for not considering waste disposal while pursuing a self-sustaining nuclear chain reaction. Perhaps. But, as the burial mound in Red Gate Woods reminds us, ignoring a problem does not make it go away.

And if that small pile or the mountains of spent fuel precariously stored around the nation are not enough of a prompt, the roughly $960 million that the federal government has had to pay private nuclear operators should be. For every year that the DOE does not provide a permanent waste repository - or at least some option that takes the burden of storing spent nuclear fuel off the hands (and off the books) of power companies - the government is obligated to reimburse the industry for the costs of onsite waste storage. By 2020, it is estimated that $11 billion in public money will have been transferred into the pockets of private nuclear companies. By law, these payments cannot be drawn from the ratepayer-fed fund that is earmarked for a permanent geologic repository, and so, these liabilities must be paid out of the federal budget. Legal fees for defending the DOE against these claims will add another 20 to 30 percent to settlement costs.
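Putting the paragraph’s numbers together gives a sense of the total exposure; a minimal sketch using only the figures cited above:

```python
# Combine the liability figures cited above into a single rough total.
PROJECTED_BY_2020 = 11e9        # dollars projected to be paid to operators by 2020 (from the article)
LEGAL_FEE_RANGE = (0.20, 0.30)  # legal fees as a share of settlement costs (from the article)

for share in LEGAL_FEE_RANGE:
    total = PROJECTED_BY_2020 * (1 + share)
    print(f"With {share:.0%} legal fees, roughly ${total / 1e9:.1f} billion by 2020")

# That is on the order of $13-14 billion in public money, paid out of the
# federal budget rather than the ratepayer fund earmarked for a repository.
```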

The Federal appeals court, too, has sent a clear message that the buck needs to stop somewhere at some point - and that such a time and place should be both explicit and realistic. The nuclear waste confidence scoping process, however, is already giving the impression that the NRC’s next move will be generic and improbable.

The late, great Texas journalist Molly Ivins once remarked, “The first rule of holes” is, “when you’re in one, stop digging.” For high-level radioactive waste, that hole is now a mountain, over 70 years in the making and over 70,000 tons high. If the history of the atomic age is not evidence enough, the implications of the waste confidence decision process put the current crisis in stark relief. There is, right now, no good option for dealing with the nuclear detritus currently on hand, and there is not even a plan to develop a good option in the near future. Without a way to safely store the mountain of waste already created, under what rationale can a responsible government permit the manufacture of so much more?

The federal government spends billions to perpetuate and protect the nuclear industry - and plans to spend billions more to expand the number of commercial reactors. Dozens of facilities are already past, or are fast approaching, the end of their design lives, but the NRC has yet to reject any request for an operating license extension - and it is poised to approve many more, nuclear waste confidence decision notwithstanding. Plant operators continue to balk at any additional regulations that would require better waste management.

The lesson of the first 70 years of fission is that we cannot endure more of the same. The government - from the DOE to the NRC - should reorient its priorities from creating more nuclear waste to safely and securely containing what is now here. Money slated for subsidizing current reactors and building new ones would be better spent on shuttering aging plants, designing better storage options for their waste, modernizing the electrical grid and developing sustainable energy alternatives. (And reducing demand through conservation programs should always be part of the conversation.)

Enrico Fermi might not have foreseen (or cared about) the mountain of waste that began with his first atomic pile, but current scientists, regulators and elected officials have the benefit of hindsight. If the first rule of holes says stop digging, then the dictum here should be that when you’re trying to summit a mountain, you don’t keep shoveling more garbage on top.

Facebook UK loses 600,000 users in December



The UK ranks as the world’s sixth most active Facebook user base. Photograph: Linda Nylind for the Guardian

(Guardian) - The number of Facebook’s UK users dropped by 600,000 in December, according to data from social media monitoring firm SocialBakers.

 

Though the drop represents a typical seasonal dip in use over the Christmas period, the UK was the only one of Facebook’s 10 busiest territories to see such a fall, with user numbers dropping 1.86%.

 

On SocialBakers’ index the UK ranks as the world’s sixth most active Facebook user base, with more than 33 million unique users in December, although that figure duplicates users who access from multiple devices.

 

That user base would be equivalent to 53% market penetration last month, second only to the US with 54%. The US tops the list with more than 169 million unique users per month, followed by Brazil with 65 million and India with 63 million.

 

SocialBakers, a Czech-based startup backed by Index and Earlybird venture funding, has gained significant traction among big business and brands keen to assess their impact on social networks.

 

Data from comScore shows Facebook’s UK monthly active users plateauing at just over 31 million between September and November, falling to 31,456,000 at the end of the three-month period.

 

The data chimes with speculation that Facebook is reaching saturation point among the web user population in its core markets, and that continued growth is increasingly dependent on the developing world.

Panasonic’s New TV Uses Facial Recognition To Identify You


(NYTimes) - Panasonic’s new product introductions at the Consumer Electronics Show in Las Vegas touch on what is becoming a common theme - making your TV do the work of finding shows you want to watch.

The company announced a feature called “My Home Screen” that will show a viewer customized suggestions of TV shows, streaming shows and Internet content, all on one screen. The idea is to put all of the content in one place so a viewer does not have to search separately for TV shows and video on demand, for instance. Each family member can have a personalized screen, and will not have to sign in — the higher-end Panasonic TVs will have a built-in camera that will use face recognition to determine whose preferences to display.
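In outline, the feature amounts to mapping a recognized face to a stored set of preferences. Here is a minimal, hypothetical sketch of that logic; the recognizer, the household names and the preference rows are all invented for illustration, since Panasonic has not published how “My Home Screen” works internally.

```python
# Hypothetical sketch of the profile-switching idea described above: a camera
# frame is reduced to an identity guess, matched against enrolled household
# members, and the matching member's home screen preferences are loaded.
# Names, rows and the recognizer callback are invented for illustration.

HOUSEHOLD = {
    "alice": {"rows": ["Recorded dramas", "Streaming picks", "News apps"]},
    "bob":   {"rows": ["Live sports", "YouTube subscriptions", "Weather"]},
}

def pick_home_screen(recognize_face, default_rows):
    """Return the home-screen rows for whoever is sitting in front of the TV."""
    viewer = recognize_face()          # e.g. "alice", "bob", or None if unsure
    profile = HOUSEHOLD.get(viewer)
    return profile["rows"] if profile else default_rows

# Example: if the camera recognizes "bob", his rows are shown; otherwise a
# generic screen appears and no one is forced to sign in.
print(pick_home_screen(lambda: "bob", default_rows=["Top picks", "Inputs"]))
```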

 

Panasonic is not the only company taking this approach. Samsung is introducing a similar home page that also incorporates social networks.

The Panasonic TVs will also have a simplified way of sharing content from mobile phones and tablets, which can be sent to the screen with a swipe. On the TV screen the images can be edited with a special pen, so you could touch up the colors and write a message, then save the changes on the mobile device the images came from.

The TVs are getting other computerlike features as well, including apps that will allow people to search and make purchases from the Home Shopping Network from the TV, and closer integration of YouTube.

Also on the way is a voice recognition program that will let you speak commands into your remote rather than tapping buttons.

In all, Panasonic will introduce 32 TVs, including 16 plasma and 16 LCD models.

In addition to new Blu-ray players, kitchen appliances, cameras, headphones, and phone and tablet audio docks, Panasonic will introduce two streaming video players, which will work much like a Roku, but use Panasonic-style menus and features.

Porn Troll, accused of ID theft, says defense lawyer made up client


 

(ArsTechnica) - Prenda Law, the ethically challenged law firm whose antics have faced growing scrutiny in recent months, just keeps digging its hole deeper. The firm is facing charges that it named a Minnesota man as the CEO of two litigious shell companies without the man’s knowledge or permission. A California judge, Otis Wright, demanded more information about these allegations, and Prenda responded by seeking to boot him from the courtroom. Prenda claims Wright is too biased against copyright trolls to give the firm a fair hearing.

The identity theft allegations were brought to the judge’s attention by Morgan Pietz, who represents one of the “John Doe” defendants Prenda is currently suing. Or at least Pietz allegedly represents a John Doe. In a Monday court filing first spotted by the “Fight Copyright Trolls” blog, Prenda suggested Pietz shouldn’t be allowed to file a brief opposing the dismissal of Judge Wright because Pietz hasn’t proved that he actually represents a John Doe in the case.

“Mr. Pietz could very well be intervening in all of these cases for his own ends, with no real client that he is defending,” writes Prenda’s Brett Gibbs. “Mr. Pietz should have to submit evidence that he is, in fact, representing the actual individual he claims to represent, and not merely inserting himself into cases on the pretense of representing that individual.”

Gibbs continues: “Mr. Pietz has demonstrated repeated hostility toward Plaintiff and toward the undersigned, and, as such, would have sufficient motive to interfere with Plaintiff’s cases without the formality of actually having a client involved in the instant litigation.”

Of course, as the defense attorney, it’s Pietz’s job to be “hostile” to the plaintiffs. And it’s pretty rich for Prenda to demand that the defense attorney first prove that he is representing a real defendant. The firm, after all, is facing accusations that it is suing on behalf of imaginary plaintiffs.

Rapid DNA: Coming Soon to a Police Department or Immigration Office Near You


(EFF) - In the amount of time it takes to get lunch, the government can now collect your DNA and extract a profile that identifies you and your family members.

Rapid DNA Analyzers—machines with the ability to process DNA in 90 minutes or less—are an operational reality and are being marketed to the federal government and state and local law enforcement agencies around the country. These machines, each about the size of a laser printer, are designed to be used in the field by non-scientists, and—if you believe the hype from manufacturers like IntegenX and NetBio—will soon “revolutionize the use of DNA by making it a routine identification and investigational tool.”

From documents we received recently from US Citizenship and Immigration Services (USCIS) and DHS’s Science & Technology division, we’ve learned that the two agencies are working with outside vendors NetBio, Lockheed Martin and IntegenX and have “earmarked substantial funds” to develop a Rapid DNA analyzer that can verify familial relationships for refugee and asylum applications.

In the refugee context—where people are often stranded in camps far from their homes with little access to the documentation needed to prove they should be granted asylum in the US—DNA identification could be useful for both the federal government and the asylum seeker.

However, DNA samples contain such sensitive, private and personal information that their indefinite storage and unlimited sharing create privacy risks far worse than other types of data. The United Nations High Commissioner for Refugees (UNHCR) stated in a 2008 Note titled DNA Testing to Establish Family Relationships in the Refugee Context that DNA testing “can have serious implications for the right to privacy and family unity” and should be used only as a “last resort.” The UNHCR also stated that, if DNA is collected, it “should not be used for any other purpose (for instance medical tests or criminal investigations) than the verification of family relationships” and that DNA associated with the test “should normally be destroyed once a decision has been made.”

It seems USCIS is not heeding the UNHCR’s recommendations; the documents show that USCIS wants to use Rapid DNA analysis for much broader purposes than just verifying refugee applications. The agency notes that DNA should be collected from all immigration applicants—possibly even infants—and then stored in the FBI’s criminal DNA database. The agency also supports sharing immigrant DNA with “local, state, tribal, international, and other federal partners” including the Department of Defense and Interpol on the off-chance the refugee or asylum seeker could be a criminal or terrorist or could commit a crime or act of terrorism in the future. This flow chart shows USCIS’s ideal DNA collection and sharing process.

USCIS is not alone in wanting to get the most out of DNA collection. Another document we received shows that the intelligence community and the military are interested in DNA analysis to reveal ethnicity, health status, age, and other factors. And while Rapid DNA analyzers are not currently set up to extract enough data to reveal this information, IntegenX representatives at the Biometrics Consortium Conference this past September said that setting up the machines to extract additional loci would not be difficult.

Some federal agencies interested in Rapid DNA may not be able to implement it on a wide scale for some time. Currently USCIS “does not have the authority to require DNA testing, even when fraud is highly suspected.” For that to happen, the agency would have to update 8 C.F.R. 204.2(d)(vi), which it has discussed doing but hasn’t yet done. And although the FBI is also very interested in Rapid DNA analyzers, legal rules prevent the Bureau from using the machines to process any DNA that will go into its CODIS (Combined DNA Index System) database.

This hasn’t stopped Rapid DNA manufacturers from aggressively marketing their products to state and local law enforcement agencies across the country. IntegenX and Lockheed Martin are both pushing local governments (pdf p.3) to create their own local DNA databases (p. 17, pdf here) instead of relying on CODIS. This has pluses and minuses - it means some chunk of the DNA collected by state or local cops may not end up in the FBI’s massive DNA database and become subject to repeated nationwide searching. However, it also means that cops may not follow the stringent DNA handling procedures currently required by the FBI and that, without oversight, collection procedures could become based on little or no real suspicion of criminal activity.

Whether the technology itself is accurate and appropriate to use for immigration populations may also be an issue. According to the documents, scientists at the National Institute of Standards and Technology are uncertain whether the “Likelihood Ratios” currently used by accredited labs would be applicable “to an immigration population, since the largest reference groups, whose characteristics feed into the calculations of the ratios, are American Caucasians and Hispanics.” DHS’s own Science & Technology Division noted at a January 4, 2011 Working Group meeting that it was concerned “that prototype equipment may not provide totally reliable results.” Science & Technology staff stated they could not “yet predict how accurate the non-match findings will be, since the error rate for the machines remains unknown.” This means that people could be excluded from refugee programs just because the machine determined - inaccurately - that their DNA did not match their family member’s DNA.
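Those “Likelihood Ratios” are, in essence, the ratio of how probable the observed DNA profiles are if the claimed family relationship is real versus if the people are unrelated, and the denominator depends directly on allele frequencies drawn from a reference population - which is exactly why the choice of reference group matters. Below is a minimal, hedged sketch of the simplest textbook case (the allele the child must have inherited from the tested parent is known, and the alleged parent is heterozygous for it); the allele frequencies are made up, and real casework formulas also account for homozygosity, mutation and population substructure.

```python
# Illustrative sketch only: the single-locus likelihood ratio for a claimed
# parent-child relationship in the simplest textbook case, where the allele
# the child must have inherited from the tested parent is "a" and the alleged
# parent is heterozygous for "a":
#
#     LR = P(parent transmits a | true parent) / P(random person transmits a)
#        = 0.5 / freq(a)
#
# Real kinship software also handles homozygous parents, mutation rates and
# population substructure; this sketch exists only to show why the reference
# allele frequencies matter.

def parentage_lr(shared_allele_freqs):
    """Multiply single-locus ratios across independent loci."""
    lr = 1.0
    for freq in shared_allele_freqs:
        lr *= 0.5 / freq
    return lr

# Hypothetical frequencies of the shared allele at five loci, taken from some
# reference population. Swap in frequencies from a different population and
# the resulting ratio (and possibly the verdict) changes.
print(parentage_lr([0.10, 0.08, 0.15, 0.05, 0.12]))   # roughly 4,300 : 1
```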

DHS and USCIS acknowledge that “DNA collection may create controversy.” One USCIS employee advocated for “DHS, with the help of expert public relation professionals,” to “launch a social conditioning campaign” to “dispel the myths and promote the benefits of DNA technology.” Another document feared that “If DHS fails to provide an adequate response to [inquiries about its Rapid DNA Test Program] quickly, civil rights/civil liberties organizations may attempt to shut down the test program.”

However, the real issues with expanded DNA collection—and the issues these documents don’t answer—are whether DNA collection is really necessary to solve the challenges inherent in proving refugee entitlement to benefits; what standards and laws will govern expanded federal, state and local DNA collection and subsequent searches; how DNA will be collected, stored and secured; who will have access to it after it’s collected; and what processes are in place to destroy the DNA sample and delete data from whatever database it’s stored in after it’s served the limited purpose for which it was originally collected. Without answers to these questions, no amount of “social conditioning” can convince those concerned about privacy and civil liberties that expanded DNA collection is a good idea.

California teen girls charged with drugging parents to evade Internet curfew


(ArsTechnica) - Two California teenagers were arrested on New Year’s Eve after allegedly spiking the milkshakes of one girl’s parents with sleeping medication. The girls did this, the local police said, because the girl felt her parents’ Internet curfew was too strict. The parents apparently restricted access to the family’s wireless Internet connection after 10pm.

“The unsuspecting parents consumed only about a quarter of their shakes thinking that they tasted very odd,” the police in Rocklin, California (22 miles northeast of the state capital, Sacramento) reported.

“However, they consumed enough of the medicine for it to take effect within an hour and fell asleep. The parents did not awake until the following morning and did not remember what had occurred.”

Police told the Sacramento Bee that after waking once during the night with headaches and grogginess that persisted until morning, the adults went to the police to get a $5 drug test kit.

“Many parents buy them and have their kids’ urine tested,” Lt. Lon Milka, a Rocklin police spokesperson, told the paper. When the parents found out they had been drugged, they alerted the police, who promptly arrested the teens on charges of conspiracy and willfully mingling a pharmaceutical with food.

The names of the 15- and 16-year-old girls—who were booked in Placer County Juvenile Hall on December 31, 2012—are being withheld as they are minors.

“The girls wanted to use the Internet, and they’d go to whatever means they had to,” Milka added. “If they were adults, they could be facing prison time.”

Divers Could Become Real-Life Aquamen if This Pentagon Project Works


(Wired) - Even casual divers know that diving too deep, or surfacing too quickly, can cause a host of complications from sickness to seizures and even sudden death. Now the Pentagon’s scientists want to build gear that can turn commandos into Aquaman, allowing them to plunge into the deeps without having to worry as much about getting ill. (Orange and green tights sold separately.)

According to a list of research proposals from the U.S. military’s blue-sky researchers at Darpa, the agency is seeking “integrated microsystems” to detect and control “warfighter physiology for military diver operations.” Essentially it comes down to hooking divers up to sensors that can read both their bio-physical signs and the presence of gases like nitric oxide, which help prevent decompression sickness, commonly known as “the bends.” If those levels dip too low, the Darpa devices will send small amounts of the gases into divers’ lungs to help keep them swimming.

The agency doesn’t specify what exactly the machine will look like, as it’s still in the research stage, but the plan is to make it portable enough for a diver to carry, of course. Darpa also wants the gear for bomb-disposal units and “expanded special operations.”

 

For an understandable reason. Decompression sickness can be extremely painful, and potentially lethal to divers in both the civilian world and the military. When underwater, a diver breathing compressed air out of a tank absorbs some of that gas into fatty body tissues instead of breathing it all out, which is normally safe. But ascending to the surface too fast after a deep dive can cause those gases to form into bubbles inside the body - imagine yourself as the equivalent of a soda bottle, shaken really fast. That causes the body’s nervous system to go haywire and the joints to freeze up as if they were paralyzed. And that’s in addition to oxygen toxicity, nitrogen narcosis and a nasty problem called high-pressure nervous syndrome. None of these things are very pleasant, let alone for those who make a career of deactivating underwater mines.

To avoid these problems, Navy divers are trained in “breathing static gas mixtures at prescribed pressures and durations,” according to the Darpa solicitation, as well as in the practical precautions divers normally take. But to go further, Darpa’s plan is to use sensors to read “pressure-related physiologic conditions” and provide “constant physiological feedback.”

Then, the system will administer small amounts of nitric oxide into the diver’s lungs, which may reduce the bubbles that cause the bends. To clear up any confusion, nitric oxide (which helps our cells communicate with each other) is a different chemical from nitrous oxide, which is popularly known as a dental anesthetic. Darpa has also experimented with nitric oxide to see if it can prevent hypoxia in aircraft pilots.
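Described in control terms, the Darpa concept is a simple sense-and-dose feedback loop: read the diver’s physiology, and if a reading drifts out of the safe band, meter in a small amount of gas. The sketch below is purely hypothetical; the thresholds, units and sensor/valve interfaces are invented, and nothing here reflects the actual, unpublished hardware design.

```python
import time

# Hypothetical feedback loop matching the description above: poll a sensor
# reading tied to nitric oxide levels and, whenever it dips below an assumed
# safe limit, briefly open a dosing valve. All values and interfaces are
# invented for illustration only.

NO_LOWER_BOUND = 0.8      # assumed lower limit for the nitric oxide reading
DOSE_SECONDS = 2.0        # assumed length of one small metered release

def control_loop(read_no_level, open_dose_valve, interval_s=5.0):
    """Poll the sensor and dose whenever the reading falls below the band."""
    while True:
        level = read_no_level()            # sensor callback supplied by the rig
        if level < NO_LOWER_BOUND:
            open_dose_valve(DOSE_SECONDS)  # brief, bounded release of gas
        time.sleep(interval_s)
```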

Darpa also wants the gear to include a tiny gas chromatograph, which is used to analyze the gases, and another tool called CMUTs, or “capacitive micro-machined ultrasonic transducer arrays.” These are basically the handheld ultrasound probes doctors use to monitor body organs, but Darpa hopes the CMUTs can detect when bubbles form inside the body.

Finally, the agency wants the system to be built tough, and protect a diver during an “extreme combat dive profile.” This means the gear will have to work with a diver while jumping out of an airplane at six miles up, free-falling to the ocean before deploying a parachute, and diving down to 200 feet below the surface. Once the diver is underwater, they’ll need to be able to stay down for at least two hours, then surface, and dive again, although at a shallower depth and for shorter periods of time. Not only that, but the system will have to protect the diver after he or she is picked up in an “unpressurized aircraft” like a helicopter. The reason that’s important? Taking to the air after diving can lead to decompression sickness even if you were safe coming out of the water, since the diver’s body is now reacting to an environment with plunging air pressure.

But there are also some civilian applications, and Darpa wants the gear to work with “exploration and extraction of undersea oil, gas, and minerals.” So super-powered oil divers searching for resources - in addition to bomb-disposal experts and special operations troops? Alright then. But it’s not certain whether Aquaman would approve, being an environmentalist and all.

Biometrics, Immigration and How the US and Canada Collect Data on Citizens


Activist Post

The Immigration Sharing Treaty, an integral part of the Perimeter Security and Economic Competitiveness Action Plan (PSECAP), was signed by the US and Canada last week. David Jacobson, US Ambassador to Canada, said:

This important agreement is the culmination of ten years of effort to advance the security of the United States and Canada, and to ensure the integrity of our immigration and visa systems. It reflects the commitment of President Obama and Prime Minister Harper to the Beyond the Border process, which will enhance North American security while facilitating the efficient movement of safe goods and well-intentioned travelers.

In 2011, Obama and Canadian Prime Minister Stephen Harper signed the PSECAP, which allowed for the sharing of information on both Canadian and American citizens in order to manage immigration, improve border efficiency and security, and provide a networked database to identify foreign nationals as well as stop illegal crossings of the border.

This includes biometric technologies to be used beginning in 2014.

Biometric border crossing cards (BCCs) have been used to identify Mexican citizens making short visits since 1997, with the approval of Congress and in conjunction with the US State Department, which employed DynCorp (now owned by CSC).

Advancements in BCCs have led to laser visas, which are “machine-readable, credit-card-sized documents with digitally encoded biometric data, including the bearer’s photograph and fingerprint.”

Those in the program were fingerprinted and photographed, with their information entered into biometric databases with electronic verification of authenticity. Files were reviewed by the State Department. Once approved, the Bureau of Citizenship and Immigration Services (CIS) and the Department of Homeland Security (DHS) issued the individuals new laser visas.

The US government defends the use of biometric technologies at border crossings as a quick and easy way to identify travelers. However, the price of entering the US is now paid in private information about each individual who sets foot in the country. This gives the US the ability to collect vast amounts of data about each person and to accurately distinguish characteristics such as:

  • Height
  • Weight
  • Gender
  • Nationality
  • Fingerprint
  • Disability

The Electronic System for Travel Authorization (ESTA), an agreed-upon technology to be used under the PSECAP, was outlined in the Beyond the Border Declaration (BBD), which articulates the relationship between the US and Canada to address threats to their nations through secure borders as well as immigration, goods and services that travel through the two countries.

ESTA, an extension of the DHS through US Customs and Border Protection (CBP), oversees all applications for international travelers who enter the US. Its approval of passage is the deciding factor for entrance into America.

The BBD described the relationship between the US and Canada as having “the purpose of interweaving the two nations to increase the resiliency of our networks, enhance public-private partnerships, and build a culture of shared responsibility,” according to Janet Napolitano, Secretary of DHS.

In November, both the US and Canadian governments revealed that they will combine efforts against cyber-attacks with the creation of an action plan between the DHS and Public Safety Canada (PSC) to improve digital infrastructure.

In Washington, DC, and Ottawa, Canada, there will be a collaboration of cyber security operation centers, as well as shared information and the establishment of guidelines for private sector corporations. Added to this endeavor is a governmental alliance on propaganda methods to convince the citizens of both nations that cyber security must become an over-reaching control exercised by the two governments.

Apple has filed a patent with the US Patent and Trademark Office for a facial recognition system that “analyzes the characteristics of an image’s subject and uses this data to create a ‘faceprint,’ to match with other photos to establish a person’s identity.”

According to the patent description:

In order to automatically recognise a person’s face that is detected in a digital image, facial detection/recognition software generates a set of features or a feature vector (referred to as a ‘faceprint’) that indicate characteristics of the person’s face. The generated faceprint is then compared to other faceprints to determine whether the generated faceprint matches (or is similar enough to) one or more of the other faceprints. If so, then the facial detection/recognition software determines that the person corresponding to the generated faceprint is likely to be the same person that corresponds to the ‘matched’ faceprint(s).
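Stripped of the patent language, a “faceprint” is a numeric feature vector and a “match” is a similarity comparison against a threshold. The toy sketch below shows only that general idea; the vectors, distance metric and threshold are invented for illustration and are not Apple’s actual method.

```python
# Toy illustration of feature-vector face matching as described in the patent
# text above: each face is reduced to a vector of numbers (a "faceprint"), and
# two faceprints "match" when they are close enough. The metric, threshold and
# example vectors are invented for illustration only.
import math

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_same_person(faceprint_a, faceprint_b, threshold=0.6):
    """Declare a match when the prints are closer than the threshold."""
    return distance(faceprint_a, faceprint_b) < threshold

# Hypothetical faceprints extracted from two photos.
photo_1 = [0.12, 0.80, 0.33, 0.45]
photo_2 = [0.15, 0.78, 0.30, 0.47]
print(is_same_person(photo_1, photo_2))   # True: the vectors are close
```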

The federal government has released, on a website, the information about its use of biometric technologies that it wants the general public to know.

As far back as 2008, former President George W. Bush signed National Security Presidential Directive (NSPD)-59 / Homeland Security Presidential Directive (HSPD)-24, “Biometrics for Identification and Screening to Enhance National Security”. This NSPD explained the “framework to ensure Federal departments and agencies use compatible methods and procedures in the collection, storage, use, analysis, and sharing of biometric and associated biographic and contextual information of individuals in a lawful and appropriate manner, while respecting privacy and other legal rights under United States law.”

Biometrics, Immigration & How the US & Canada Collect Data on Citizens


 

(Occupy Corporatism) - The Immigration Sharing Treaty, an integral part of the Perimeter Security and Economic Competitiveness Action Plan (PSECAP), was signed by the US and Canada last week. David Jacobson, US Ambassador to Canada, said: "This important agreement is the culmination of ten years of effort to advance the security of the United States and Canada, and to ensure the integrity of our immigration and visa systems. It reflects the commitment of President Obama and Prime Minister Harper to the Beyond the Border process, which will enhance North American security while facilitating the efficient movement of safe goods and well-intentioned travelers."

In 2011, Obama and Canadian Prime Minister Stephen Harper signed the PSECAP, which allowed for the sharing of information on both Canadian and American citizens in order to streamline immigration, improve border efficiency and security, and provide a networked database to identify foreign nationals as well as stop illegal crossings at the border.

This includes biometric technologies to be used beginning in 2014.

Biometric border crossing cards (BCCs) have been used to identify Mexican citizens making short visits since 1997, with the approval of Congress and in conjunction with the US State Department, which employed DynCorp, a company now owned by CSC.

Advancements in BCCs have led to laser visas, which are "machine-readable, credit-card-sized documents with digitally encoded biometric data, including the bearer's photograph and fingerprint."

Those in the program were fingerprinted and photographed, and their information was entered into biometric databases with electronic verification of authenticity. Files were reviewed by the State Department. Once approved, the Bureau of Citizenship and Immigration Services (CIS) and the Department of Homeland Security (DHS) issued the individuals new laser visas.

The US government defends the use of biometric technologies at border crossings as a quick and easy way to identify travelers. However, the price of entering the US is now paid in private information about each individual who sets foot in the country. This gives the US the ability to collect vast amounts of data about each person, accurately distinguishing characteristics such as:

  • Height
  • Weight
  • Gender
  • Nationality
  • Fingerprint
  • Disability

The Electronic System for Travel Authorization (ESTA), an agreed-upon technology to be used under the PSECAP, was outlined in the Beyond the Border Declaration (BBD), which articulates the relationship between the US and Canada in addressing threats to their nations through secure borders, as well as managing the immigration, goods, and services that move between the two countries.

ESTA, an extension of the DHS operated through US Customs and Border Protection (CBP), oversees all applications from international travelers entering the US. CBP's approval of passage is the deciding factor for entrance into America.

Stated in the BBD is "the purpose of interweaving the two nations to increase the resiliency of our networks, enhance public-private partnerships, and build a culture of shared responsibility," according to Janet Napolitano, Secretary of DHS.

In November, both the US and Canadian governments revealed that they will combine efforts against cyber-attacks with the creation of an action plan between the DHS and Public Safety Canada (PSC) to improve digital infrastructure.

In Washington, DC and Ottawa, Canada there will be a collaboration of cyber security operation centers, as well as shared information and the establishment of guidelines for private sector corporations. Added to this endeavor is a governmental alliance on propaganda methods to convince the citizens of both nations that cyber security must become an over-reaching control exercised by the two governments.

Apple has filed a patent with the US Patent and Trademark Office for facial recognition systems that "analyzes the characteristics of an image's subject and uses this data to create a 'faceprint,' to match with other photos to establish a person's identity."

According to the patent description: "In order to automatically recognise a person's face that is detected in a digital image, facial detection/recognition software generates a set of features or a feature vector (referred to as a 'faceprint') that indicate characteristics of the person's face. The generated faceprint is then compared to other faceprints to determine whether the generated faceprint matches (or is similar enough to) one or more of the other faceprints. If so, then the facial detection/recognition software determines that the person corresponding to the generated faceprint is likely to be the same person that corresponds to the 'matched' faceprint(s)."
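To make the patent's description concrete, here is a minimal sketch of faceprint matching treated as a feature-vector comparison with a similarity threshold. The vector values, threshold, and function names are illustrative assumptions for demonstration only, not Apple's actual implementation.

```python
# Illustrative sketch of faceprint matching as described in the patent excerpt:
# a "faceprint" is a feature vector; two faceprints "match" if they are similar
# enough. The values, threshold, and names below are assumptions, not Apple's method.
import math

def euclidean_distance(a, b):
    """Distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_matches(new_faceprint, known_faceprints, threshold=0.6):
    """Return the identities whose stored faceprints are close enough to match."""
    matches = []
    for identity, stored in known_faceprints.items():
        if euclidean_distance(new_faceprint, stored) <= threshold:
            matches.append(identity)
    return matches

# Hypothetical stored faceprints (already extracted by some recognition model).
known = {
    "alice": [0.12, 0.80, 0.33, 0.54],
    "bob":   [0.90, 0.10, 0.75, 0.20],
}

print(find_matches([0.11, 0.82, 0.30, 0.50], known))  # -> ["alice"]
```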

The federal government has released on a website the information about its use of biometric technologies that it wants the general public to know.

As far back as 2008, former President George W. Bush signed the National Security Presidential Directive (NSPD)-59 / Homeland Security Presidential Directive (HSPD)-24, "Biometrics for Identification and Screening to Enhance National Security." This NSPD explained the "framework to ensure Federal departments and agencies use compatible methods and procedures in the collection, storage, use, analysis, and sharing of biometric and associated biographic and contextual information of individuals in a lawful and appropriate manner, while respecting privacy and other legal rights under United States law."

IBM: Computers Will See, Hear, Taste, Smell And Touch In 5 Years


(Beforeitsnews) - IBM (NYSE: IBM) unveiled the seventh annual "IBM 5 in 5," a list of innovations that have the potential to change the way people work, live and interact during the next five years.

The IBM 5 in 5 is based on market and societal trends as well as emerging technologies from IBM’s R&D labs around the world that can make these transformations possible.

This year’s IBM 5 in 5 explores innovations that will be the underpinnings of the next era of computing, which IBM describes as the era of cognitive systems. This new generation of machines will learn, adapt, sense and begin to experience the world as it really is. This year’s predictions focus on one element of the new era, the ability of computers to mimic the human senses—in their own way, to see, smell, touch, taste and hear.

These sensing capabilities will help us become more aware and productive, and will help us think – but not think for us. Cognitive computing systems will help us see through complexity, keep up with the speed of information, make more informed decisions, improve our health and standard of living, enrich our lives and break down all kinds of barriers—including geographic distance, language, cost and inaccessibility.

“IBM scientists around the world are collaborating on advances that will help computers make sense of the world around them,” said Bernie Meyerson, IBM Fellow and VP of Innovation. “Just as the human brain relies on interacting with the world using multiple senses, by bringing combinations of these breakthroughs together, cognitive systems will bring even greater value and insights, helping us solve some of the most complicated challenges.”

[Photo: Physical Analytics Research Manager Hendrik Hamann examines an array of wireless sensors used to detect environmental conditions such as temperature, humidity, gases and chemicals at IBM Research headquarters in Yorktown Heights, NY. In five years, technology advancements could enable sensors to analyze odors or the molecules in a person's breath to help diagnose diseases. (Jon Simon/Feature Photo Service for IBM)]

Here are five predictions that will define the future:

Touch: You will be able to touch through your phone

Imagine using your smartphone to shop for your wedding dress and being able to feel the satin or silk of the gown, or the lace on the veil, all from the surface of the screen. Or to feel the beading and weave of a blanket made by a local artisan halfway around the world. In five years, industries such as retail will be transformed by the ability to "touch" a product through your mobile device. IBM scientists are developing applications for the retail, healthcare and other sectors using haptic, infrared and pressure-sensitive technologies to simulate touch, such as the texture and weave of a fabric, as a shopper brushes her finger over the image of the item on a device screen. Utilizing the vibration capabilities of the phone, every object will have a unique set of vibration patterns that represents the touch experience: short, fast patterns, or longer and stronger strings of vibrations. The vibration pattern will differentiate silk from linen or cotton, helping simulate the physical sensation of actually touching the material.
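As a rough illustration of the vibration-pattern idea described above, the sketch below maps a few fabric textures to made-up pulse patterns a phone's vibration motor could play back. The texture names, durations, and intensities are assumptions for demonstration, not IBM's technology.

```python
# Hypothetical mapping from a fabric texture to a vibration pattern, following the
# article's description: each pattern is a list of (duration_ms, intensity) pulses.
# All values are invented for illustration only.
TEXTURE_PATTERNS = {
    "silk":   [(20, 0.2)] * 8,             # short, fast, gentle pulses
    "linen":  [(60, 0.5), (40, 0.4)] * 3,  # medium, slightly irregular pulses
    "cotton": [(80, 0.7)] * 4,             # longer, stronger pulses
}

def vibration_pattern_for(texture: str):
    """Return the vibration pattern that simulates touching the given fabric."""
    return TEXTURE_PATTERNS.get(texture.lower(), [(50, 0.5)] * 4)  # generic fallback

if __name__ == "__main__":
    for fabric in ("silk", "linen", "cotton"):
        print(fabric, vibration_pattern_for(fabric))
```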

Current uses of haptic and graphic technology in the gaming industry take the end user into a simulated environment. The opportunity and challenge here is to make the technology so ubiquitous and interwoven into everyday experiences that it brings greater context to our lives, with technology in front of and around us, turning mobile phones into tools for natural and intuitive interaction with the world around us.

Sight: A pixel will be worth a thousand words

We take 500 billion photos a year[1]. 72 hours of video is uploaded to YouTube every minute[2]. The global medical diagnostic imaging market is expected to grow to $26.6 billion by 2016[3].

Computers today only understand pictures by the text we use to tag or title them; the majority of the information — the actual content of the image — is a mystery. In the next five years, systems will not only be able to look at and recognize the contents of images and visual data, they will turn the pixels into meaning, beginning to make sense of them much the way a human views and interprets a photograph. In the future, "brain-like" capabilities will let computers analyze features such as color, texture patterns or edge information and extract insights from visual media. This will have a profound impact on industries such as healthcare, retail and agriculture.
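A toy example of the kind of low-level features mentioned above (color and edge information): the sketch reduces an image to a small feature vector with NumPy. It is only meant to illustrate turning raw pixels into numbers a classifier could use; a real cognitive system would learn far richer representations.

```python
# Minimal sketch: compute average color and a rough edge-strength measure from an
# RGB image. These are stand-ins for the "color, texture patterns or edge
# information" features the article mentions, for illustration only.
import numpy as np

def simple_image_features(image: np.ndarray) -> np.ndarray:
    """image: HxWx3 array of RGB values in [0, 255]. Returns a 4-value feature vector."""
    img = image.astype(float)
    mean_color = img.mean(axis=(0, 1))        # average R, G, B
    gray = img.mean(axis=2)                   # simple grayscale
    gy, gx = np.gradient(gray)                # vertical / horizontal gradients
    edge_strength = np.hypot(gx, gy).mean()   # rough measure of edge content
    return np.concatenate([mean_color, [edge_strength]])

# Example: a synthetic 64x64 image with a bright square on a dark background.
demo = np.zeros((64, 64, 3))
demo[16:48, 16:48] = 200
print(simple_image_features(demo))
```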

Within five years, these capabilities will be put to work in healthcare by making sense out of massive volumes of medical information such as MRIs, CT scans, X-Rays and ultrasound images to capture information tailored to particular anatomy or pathologies. What is critical in these images can be subtle or invisible to the human eye and requires careful measurement. By being trained to discriminate what to look for in images — such as differentiating healthy from diseased tissue — and correlating that with patient records and scientific literature, systems that can “see” will help doctors detect medical problems with far greater speed and accuracy.

Hearing: Computers will hear what matters

Ever wish you could make sense of the sounds all around you and be able to understand what’s not being said?

Within five years, a distributed system of clever sensors will detect elements of sound such as sound pressure, vibrations and sound waves at different frequencies. It will interpret these inputs to predict when trees will fall in a forest or when a landslide is imminent. Such a system will "listen" to our surroundings and measure movements, or the stress in a material, to warn us if danger lies ahead. Raw sounds will be detected by sensors and, much as the human brain does, a system that receives this data will take into account other "modalities," such as visual or tactile information, and classify and interpret the sounds based on what it has learned. When new sounds are detected, the system will form conclusions based on previous knowledge and the ability to recognize patterns.
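To illustrate the classify-by-learned-patterns idea, here is a toy sketch that labels a new sound reading by finding the nearest previously learned frequency pattern. The frequency bands, readings, and labels are invented for demonstration and are not part of any IBM system.

```python
# Toy nearest-pattern matching: sensors report sound levels at a few frequency
# bands, and a new reading is labelled with the closest learned pattern.
def distance(a, b):
    """Euclidean distance between two equal-length readings."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

LEARNED_PATTERNS = {
    # label: hypothetical sound pressure levels (dB) in low / mid / high bands
    "falling tree":      [80.0, 60.0, 30.0],
    "landslide rumble":  [95.0, 40.0, 20.0],
    "background forest": [35.0, 30.0, 25.0],
}

def classify_sound(reading):
    """Return the learned label closest to the new sensor reading."""
    return min(LEARNED_PATTERNS, key=lambda label: distance(reading, LEARNED_PATTERNS[label]))

print(classify_sound([90.0, 45.0, 22.0]))  # -> "landslide rumble"
```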

For example, “baby talk” will be understood as a language, telling parents or doctors what infants are trying to communicate. Sounds can be a trigger for interpreting a baby’s behavior or needs. By being taught what baby sounds mean – whether fussing indicates a baby is hungry, hot, tired or in pain – a sophisticated speech recognition system would correlate sounds and babbles with other sensory or physiological information such as heart rate, pulse and temperature.

In the next five years, by learning about emotion and being able to sense mood, systems will pinpoint aspects of a conversation and analyze pitch, tone and hesitancy to help us have more productive dialogues that could improve customer call center interactions, or allow us to seamlessly interact with different cultures.

Today, IBM scientists are beginning to capture underwater noise levels in Galway Bay, Ireland, to understand the sounds and vibrations of wave energy conversion machines, and their impact on sea life, by using underwater sensors that capture sound waves and transmit them to a receiving system to be analyzed.

Taste: Digital taste buds will help you to eat smarter

What if we could make healthy foods taste delicious using a different kind of computing system that is built for creativity?

IBM researchers are developing a computing system that actually experiences flavor, to be used with chefs to create the tastiest and most novel recipes. It will break down ingredients to their molecular level and blend the chemistry of food compounds with the psychology behind what flavors and smells humans prefer. By comparing this with millions of recipes, the system will be able to create new flavor combinations that pair, for example, roasted chestnuts with other foods such as cooked beetroot, fresh caviar, and dry-cured ham. A system like this can also be used to help us eat healthier, creating novel flavor combinations that will make us crave a vegetable casserole instead of potato chips.

The computer will be able to use algorithms to determine the precise chemical structure of food and why people like certain tastes. These algorithms will examine how chemicals interact with each other, the molecular complexity of flavor compounds and their bonding structure, and use that information, together with models of perception, to predict the taste appeal of flavors.
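One simple way to read the description above is as a "shared flavor compound" model: ingredients that share more flavor compounds are scored as better pairings. The sketch below uses invented compound lists purely for illustration; it is not IBM's system or real food-chemistry data.

```python
# Hypothetical pairing score: count flavor compounds two ingredients share.
# Compound names and sets are placeholders for demonstration only.
FLAVOR_COMPOUNDS = {
    "roasted chestnut": {"furaneol", "pyrazine_a", "vanillin"},
    "cooked beetroot":  {"geosmin", "furaneol", "pyrazine_a"},
    "caviar":           {"trimethylamine", "furaneol"},
    "potato chips":     {"pyrazine_b", "maltol"},
}

def pairing_score(ingredient_a: str, ingredient_b: str) -> int:
    """Number of shared flavor compounds (higher = better pairing in this toy model)."""
    return len(FLAVOR_COMPOUNDS[ingredient_a] & FLAVOR_COMPOUNDS[ingredient_b])

base = "roasted chestnut"
ranked = sorted(
    (other for other in FLAVOR_COMPOUNDS if other != base),
    key=lambda other: pairing_score(base, other),
    reverse=True,
)
print(ranked)  # beetroot ranks first: it shares the most compounds with chestnut
```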

Not only will it make healthy foods more palatable – it will also surprise us with unusual pairings of foods actually designed to maximize our experience of taste and flavor. In the case of people with special dietary needs, such as individuals with diabetes, it would develop flavors and recipes that keep their blood sugar regulated while still satisfying their sweet tooth.

Smell: Computers will have a sense of smell

During the next five years, tiny sensors embedded in your computer or cell phone will detect if you're coming down with a cold or other illness. By analyzing odors, biomarkers and thousands of molecules in someone's breath, doctors will have help diagnosing and monitoring the onset of ailments such as liver and kidney disorders, asthma, diabetes and epilepsy by detecting which odors are normal and which are not.

Today IBM scientists are already sensing environmental conditions and gases to preserve works of art. This innovation is beginning to be applied to tackle clinical hygiene, one of the biggest challenges in healthcare today. For example, antibiotic-resistant bacteria such as Methicillin-resistant Staphylococcus aureus (MRSA), which in 2005 was associated with almost 19,000 hospital stay-related deaths in the United States, is commonly found on the skin and can be easily transmitted wherever people are in close contact. One way of fighting MRSA exposure in healthcare institutions is by ensuring medical staff follow clinical hygiene guidelines. In the next five years, IBM technology will "smell" surfaces for disinfectants to determine whether rooms have been sanitized. Using novel wireless "mesh" networks, sensors will gather and measure data on various chemicals, continuously learning and adapting to new smells over time.
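As a rough sketch of the "smelling for disinfectant" idea, the snippet below compares chemical sensor readings from each room against a baseline expected shortly after proper cleaning. The chemical names, units, and thresholds are assumptions for illustration, not real clinical values.

```python
# Toy hygiene check: a room counts as sanitized if every expected disinfectant
# chemical is present at or above a baseline concentration. All values invented.
SANITIZED_BASELINE = {
    # chemical: minimum concentration (ppm) expected shortly after cleaning
    "isopropanol": 5.0,
    "hypochlorite": 2.0,
}

def room_is_sanitized(readings: dict) -> bool:
    """True if every expected disinfectant is at or above its baseline level."""
    return all(readings.get(chem, 0.0) >= minimum
               for chem, minimum in SANITIZED_BASELINE.items())

rooms = {
    "ICU-3":  {"isopropanol": 7.2, "hypochlorite": 2.5},
    "Ward-B": {"isopropanol": 0.4, "hypochlorite": 0.1},  # likely missed cleaning
}

for room, readings in rooms.items():
    status = "sanitized" if room_is_sanitized(readings) else "needs attention"
    print(room, status)
```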

Due to advances in sensor and communication technologies, in combination with deep learning systems, sensors can measure data in places never thought possible. For example, computer systems can be used in agriculture to "smell" or analyze the soil condition of crops. In urban environments, this technology will be used to monitor issues with refuse, sanitation and pollution – helping city agencies spot potential problems before they get out of hand.

U.S. Spies See Superhumans, Instant Cities by 2030


 

(Wired) - 3-D printed organs. Brain chips providing superhuman abilities. Megacities, built from scratch. The U.S. intelligence community is taking a look at the world of 2030. And it is very, very sci-fi.

Every four or five years, the futurists at the National Intelligence Council take a stab at forecasting what the globe will be like two decades hence; the idea is to give some long-term, strategic guidance to the folks shaping America’s security and economic policies. (Full disclosure: I was once brought in as a consultant to evaluate one of the NIC’s interim reports.) On Monday, the Council released its newest findings, Global Trends 2030. Many of the prognostications are rather unsurprising: rising tides, a bigger data cloud, an aging population, and, of course, more drones. But tucked into the predictable predictions are some rather eye-opening assertions. Especially in the medical realm.

We’ve seen experimental prosthetics in recent years that are connected to the human neurological system. The Council says the link between man and machine is about to get way more cyborg-like. “As replacement limb technology advances, people may choose to enhance their physical selves as they do with cosmetic surgery today. Future retinal eye implants could enable night vision, and neuro-enhancements could provide superior memory recall or speed of thought,” the Council writes. “Brain-machine interfaces could provide ‘superhuman’ abilities, enhancing strength and speed, as well as providing functions not previously available.”

And if the machines can’t be embedded into the person, the person may embed himself in the robot. “Augmented reality systems can provide enhanced experiences of real-world situations. Combined with advances in robotics, avatars could provide feedback in the form of sensors providing touch and smell as well as aural and visual information to the operator,” the report adds. There’s no word about whether you’ll have to paint yourself blue to enjoy the benefits of this tech.

The Council’s futurists are less definitive about 3-D printing and other direct digital manufacturing processes. On one hand, they say that any changes brought about by these new ways of making things could be “relatively slow.” On the other, they rip a page out of Wired, comparing the emerging era of digital manufacturing to the “early days of personal computers and the internet.” Today, the machines may only be able to make simple objects. Tomorrow, that won’t be the case. And that shift will change not only manufacturing and electronics — but people, as well.

“By 2030, manufacturers may be able to combine some electrical components (such as electrical circuits, antennae, batteries, and memory) with structural components in one build, but integration with printed electronics manufacturing equipment will be necessary,” the Council writes. “Though printing of arteries or simple organs may be possible by 2030, bioprinting of complex organs will require significant technological breakthroughs.”

But not all of these biological developments will be good things, the Council notes. "Advances in synthetic biology also have the potential to be a double-edged sword and become a source of lethal weaponry accessible to do-it-yourself biologists or biohackers," according to the report. Biology is becoming more and more like the open source software community, with an "open-access repository of standardized and interchangeable building block or 'biobrick' biological parts that researchers can use" – for good or for bad. "This will be particularly true as technology becomes more accessible on a global basis and, as a result, makes it harder to track, regulate, or mitigate bioterror if not 'bioerror.'"

Some of the Council’s predictions may give a few of Washington’s more sensitive politicians a rash. Although the Council does allow for the possibility of a “decisive re-assertion of U.S. power,” the futurists seem pretty well convinced that America is, relatively speaking, on the decline and that China is on the ascent. In fact, the Council believes nation-states in general are losing their oomph, in favor of “megacities [that will] flourish and take the lead in confronting global challenges.” And we’re not necessarily talking New York or Beijing here; some of these megacities could be somehow “built from scratch.”

Unlike some Congressmen, the Council takes climate change as a given. Unlike many in the environmental movement, the futurists believe that the discovery of cheap ways to harvest natural gas is going to relegate renewables to bit-player status in the energy game.

But most of the findings are apolitical bets on which tech will leap out the furthest over the next 17 years. People can check back in 2030 to see if the intelligence agencies are right — that is, if you still call the biomodded cyborgs roaming the planet people.