The Intercept: Technology
https://theintercept.com/technology/

Facebook Approved an Israeli Ad Calling for Assassination of Pro-Palestine Activist
https://theintercept.com/2023/11/21/facebook-ad-israel-palestine-violence/ (Nov. 21, 2023)
After the ad was discovered, digital rights advocates ran an experiment testing the limits of Facebook’s machine-learning moderation.

A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

The submitted ads, in both Hebrew and Arabic, included flagrant violations of the policies of Facebook and its parent company, Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and calling to wipe out “Gazan women and children and the elderly.” Other posts contained dehumanizing language, like descriptions of kids from Gaza as “future terrorists” and a reference to “Arab pigs.”

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people.”

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi [sic], the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Force and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through Facebook’s automated, machine-learning-based filtering process, which allows its global advertising business to operate at a rapid clip.

“Our ad review system is designed to review all ads before they go live,” according to a Facebook ad policy overview. As Meta’s human-based moderation, which historically relied almost entirely on outsourced contractor labor, has drawn greater scrutiny and criticism, the company has come to lean more heavily on automated text-scanning software to enforce its speech rules and censorship policies.

While these technologies allow the company to skirt the labor issues associated with human moderators, they also obscure how moderation decisions are made behind secret algorithms.
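Meta does not publish the internals of this pipeline, but the general shape of an automated review gate can be sketched in a few lines of Python. The sketch below is a hypothetical illustration, not Meta’s system: the thresholds, flagged terms, and scoring heuristic are all invented for the example.

    # Hypothetical sketch of an automated ad-review gate. The thresholds,
    # flagged terms, and scoring heuristic are invented for illustration;
    # a production system would use a trained classifier, not keywords.
    APPROVE_BELOW = 0.20  # assumed: low violation scores are auto-approved
    REJECT_ABOVE = 0.90   # assumed: high violation scores are auto-rejected

    def score_ad(text: str) -> float:
        """Stand-in for a model estimating the probability of a policy violation."""
        flagged_terms = ("assassinate", "wipe out", "future terrorists")
        hits = sum(term in text.lower() for term in flagged_terms)
        return min(1.0, 0.35 * hits)  # toy heuristic, not a real model

    def review_ad(text: str) -> str:
        """Route an ad to approval, rejection, or human review."""
        p = score_ad(text)
        if p >= REJECT_ABOVE:
            return "rejected"
        if p <= APPROVE_BELOW:
            return "approved"  # published with no human in the loop
        return "queued_for_human_review"

In a gate like this, any violating ad the model fails to score highly is approved without a human ever seeing it, which is one way unambiguous incitement can slip through.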

Last year, an external audit commissioned by Meta found that while the company was routinely using algorithmic censorship to delete Arabic posts, the company had no equivalent algorithm in place to detect “Hebrew hostile speech” like racist rhetoric and violent incitement. Following the audit, Meta claimed it had “launched a Hebrew ‘hostile speech’ classifier to help us proactively detect more violating Hebrew content.” Content, that is, like an ad espousing murder.

Incitement to Violence on Facebook

Amid the Israeli war on Palestinians in Gaza, Nashif was troubled enough by the explicit call in the ad to murder Larudee that he worried similar paid posts might contribute to violence against Palestinians.

Large-scale incitement to violence jumping from social media into the real world is not a mere hypothetical: In 2018, United Nations investigators found violently inflammatory Facebook posts played a “determining role” in Myanmar’s Rohingya genocide. (Last year, another group ran test ads inciting violence against the Rohingya, a project along the same lines as 7amleh’s experiment; in that case, all the ads were also approved.)

The quick removal of the Larudee post didn’t explain how the ad was approved in the first place. In light of assurances from Facebook that safeguards were in place, Nashif and 7amleh, which formally partners with Meta on censorship and free expression issues, were puzzled.

“Meta has a track record of not doing enough to protect marginalized communities.”

Curious if the approval was a fluke, 7amleh created and submitted 19 ads, in both Hebrew and Arabic, with text that deliberately and flagrantly violated company rules. The ads were designed to test Meta’s approval process and to see whether the company’s ability to automatically screen violent and racist incitement had improved, even when presented with unambiguous examples.

“We knew from the example of what happened to the Rohingya in Myanmar that Meta has a track record of not doing enough to protect marginalized communities,” Nashif said, “and that their ads manager system was particularly vulnerable.”

Meta’s appears to have failed 7amleh’s test.

The company’s Community Standards rulebook — which ads are supposed to comply with to be approved — prohibit not just text advocating for violence, but also any dehumanizing statements against people based on their race, ethnicity, religion, or nationality. Despite this, confirmation emails shared with The Intercept show Facebook approved every single ad.

Though 7amleh told The Intercept the organization had no intention to actually run these ads and was going to pull them before they were scheduled to appear, it believes their approval demonstrates that the social platform remains fundamentally myopic about non-English speech — languages used by the great majority of its more than 4 billion users. (Meta retroactively rejected 7amleh’s Hebrew ads after The Intercept brought them to the company’s attention, but the Arabic versions remain approved within Facebook’s ad system.)

Facebook spokesperson Erin McPike confirmed the ads had been approved accidentally. “Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes,” she said. “That’s why ads can be reviewed multiple times, including once they go live.”

Just days after its own experimental ads were approved, 7amleh discovered an Arabic ad run by a group calling itself “Migrate Now,” which called on “Arabs in Judea and Samaria” — the name Israelis, particularly settlers, use to refer to the occupied Palestinian West Bank — to relocate to Jordan.

According to Facebook documentation, automated, software-based screening is the “primary method” used to approve or deny ads. But it’s unclear if the “hostile speech” algorithms used to detect violent or racist posts are also used in the ad approval process. In its official response to last year’s audit, Facebook said its new Hebrew-language classifier would “significantly improve” its ability to handle “major spikes in violating content,” such as around flare-ups of conflict between Israel and Palestine. Based on 7amleh’s experiment, however, this classifier either doesn’t work very well or is for some reason not being used to screen advertisements. (McPike did not answer when asked if the approval of 7amleh’s ads reflected an underlying issue with the hostile speech classifier.)

Either way, according to Nashif, the fact that these ads were approved points to an overall problem: Meta claims it can effectively use machine learning to deter explicit incitement to violence, while it clearly cannot.

“We know that Meta’s Hebrew classifiers are not operating effectively, and we have not seen the company respond to almost any of our concerns,” Nashif said in his statement. “Due to this lack of action, we feel that Meta may hold at least partial responsibility for some of the harm and violence Palestinians are suffering on the ground.”

The approval of the Arabic versions of the ads comes as a particular surprise following a recent report by the Wall Street Journal that Meta had lowered the level of certainty its algorithmic censorship system needed to remove Arabic posts — from 80 percent confidence that the post broke the rules, to just 25 percent. In other words, Meta was less sure that the Arabic posts it was suppressing or deleting actually contained policy violations.
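A toy example makes the effect of that change concrete. In the sketch below, the classifier scores are invented; only the 80 percent and 25 percent thresholds come from the Journal’s reporting.

    # Invented violation scores for four hypothetical posts; only the
    # 0.80 and 0.25 thresholds come from the reporting described above.
    posts = {
        "post_a": 0.92,  # model is confident this violates policy
        "post_b": 0.61,
        "post_c": 0.34,
        "post_d": 0.12,  # model sees this as almost certainly benign
    }

    def removed(threshold):
        """Return the posts whose violation score meets or exceeds the threshold."""
        return [name for name, score in posts.items() if score >= threshold]

    print(removed(0.80))  # ['post_a']: high-confidence violations only
    print(removed(0.25))  # ['post_a', 'post_b', 'post_c']: uncertain posts too

At 0.80, only posts the system is nearly sure about are taken down; at 0.25, a post is removed even when the model considers it more likely benign than violating.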

Nashif said, “There have been sustained actions resulting in the silencing of Palestinian voices.”

Online Atrocity Database Exposed Thousands of Vulnerable People in Congo
https://theintercept.com/2023/11/17/congo-hrw-nyu-security-data/ (Nov. 17, 2023)
NYU and Human Rights Watch accidentally doxxed up to 8,000 victims, journalists, and activists due to a basic security error.

A joint project of Human Rights Watch and New York University to document human rights abuses in the Democratic Republic of the Congo has been taken offline after exposing the identities of thousands of vulnerable people, including survivors of mass killings and sexual assaults.

The Kivu Security Tracker is a “data-centric crisis map” of atrocities in eastern Congo that has been used by policymakers, academics, journalists, and activists to “better understand trends, causes of insecurity and serious violations of international human rights and humanitarian law,” according to the deactivated site. This includes massacres, murders, rapes, and violence against activists and medical personnel by state security forces and armed groups, the site said.

But the KST’s lax security protocols appear to have accidentally doxxed up to 8,000 people, including activists, sexual assault survivors, United Nations staff, Congolese government officials, local journalists, and victims of attacks, an Intercept analysis found. Hundreds of documents — including 165 spreadsheets — that were on a public server contained the names, locations, phone numbers, and organizational affiliations of those sources, as well as sensitive information about some 17,000 “security incidents,” such as mass killings, torture, and attacks on peaceful protesters.

The data was available via KST’s main website, and anyone with an internet connection could access it. The information appears to have been publicly available on the internet for more than four years.

Experts told The Intercept that a leak of this magnitude would constitute one of the most egregious instances ever of the online exposure of personal data from a vulnerable, conflict-affected population.

“This was a serious violation of research ethics and privacy by KST and its sponsoring organizations,” said Daniel Fahey, former coordinator of the United Nations Security Council’s Group of Experts on the Democratic Republic of the Congo, after he was told about the error. “KST’s failure to secure its data poses serious risks to every person and entity listed in the database. The database puts thousands of people and hundreds of organizations at risk of retaliatory violence, harassment, and reputational damage.”

“If you’re trying to protect people but you’re doing more harm than good, then you shouldn’t be doing the work in the first place.”

“If you’re an NGO working in conflict zones with high-risk individuals and you’re not managing their data right, you’re putting the very people that you are trying to protect at risk of death,” said Adrien Ogée, the chief operations officer at the CyberPeace Institute, which provides cybersecurity assistance and threat detection and analysis to humanitarian nongovernmental organizations. Speaking generally about lax security protocols, Ogée added, “If you’re trying to protect people but you’re doing more harm than good, then you shouldn’t be doing the work in the first place.”

The dangers extend to what the database refers to as Congolese “focal points” who conducted field interviews and gathered information for the KST. “The level of risk that local KST staff have been exposed to is hard to describe,” said a researcher close to the project who asked not to be identified because they feared professional reprisal. “It’s unbelievable that a serious human rights or conflict research organization could ever throw their staff in the lion’s den just like that. Militias wanting to take revenge, governments of repressive neighboring states, ill-tempered security services — the list of the dangers that this exposes them to is very long.”

The spreadsheets, along with the main KST website, were taken offline on October 28, after investigative journalist Robert Flummerfelt, one of the authors of this story, discovered the leak and informed Human Rights Watch and New York University’s Center on International Cooperation. HRW subsequently assembled what one source close to the project described as a “crisis team.”

Last week, HRW and NYU’s Congo Research Group, the entity within the Center on International Cooperation that maintains the KST website, issued a statement that announced the takedown and referred in vague terms to “a security vulnerability in its database,” adding, “Our organizations are reviewing the security and privacy of our data and website, including how we gather and store information and our research methodology.” The statement made no mention of publicly exposing the identities of sources who provided information on a confidential basis.

In an internal statement sent to HRW employees on November 9 and obtained by The Intercept, Sari Bashi, the organization’s program director, informed staff of “a security vulnerability with respect to the KST database which contains personal data, such as the names and phone numbers of sources who provided information to KST researchers and some details of the incidents they reported.” She added that HRW had “convened a team to manage this incident,” including senior leadership, security and communications staff, and the organization’s general counsel.

The internal statement also noted that one of HRW’s partners in managing the KST had “hired a third-party cyber security company to investigate the extent of the exposure of the confidential data and to help us to better understand the potential implications.” 

“We are still discussing with our partner organizations the steps needed to fulfill our responsibilities to KST sources in the DRC whose personal information was compromised,” reads the statement, noting that HRW is working with staff in Congo to “understand, prepare for, and respond to any increase in security risks that may arise from this situation.” HRW directed staffers not to post on social media about the leak or publicly share any press stories about it due to “the very sensitive nature of the data and the possible security risks.”

The internal statement also said that “neither HRW, our partners, nor KST researchers in the DRC have received any information to suggest that anybody has been threatened or harmed as a result of this database vulnerability.”

The Intercept has so far found no instances of individuals being harmed as a result of the security failures, but it remains unknown whether any of the thousands of people exposed have been affected.

“We deeply regret the security vulnerability in the KST database and share concerns about the wider security implications,” Human Rights Watch’s chief communications officer, Mei Fong, told The Intercept. Fong said in an email that the organization is “treating the data vulnerability in the KST database, and concerns around research methodology on the KST project, with the utmost seriousness.” Fong added, “Human Rights Watch did not set up or manage the KST website. We are working with our partners to support an investigation to establish how many people — other than the limited number we are so far aware of — may have accessed the KST data, what risks this may pose to others, and next steps. The security and confidentiality of those affected is our primary concern.” 

A peacekeeper of the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo looks on in Sake, Democratic Republic of the Congo, on Nov. 6, 2023.
Photo: Glody Murhabazi/AFP via Getty Images

Bridgeway Foundation

Two sources associated with the KST told The Intercept that, internally, KST staff are blaming the security lapse on the Bridgeway Foundation, one of the donors that helped conceive and fund the KST and has publicly taken credit for being a “founding partner” of the project.

Bridgeway is the philanthropic wing of a Texas-based investment firm. Best known for its support for the “Kony 2012” campaign, the organization was involved in what a U.S. Army Special Operations Command historian called “intense activism and lobbying” that paved the way for U.S. military intervention in Central Africa. Those efforts by Bridgeway and others helped facilitate a failed $780 million U.S. military effort to hunt down Joseph Kony, the leader of a Ugandan armed group known as the Lord’s Resistance Army, or LRA.

More recently, the foundation was accused of partnering with Uganda’s security forces in an effort to drag the United States into “another dangerous quagmire” in Congo. “Why,” asked Helen Epstein in a 2021 investigation for The Nation, “is Bridgeway, a foundation that claims to be working to end crimes against humanity, involved with one of Africa’s most ruthless security agencies?”

One Congo expert said that Bridgeway has played the role of a “humanitarian privateer” for the U.S. government and employed tactics such as “private intelligence and military training.” As part of Bridgeway’s efforts to track down Kony, it helped create the LRA Crisis Tracker, a platform nearly identical to the KST that tracks attacks by the Ugandan militia. After taking an interest in armed groups in Congo, Bridgeway quietly pushed for the creation of a similar platform for Congo, partnering with NYU and HRW to launch the KST in 2017.

While NYU’s Congo Research Group oversaw the “collection and triangulation of data” for the KST, and HRW provided training and other support to KST researchers, the Bridgeway Foundation offered “technical and financial support,” according to a 2022 report by top foundation personnel, including Tara Candland, Bridgeway’s vice president of research and analysis, and Laren Poole, its chief operations officer. In a report published earlier this year, Poole and others wrote that the foundation had “no role in the incident tracking process.” 

Several sources with ties to KST staff told The Intercept that Bridgeway was responsible for contracting the companies that designed the KST website and data collection system, including a tech company called Semantic AI. A case study on Semantic’s website describes a partnership with Bridgeway to analyze violence in Congo, referring to the product as “intelligence software” that “allows Bridgeway and their partners to take action to protect the region.” The case study adds that the KST platform helps Bridgeway “track, analyze, and counter” armed groups in Congo.

Poole said that the KST had hired a cybersecurity firm to conduct a “comprehensive security assessment of the servers and hosting environment with the goal of better understanding the nature and extent of the exposure.” But it appears that answers to the most basic questions are not yet known. “We cannot currently determine when the security vulnerability occurred or how long the data was exposed,” Poole told The Intercept via email. “As recently as last year, an audit of the site was conducted that included assessing security threats, and this vulnerability was not identified.”

Like HRW, Bridgeway disclaimed direct responsibility for management of the KST’s website, attributing that work to two web development firms, Fifty and Fifty, which built and managed the KST from its inception until 2022, and Boldcode. That year, Poole said, “Boldcode was contracted to assume management and security responsibilities of the site.” But Poole said that “KST project leadership has had oversight over firms contracted for website development and maintenance since its inception.”

The Intercept did not receive a response to multiple messages sent to Fifty and Fifty. Boldcode did not immediately respond to a request for comment.

Warnings of Harm

Experts have been sounding the alarm about the dangers of humanitarian data leaks for years. “Critical incidents – such as breaches of platforms and networks, weaponisation of humanitarian data to aid attacks on vulnerable populations, and exploitation of humanitarian systems against responders and beneficiaries – may already be occurring and causing grievous harm without public accountability,” wrote a trio of researchers from the Signal Program on Human Security and Technology at the Harvard Humanitarian Initiative in 2017, the same year the KST was launched.

A 2022 analysis by the CyberPeace Institute identified 157 “cyber incidents” that affected the not-for-profit sector between July 2020 and June 2022. In at least 60 cases, personal data was exposed, and in at least 28, it was taken. “This type of sensitive personal information can be monetized or simply used to cause further harm,” the report says. “Such exploitation has a strong potential for re-victimization of individuals as well as the organizations themselves.”

In 2021, HRW itself criticized the United Nations Refugee Agency for having “improperly collected and shared personal information from ethnic Rohingya refugees.” In some cases, according to HRW, the agency had “failed to obtain refugees’ informed consent to share their data,” exposing refugees to further risk.

Earlier this year, HRW criticized the Egyptian government and a private British company, Academic Assessment, for leaving the personal information of children unprotected on the open web for at least eight months. “The exposure violates children’s privacy, exposes them to the risk of serious harm, and appears to violate the data protection laws in both Egypt and the United Kingdom,” reads the April report.

In that case, 72,000 records — including children’s names, birth dates, phone numbers, and photo identification — were left vulnerable. “By carelessly exposing children’s private information, the Egyptian government and Academic Assessment put children at risk of serious harm,” said Hye Jung Han, then a children’s rights and technology researcher and advocate at HRW.

The threats posed by the release of the KST information are far greater than the Egyptian breach. For decades, Congo has been beset by armed violence, from wars involving the neighboring nations of Rwanda and Uganda to attacks by machete-wielding militias. More recently, in the country’s far east, millions have been killed, raped, or driven from their homes by more than 120 armed groups.

Almost all the individuals in the database, as well as their interviewers, appear to have confidentially provided sensitive information about armed groups, militias, or state security forces, all of which are implicated in grave human rights violations. Given the lawlessness and insecurity of eastern Congo, the most vulnerable individuals — members of local civil society organizations, activists, and residents living in conflict areas — are at risk of arrest, kidnapping, sexual assault, or death at the hands of these groups.

“For an organization working with people in a conflict zone, this is the most important type of data that they have, so it should be critically protected,” said CyberPeace Institute’s Ogée, who previously worked at European cybersecurity agencies and the World Economic Forum.

The KST’s sensitive files were hosted on an open “bucket”: a cloud storage server accessible to the open internet. Because the project posted monthly public reports on the same server that contained the sensitive information, the server’s URL was often produced in search engine results related to the project.

“The primary methodology in the humanitarian sector is ‘do no harm.’ If you’re not able to come into a conflict zone and do your work without creating any more harm, then you shouldn’t be doing it,” Ogée said. “The day that database is created and uploaded on that bucket, an NGO that is security-minded and thinks about ‘do no harm’ should have every process in place to make sure that this database never gets accessed from the outside.”

The leak exposed the identities of 6,000 to 8,000 individuals, according to The Intercept’s analysis. The dataset references thousands of sources labeled “civil society” and “inhabitants” of villages where violent incidents occurred, as well as hundreds of “youth” and “human rights defenders.” Congolese health professionals and teachers are cited hundreds of times, and there are multiple references to students, lawyers, psychologists, “women leaders,” magistrates, and Congolese civil society groups, including prominent activist organizations regularly targeted by the government.

“It’s really shocking,” said a humanitarian researcher with long experience conducting interviews with vulnerable people in African conflict zones. “The most important thing to me is the security of my sources. I would rather not document a massacre than endanger my sources. So to leave their information in the open is incredibly negligent. Someone needs to take responsibility.”

Residents of Bambo in Rutshuru territory in the Democratic Republic of the Congo flee rebel attacks on Oct. 26, 2023.
Photo: Alexis Huguet/AFP via Getty Images

Breach of Ethics

Since being contacted by The Intercept, the organizations involved have sought to distance themselves from the project’s lax security protocols. 

In its internal statement to staff, HRW emphasized that it was not responsible for collecting information or supervising activities for KST, but was “involved in designing the research methodology, provided training, guidance and logistical support to KST researchers, and spot-checked some information.”

“HRW does not manage the KST website and did not set up, manage or maintain the database,” the internal statement said.

The Intercept spoke with multiple people exposed in the data leak who said they did not consent to any information being stored in a database. This was confirmed by four sources who worked closely with the KST, who said that gaining informed consent from people who were interviewed, including advising them that they were being interviewed for the KST, was not a part of the research methodology.

Sources close to the KST noted that its researchers didn’t identify who they were working for. The failure to obtain consent to collect personal information was likely an institutional oversight, they said.

“Obtaining informed consent is an undisputed core principle of research ethics,” the researcher who collaborated with the KST told The Intercept. “Not telling people who you work for and what happens to the information you provide to them amounts to lying. And that’s what has happened here at an unimaginable scale.”

In an email to NYU’s Center on International Cooperation and their Human Research Protections Program obtained by The Intercept, Fahey, the former coordinator of the Group of Experts on the Democratic Republic of the Congo, charged that KST staff “apparently failed to disclose that they were working for KST when soliciting information and did not tell sources how their information would be cataloged or used.”

In response, Sarah Cliffe, the executive director of NYU’s Center on International Cooperation, did not acknowledge Fahey’s concerns about informed consent, but noted that the institution takes “very seriously” concerns about the security of sources and KST staff exposed in the leak, according to an email seen by The Intercept. “We can assure you that we are taking immediate steps to investigate this and decide on the best course of action,” Cliffe wrote on November 1. 

Fahey told The Intercept that NYU’s Human Research Protections Program did not respond to his questions about KST’s compliance with accepted academic standards and securing informed consent from Congolese informants. That NYU office includes the university’s institutional review board, or IRB, the body of faculty and staff that reviews research protocols to ensure the protection of human subjects and compliance with state and federal regulations as well as university policies.

NYU spokesperson John Beckman confirmed that while the KST’s researchers received training on security, research methodology, and research ethics, “including the importance of informed consent,” some of the people interviewed “were not informed that their personally identifiable information would be recorded in the database and were unaware that the information was to be used for the KST.” 

Beckman added, “NYU is convening an investigative panel to review these human subject-related issues.”

Beckman also stated that Congolese “focal points” tended to forgo obtaining informed consent in situations where identifying themselves might have compromised their own security. “Nevertheless, this raises troubling issues,” Beckman said, noting that all the partners involved in the KST “will be working together to review what happened, to identify what needs to be corrected going forward, and to determine how best to safeguard those involved in collecting and providing information about the incidents the KST is meant to track.”

Fong, of HRW, also acknowledged failures to provide informed consent in all instances. “We are aware that, while the KST researchers appropriately identified themselves as working for Congolese civil society organizations, some KST researchers did not in all cases identify themselves as working for KST, for security reasons,” she told The Intercept. “We are reviewing the research protocols and their implementation.”

“The partners have been working hard to try to address what happened and mitigate it,” Beckman told The Intercept, specifying that all involved were working to determine the safest method to inform those exposed in the leak.

Both NYU and HRW named their Congolese partner organization as being involved in some of the original errors and the institutional response. 

The fallout from the exposure of the data may extend far beyond the breach of academic or NGO protocols. “Given the lack of security on KST’s website, it’s possible that intelligence agencies in Rwanda, Uganda, Burundi, DRC, and elsewhere have been accessing and mining this data for years,” Fahey said. “It is also possible that Congolese armed groups and national security forces have monitored who said what to KST staff.”

LexisNexis Sold Powerful Spy Tools to U.S. Customs and Border Protection
https://theintercept.com/2023/11/16/lexisnexis-cbp-surveillance-border/ (Nov. 16, 2023)
The data brokerage giant sold face recognition, phone tracking, and other surveillance technology to the border agency, government documents show.

The popular data broker LexisNexis began selling face recognition services and personal location data to U.S. Customs and Border Protection late last year, according to contract documents obtained through a Freedom of Information Act request.

According to the documents, obtained by the advocacy group Just Futures Law and shared with The Intercept, LexisNexis Risk Solutions began selling surveillance tools to the border enforcement agency in December 2022. The $15.9 million contract includes a broad menu of powerful tools for locating individuals throughout the United States using a vast array of personal data, much of it obtained and used without judicial oversight.

“This contract is mass surveillance in hyperdrive.”

Through LexisNexis, CBP investigators gained a convenient place to centralize, analyze, and search various databases containing enormous volumes of intimate personal information, both public and proprietary.

“This contract is mass surveillance in hyperdrive,” Julie Mao, an attorney and co-founder of Just Futures Law, told The Intercept. “It’s frightening that a rogue agency such as CBP has access to so many powerful technologies at the click of the button. Unfortunately, this is what LexisNexis appears now to be selling to thousands of police forces across the country. It’s now become a one-stop shop for accessing a range of invasive surveillance tools.”

A variety of CBP offices would make use of the surveillance tools, according to the documents. Among them is the U.S. Border Patrol, which would use LexisNexis to “help examine individuals and entities to determine their admissibility to the U.S. and their proclivity to violate U.S. laws and regulations.”

Among other tools, the contract shows LexisNexis is providing CBP with social media surveillance, access to jail booking data, face recognition and “geolocation analysis & geographic mapping” of cellphones. All this data can be queried in “large volume online batching,” allowing CBP investigators to target broad groups of people and discern “connections among individuals, incidents, activities, and locations,” handily visualized through Google Maps.

CBP declined to comment for this story, and LexisNexis did not respond to an inquiry. Despite the explicit reference to providing “LexisNexis Facial Recognition” in the contract, a fact sheet published by the company online says, “LexisNexis Risk Solutions does not provide the Department of Homeland Security” — CBP’s parent agency — “or US Immigration and Customs Enforcement with license plate images or facial recognition capabilities.”

The contract includes a variety of means for CBP to exploit the cellphones of those it targets. Accurint, a police and counterterror surveillance tool LexisNexis acquired in 2004, allows the agency to analyze real-time phone call records and phone geolocation through its “TraX” software.

While it’s unclear how exactly TraX pinpoints its targets, LexisNexis marketing materials cite “cellular providers live pings for geolocation tracking.” These materials also note that TraX incorporates both “call detail records obtained through legal process (i.e. search warrant or court order) and third-party device geolocation information.” A 2023 LexisNexis promotional brochure says, “The LexisNexis Risk Solutions Geolocation Investigative Team offers geolocation analysis and investigative case assistance to law enforcement and public safety customers.”

Any CBP use of geolocational data is controversial, given the agency’s recent history. Prior reporting found that, rather than request phone location data through a search warrant, CBP simply purchased such data from unregulated brokers — a practice that critics say allows the government to sidestep Fourth Amendment protections against police searches.

According to a September report by 404 Media, CBP recently told Sen. Ron Wyden, D-Ore., it “will not be utilizing Commercial Telemetry Data (CTD) after the conclusion of FY23 (September 30, 2023),” using a technical term for such commercially purchased location information.

The agency, however, also told Wyden that it could renew its use of commercial location data if there were “a critical mission need” in the future. It’s unclear if this contract provided commercial location data to CBP, or if it was affected by the agency’s commitment to Wyden. (LexisNexis did not respond to a question about whether it provides or provided the type of phone location data that CBP had sworn off.)

The contract also shows how LexisNexis operates as a reseller for surveillance tools created by other vendors. Its social media surveillance is “powered by” Babel X, a controversial firm that CBP and the FBI have previously used.

According to a May 2023 report by Motherboard, Babel X allows users to input one piece of information about a surveillance target, like a Social Security number, and receive large amounts of collated information back. The returned data can include “social media posts, linked IP address, employment history, and unique advertising identifiers associated with their mobile phone. The monitoring can apply to U.S. persons, including citizens and permanent residents, as well as refugees and asylum seekers.”

While LexisNexis is known to provide similar data services to U.S. Immigration and Customs Enforcement, another division of the Department of Homeland Security, details of its surveillance work with CBP were not previously known. Though both agencies enforce immigration law, CBP typically focuses on enforcement along the border, while ICE detains and deports migrants inland.

In recent years, CBP has drawn harsh criticism from civil libertarians and human rights advocates for its activities both at and far from the U.S.-Mexico border. In 2020, CBP was found to have flown a Predator surveillance drone over Minneapolis protests after the murder of George Floyd; two months later, CBP agents in unmarked vehicles seized racial justice protesters off the streets of Portland, Oregon — an act the American Civil Liberties Union condemned as a “blatant demonstration of unconstitutional authoritarianism.”

Just Futures Law is currently suing LexisNexis over claims it illegally obtains and sells personal data.

Google Activists Circulated Internal Petition on Israel Ties. Only the Muslim Got a Call from HR.
https://theintercept.com/2023/11/15/google-israel-gaza-nimbus-protest/ (Nov. 15, 2023)
Employees are internally protesting Google’s Project Nimbus, which they fear is being used by Israel to violate Palestinians’ human rights.

A Google employee protesting the tech giant’s business with the Israeli government was questioned by Google’s human resources department over allegations that he endorsed terrorism, The Intercept has learned. The employee said he was the only Muslim and Middle Easterner who circulated the letter and also the only one who was confronted by HR about it.

The employee was objecting to Project Nimbus, Google’s controversial $1.2 billion contract with the Israeli government and its military to provide state-of-the-art cloud computing and machine learning tools.

Since its announcement two years ago, Project Nimbus has drawn widespread criticism both inside and outside Google, spurring employee-led protests and warnings from human rights groups and surveillance experts that it could bolster state repression of Palestinians.

Mohammad Khatami, a Google software engineer, sent an email to two internal listservs on October 18 saying Project Nimbus was implicated in human rights abuses against Palestinians — abuses that, the letter argued, fit a 75-year pattern culminating in the October 7 Hamas massacre of some 1,200 Israelis, mostly civilians. The letter, distributed internally by anti-Nimbus Google workers through company email lists, went on to say that Google could become “complicit in what history will remember as a genocide.”

“Strangely enough, I was the only one of us who was sent to HR over people saying I was supporting terrorism or justifying terrorism.”

Twelve days later, Google HR told Khatami they were scheduling a meeting with him, during which he says he was questioned about whether the letter was “justifying the terrorism on October 7th.”

In an interview, Khatami told The Intercept he was not only disturbed by what he considers an attempt by Google to stifle dissent on Nimbus, but also felt singled out because of his religion and ethnicity. The letter was drafted and internally circulated by a group of anti-Nimbus Google employees, but none of them other than Khatami were contacted by HR, according to Khatami and Josh Marxen, another anti-Nimbus organizer at Google who helped spread the letter. Though he declined to comment on the outcome of the HR meeting, Khatami said it left him shaken.

“It was very emotionally taxing,” Khatami said. “I was crying by the end of it.”

“I’m the only Muslim or Middle Eastern organizer who sent out that email,” he told The Intercept. “Strangely enough, I was the only one of us who was sent to HR over people saying I was supporting terrorism or justifying terrorism.”

The Intercept reviewed a virtually identical email sent by Marxen, also on October 18. Though there are a few small changes — Marxen’s email refers to “a seige [sic] upon all of Gaza” whereas Khatami’s cites “the complete destitution of Gaza” — both contain verbatim language connecting the October 7 attack to Israel’s past treatment of Palestinians.

Google spokesperson Courtenay Mencini told The Intercept, “We follow up on every concern raised, and in this case, dozens of employees reported this individual’s email – not the sharing of the petition itself – for including language that did not follow our workplace policies.” Mencini declined to say which workplace policies Khatami’s email allegedly violated, whether other organizers had gotten HR calls, or if any other company personnel had been approached by Employee Relations for comments made about the war.

The incident comes just one year after former Google employee Ariel Koren said the company attempted to force her to relocate to Brazil in retaliation for her early anti-Nimbus organizing. Koren later quit in protest and remains active in advocating against the contract. Project Nimbus, despite the dissent, remains in place, in part because of contractual terms put in place by Israel forbidding Google from cutting off service in response to political pressure or boycott campaigns.

Dark Clouds Over Nimbus

Dissent at Google is neither rare nor ineffective. Employee opposition to controversial military contracts has previously pushed the company to drop plans to help with the Pentagon’s drone warfare program and a planned Chinese version of Google Search that would filter out results unwanted by the Chinese government. Nimbus, however, has managed to survive.

In the wake of the October 7 Hamas attacks against Israel and resulting Israeli counteroffensive, now in its second month of airstrikes and a more recent ground invasion, Project Nimbus is again a flashpoint within the company.

With the rank and file disturbed by the company’s role as a defense contractor, Google has attempted to downplay the military nature of the contract.

Mencini, the Google spokesperson, said that anti-Nimbus organizers were “misrepresenting” the contract’s military role.

“This is part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” Mencini said. “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education. Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

Nimbus training documents published by The Intercept last year, however, show the company was pitching its use for the Ministry of Defense. Moreover, the Israeli government itself is open about the military applications of Project Nimbus: A 2023 press release by the Israeli Ministry of Finance specifically names the Israel Defense Forces as a beneficiary, while an overview written by the country’s National Digital Agency describes the contract as “a comprehensive and in-depth solution to the provision of public cloud services to the Government, the defense establishment and other public organizations.”

“If we do not speak out now, we are complicit in what history will remember as a genocide.”

Against this backdrop, Khatami, in coordination with others in the worker-led anti-Nimbus campaign, sent his October 18 note to internal Arab and Middle Eastern affinity groups laying out their argument against the project and asking like-minded colleagues to sign an employee petition.

“Through Project Nimbus, Google is complicit in the mass surveillance and other human rights abuses which Palestinians have been subject to daily for the past 75 years, and which is the root cause of the violence initiated on October 7th,” the letter said. “If we do not speak out now, we are complicit in what history will remember as a genocide.”

On October 30, Khatami received an email from Google’s Employee Relations division informing him that he would soon be questioned by company representatives regarding “a concern about your conduct that has been brought to our attention.”

According to Khatami, in the ensuing phone call, Google HR pressed him about the portion of his email that made a historical connection between the October 7 Hamas attack and the 75 years of Israeli rights abuses that preceded it, claiming some of his co-workers believed he was endorsing violence. Khatami recalled being asked, “Can you see how people are thinking you’re justifying the terrorism on October 7th?”

Khatami said he and his fellow anti-Nimbus organizers were in no way endorsing the violence against Israeli civilians — just as they now oppose the deaths of more than 10,000 Palestinians, according to the latest figures from Gaza’s Ministry of Health. Rather, the Google employees wanted to provide sociopolitical context for Project Nimbus, part of a broader employee-led effort of “demilitarizing our company that was never meant to be militarized.” To point out the relevant background leading to the October 7 attack, he said, is not to approve it.

“We wrote that the root cause of the violence is the occupation,” Khatami explained. “Analysis is not justification.”

Double Standard

Khatami also objects to what he said is a double standard within Google about what speech about the war is tolerated, a source of ongoing turmoil at the company. The day after his original email, a Google employee responded angrily to the email chain: “Accusing Israel of genocide and Google of being complicit is a grave accusation!” This employee, who works at the company’s cloud computing division, itself at the core of Project Nimbus, continued:

To break it down for you, project nimbus contributes to Israel’s security. Therefore, any calls to drop it are meant to weaken Israel’s security. If Israel’s security is weak, then the prospect of more terrorist attacks, like the one we saw on October 7, is high. Terrorist attacks will result in casualties that will affect YOUR Israeli colleagues and their family. Attacks will be retaliated by Israel which will result in casualties that will affect your Palestinian colleagues and their family (because they are used as shields by the terrorists)…bottom line, a secured Israel means less lives lost! Therefore if you have the good intention to preserve human lives then you MUST support project Nimbus!

While Khatami disagrees strongly with the overall argument in the response email, he objected in particular to the co-worker’s claim that Israel is killing Palestinians “because they are used as shields by the terrorists” — a justification of violence far more explicit than the one he was accused of, he said. Khatami questioned whether widespread references to the inviolability of Israeli self-defense by Google employees have provoked treatment from HR similar to what he received after his email about Nimbus.

Internal employee communications viewed by The Intercept show tensions within Google over the Israeli–Palestinian conflict aren’t limited to debates over Project Nimbus. One screenshot viewed by The Intercept shows an Israeli Google employee repeatedly asking Middle Eastern colleagues if they support Hamas, while another shows a Google engineer suggesting Palestinians worried about the welfare of their children should simply stop having kids. Another lamented “friends and family [who] are slaughtered by the Gaza-grown group of bloodthirsty animals.”

According to a recent New York Times report, which found “at least one” instance of “overtly antisemitic” content posted through internal Google channels, “one worker had been fired after writing in an internal company message board that Israelis living near Gaza ‘deserved to be impacted.’”

Another screenshot reviewed by The Intercept, taken from an email group for Israeli Google staff, shows employees discussing a post by a colleague criticizing the Israeli occupation and encouraging donations to a Gaza relief fund.

“During this time we all need to stay strong as a nation and united,” one Google employee replied in the email group. “As if we are not going through enough suffering, we will unfortunately see many emails, comments either internally or on social media that are pro Hamas and clearly anti semitic. report immediately!” Another added: “People like that make me sick. But she is a lost cause.” A third chimed in to say they had internally reported the colleague soliciting donations. A separate post soliciting donations for the same Gaza relief fund was downvoted 139 times on an internal message board, according to another screenshot, while a post stating only “Killing civilians is indefensible” received 51 downvotes.

While Khatami says he was unnerved and disheartened by the HR grilling, he’s still committed to organizing against Project Nimbus.

“It definitely emotionally affected me, it definitely made me significantly more fearful of organizing in this space,” he said. “But I think knowing that people are dying right now and slaughtered in a genocide that’s aided and abetted by my company, remembering that makes the fear go away.”

Israeli Spyware Firm NSO Demands “Urgent” Meeting With Blinken Amid Gaza War Lobbying Effort
https://theintercept.com/2023/11/10/nso-group-israel-gaza-blacklist/ (Nov. 10, 2023)
NSO Group has pushed to be taken off a U.S. blacklist since 2021. Now, citing the threat of Hamas, it’s trying to cozy up to the Americans.

On November 7, NSO Group, the Israeli spyware company infamous for its Pegasus phone-tapping technology, sent an urgent email and letter by UPS to request a meeting with Secretary of State Antony Blinken and officials at the U.S. State Department. 

“I am writing on behalf of NSO Group to urgently request an opportunity to engage with Secretary Blinken and the officials at the State Department regarding the importance of cyber intelligence technology in the wake of the grave security threats posed by the recent Hamas terrorist attacks in Israel and their aftermath,” wrote Timothy Dickinson, a partner at the Los Angeles-based law firm Paul Hastings, on behalf of NSO.

In the last two years, NSO’s reputation has taken a beating amid revelations about its spyware’s role in human rights abuses.

As controversy was erupting over its role in authoritarian governments’ spying, NSO Group was blacklisted by the U.S. Department of Commerce in November 2021, a move the agency said at the time was intended “to put human rights at the center of US foreign policy.” A month after the blacklisting, it was revealed that Pegasus had been used to spy on American diplomats.

NSO’s letter to Blinken — publicly filed as part of Paul Hastings’s obligation under the Foreign Agents Registration Act — is part of the company’s latest attempt to reinvent its image and, most importantly, a bid to reverse the blacklisting. (Neither the State Department nor Paul Hastings responded to requests for comment.)

For NSO, the blacklisting has been an existential threat. The push to reverse it, which included hiring multiple American public relations and law firms, cost NSO $1.5 million in lobbying last year, more than the government of Israel itself spent. The campaign focused heavily on Republican politicians, many of whom are now vocal in their support of Israel and against a ceasefire in the brutal war being waged by the country in the Gaza Strip.

Amid the Israeli war effort, NSO appears more convinced than ever that it is of use to the American government. 

“NSO’s technology is supporting the current global fight against terrorism in any and all forms,” said the letter to Blinken. “These efforts squarely align with the Biden-Harris administration’s repeated messages and actions of support for the Israeli government.” 

NSO is marketing itself as a volunteer in the Israeli war effort, allegedly helping track down missing Israelis and hostages. And at this moment, amid what half a dozen experts have described to The Intercept as NSO’s attempt at “crisis-washing,” some believe that the American government may create space for NSO to come back to the table.

“NSO’s participation in the Israeli government’s efforts to locate citizens in Gaza seems to be an effort by the company to rehabilitate its image in this crisis,” said Adam Shapiro, director of advocacy for Israel–Palestine at Democracy for the Arab World Now, a group founded by the slain journalist Jamal Khashoggi to advocate for human rights in the Middle East. “But alarm bells should be ringing that NSO Group has been recruited in Israel’s war effort.”

Documents obtained by The Intercept through FARA and public records requests illustrate the company’s intense lobbying efforts — especially among hawkish, pro-Israel Republicans.

Working on NSO’s behalf, Pillsbury Winthrop Shaw Pittman, a New York-based law firm, held over half a dozen meetings between March and August with Rep. Pete Sessions, R-Texas, who sits on the House Financial Services Committee as well as Oversight and Reform. One was to “discuss status of Bureau of Industry and Security Communications, U.S. Department of Commerce appeal.” (Pillsbury did not respond to a request for comment.)

“NSO’s participation in the Israeli government’s efforts to locate citizens in Gaza seems to be an effort by the company to rehabilitate its image in this crisis.”

The lobbyists also had three meetings in March and April with Justin Discigil, then chief of staff to the far-right Rep. Dan Crenshaw, R-Texas, who sits on the House Permanent Select Committee on Intelligence. (Neither Sessions nor Crenshaw responded to requests for comment.)

Public records about NSO’s push also offer concrete examples of something the company has been at pains to evade, and that the American government has routinely overlooked: the existing relationship between the Israeli state and the spyware company. 

“NSO’s Pegasus tool is treated in Israel as a defense article subject to regulation by the country’s regulators, which conducts its own assessment of human rights risks in countries across the world,” the letter to Blinken said. 

A previously unreported May 2022 email from Department of Commerce official Elena Love to lobbyists for NSO also draws a connection between the Israeli government and NSO. In her email, Love asked the lobbyists working to undo NSO’s blacklisting for permission to send a list of questions directly to Israeli officials. (The Department of Commerce said there is no change to the status of NSO on the blacklist and declined to comment further. NSO Group and the Israeli government did not respond to requests for comment.)

Currently, in the war effort, the Israeli government is letting NSO sit up front. In an October 19 podcast by the Israeli news outlet Haaretz — podcasts are less heavily censored by the government than written articles — a reporter discusses how NSO has reported for duty, in essence taking on work for the Ministry of Defense.

“What’s really, really important to understand is that these companies,” said Haaretz journalist Omer Benjakob in the podcast, “some of them have already been working with the state of Israel.”

Hiring Lobbyists in D.C.

By selling its spyware to authoritarian governments, NSO has facilitated a variety of human rights abuses: from its use by the United Arab Emirates to spy on Khashoggi, the journalist later killed by Saudi Arabia, to its use, reported just this week, to spy on Indian journalists. According to the research group Forensic Architecture, the use of NSO Group’s products has contributed to over 150 physical attacks against journalists, rights advocates, and other civil society actors, including some of their deaths.

Now the company is mounting an aggressive public relations push to undo the harm to its reputation.

NSO’s recent hiring of two lobbyists, Jeffrey Weiss and Stewart Baker, from the Washington-based white-shoe law firm Steptoe & Johnson, was made public at the end of October in a filing with the House of Representatives. On behalf of NSO, the firm was to address “US national security and export control policy in an international context.”

Baker, former assistant secretary for policy at the Department of Homeland Security and a former National Security Agency general counsel, previously told The Associated Press, before representing NSO, that the blacklisting of the company “certainly isn’t a death penalty and may over time just be really aggravating.” 

Weiss, for his part, had relevant experience to help get NSO off the Department of Commerce blacklist: He was deputy director of policy and strategic planning at the agency from 2013 to 2017. 

Weiss and another Steptoe & Johnson partner, Eric Emerson, had also been hired by the Israeli government a few months earlier, according to previously unreported FARA documents. Weiss registered to provide both services to the Economic and Trade Mission at the Embassy of Israel in July, and then NSO in October. 

Emerson, who has worked at Steptoe for over 30 years specializing in international trade law and policy issues, registered to engage with Natalie Gutman-Chen, Israeli minister of trade and economic affairs. Documents show that Steptoe’s annual budget for this work is $180,000.

Demonstrators in support of Palestine gather at the Israeli Embassy in Washington, D.C., on Oct. 18, 2023.
Photo: Ali Khaligh /Middle East Images/AFP via Getty Images

Steptoe’s description of its work for the Israeli mission is similar to its goals for the NSO contract: to “provide advice on various international trade related matters affecting the State of Israel” which will be used “to develop its position w/re various U.S. policies.” 

It is not illegal to register to lobby for two affiliated clients, and powerful law firms often do so for efficiency’s sake, holding meetings on behalf of both clients at once.

“It is not uncommon to kill two birds with one stone,” said Anna Massoglia, editorial and investigations manager at OpenSecrets, which tracks lobbying money in Washington. “It’s possible NSO got a discount because they already had Israel.” 

“It’s possible NSO got a discount because they already had Israel.”

On October 30, amid the Israeli onslaught against Gaza, Steptoe filed with the Department of Justice its supplemental statement, in which lobbyists are supposed to detail their meetings and outreach. It was left curiously blank, perhaps portending a later amendment to the filing. (“The filing covers what we have been asked to advise on and we can’t comment any further at this time,” Steptoe said in a statement.)

“It’s hard to prove it’s deliberate,” Massoglia said. “But the timing is interesting.” 

Ties to Israeli Government 

A previously unreported email from last year, obtained through a public records request, provides another example of the interweaving relationship between the Israeli government and NSO.

In May 2022, Love, the acting chair of the End-User Review Committee at the Department of Commerce, emailed lobbyists at Steptoe and Pillsbury. Love sent along a list of questions for their client, NSO, about the company’s appeal to be removed from the blacklist.

“We are also requesting permission to provide these questions to the government of Israel,” Love wrote. 

The email, however, had been sent about a year and a half before Steptoe filed FARA registrations for its staff to lobby on behalf of NSO — and it raises questions about adherence to the foreign lobbying law. (Pillsbury was registered under FARA at the time.)

FARA requires lobbyists to register with the Department of Justice when taking on foreign principals — both governments and companies — as clients.

“What has never been a gray area under FARA is if you are communicating directly with the U.S. government on behalf of a foreign principal, that’s a political activity,” said Ben Freeman, director of the democratizing foreign policy program at the Quincy Institute. Of the period when Steptoe was working for NSO but hadn’t registered yet, Freeman said, “By skirting FARA registration, they are really playing with fire.” 

Though FARA cases have increased since 2016, charges brought by the Justice Department remain relatively rare. The statute itself is forgiving, the enforcement mechanisms like warning letters often render failures to register moot, and, with so little case law owing to so few indictments, prosecutors are loath to try their hand at bringing charges. (The Department of Justice did not respond to a request for comment.)

“By skirting FARA registration, they are really playing with fire.”

In a letter sent to the Justice Department in July of last year, Democracy for the Arab World Now called on the government to investigate what, at the time, was described as the firms’ lack of registration as agents for Israel under FARA. “We believe that misrepresentation to be intentional,” the letter said.

None of the four companies hired by NSO said in their registrations that there is any Israeli government control over the spyware group, despite the evidence laid out by Democracy for the Arab World Now of Israeli influence on the company that meets the U.S. definition of government control. This includes the fact that all of NSO’s contracts are determined by the government of Israel, allegedly to serve political interests.

The Department of Justice, however, does not give updates or responses to such referrals. Neither has it published an opinion or issued a penalty. 

“Based on FARA filings, one would be under the impression that NSO was a run of the mill private corporate entity,” said Shapiro, of Democracy for the Arab World Now. “But given its role in spyware, understanding the government’s control is really important.”

The post Israeli Spyware Firm NSO Demands “Urgent” Meeting With Blinken Amid Gaza War Lobbying Effort appeared first on The Intercept.

]]>
https://theintercept.com/2023/11/10/nso-group-israel-gaza-blacklist/feed/ 0
<![CDATA[Cruise Knew Its Self-Driving Cars Had Problems Recognizing Children — and Kept Them on the Streets]]> https://theintercept.com/2023/11/06/cruise-self-driving-cars-children/ https://theintercept.com/2023/11/06/cruise-self-driving-cars-children/#respond Mon, 06 Nov 2023 22:53:54 +0000 https://theintercept.com/?p=449844 According to internal materials reviewed by The Intercept, Cruise cars were also in danger of driving into holes in the road.

The post Cruise Knew Its Self-Driving Cars Had Problems Recognizing Children — and Kept Them on the Streets appeared first on The Intercept.

]]>
In Phoenix, Austin, Houston, Dallas, Miami, and San Francisco, hundreds of so-called autonomous vehicles, or AVs, operated by General Motors’ self-driving car division, Cruise, have for years ferried passengers to their destinations on busy city roads. Cruise’s app-hailed robot rides create a detailed picture of their surroundings through a combination of sophisticated sensors, and navigate through roadways and around obstacles with machine learning software intended to detect and avoid hazards.

AV companies hope these driverless vehicles will replace not just Uber, but also human driving as we know it. The underlying technology, however, is still half-baked and error-prone, giving rise to widespread criticisms that companies like Cruise are essentially running beta tests on public streets.

Despite the popular skepticism, Cruise insists its robots are profoundly safer than what they’re aiming to replace: cars driven by people. In an interview last month, Cruise CEO Kyle Vogt downplayed safety concerns: “Anything that we do differently than humans is being sensationalized.”

The concerns over Cruise cars came to a head this month. On October 17, the National Highway Traffic Safety Administration announced it was investigating Cruise’s nearly 600-vehicle fleet because of risks posed to other cars and pedestrians. A week later, in San Francisco, where driverless Cruise cars have shuttled passengers since 2021, the California Department of Motor Vehicles announced it was suspending the company’s driverless operations. Following a string of highly public malfunctions and accidents, the immediate cause of the order, the DMV said, was that Cruise withheld footage from a recent incident in which one of its vehicles hit a pedestrian, dragging her 20 feet down the road.

In an internal address on Slack to his employees about the suspension, Vogt stuck to his message: “Safety is at the core of everything we do here at Cruise.” Days later, the company said it would voluntarily pause fully driverless rides in Phoenix and Austin, meaning its fleet will be operating only with human supervision: a flesh-and-blood backup to the artificial intelligence.

Even before its public relations crisis of recent weeks, though, previously unreported internal materials such as chat logs show Cruise has known about two pressing safety issues: Driverless Cruise cars struggled to detect large holes in the road and had so much trouble recognizing children in certain scenarios that they risked hitting them. Yet, until it came under fire this month, Cruise kept its fleet of driverless taxis active, maintaining its regular reassurances of superhuman safety.

“This strikes me as deeply irresponsible at the management level to be authorizing and pursuing deployment or driverless testing, and to be publicly representing that the systems are reasonably safe,” said Bryant Walker Smith, a University of South Carolina law professor and engineer who studies automated driving.

In a statement, a spokesperson for Cruise reiterated the company’s position that a future of autonomous cars will reduce collisions and road deaths. “Our driverless operations have always performed higher than a human benchmark, and we constantly evaluate and mitigate new risks to continuously improve,” said Erik Moser, Cruise’s director of communications. “We have the lowest risk tolerance for contact with children and treat them with the highest safety priority. No vehicle — human operated or autonomous — will have zero risk of collision.”

“These are not self-driving cars. These are cars driven by their companies.”

Though AV companies enjoy a reputation in Silicon Valley as bearers of a techno-optimist transit utopia — a world of intelligent cars that never drive drunk, tired, or distracted — the internal materials reviewed by The Intercept reveal an underlying tension between potentially life-and-death engineering problems and the effort to deliver the future as quickly as possible. With its parent company General Motors, which purchased Cruise in 2016 for $1.1 billion, hemorrhaging money on the venture, any setback for the company’s robo-safety regimen could threaten its business.

Instead of seeing public accidents and internal concerns as yellow flags, Cruise sped ahead with its business plan. Before its permitting crisis in California, the company was, according to Bloomberg, exploring expansion to 11 new cities.

“These are not self-driving cars,” said Smith. “These are cars driven by their companies.”

Kyle Vogt — co-founder, president, chief executive officer, and chief technology officer of Cruise — holds an articulating radar as he speaks during a reveal event in San Francisco on Jan. 21, 2020.
Photo: David Paul Morris/Bloomberg via Getty Images

“May Not Exercise Additional Care Around Children”

Several months ago, Vogt became choked up when talking about a 4-year-old girl who had recently been killed in San Francisco. A 71-year-old woman had taken what local residents described as a low-visibility right turn, striking a stroller and killing the child. “It barely made the news,” Vogt told the New York Times. “Sorry. I get emotional.” Vogt offered that self-driving cars would make for safer streets.

Behind the scenes, meanwhile, Cruise was grappling with its own safety issues around hitting kids with cars. One of the problems addressed in the internal, previously unreported safety assessment materials is the failure of Cruise’s autonomous vehicles to, under certain conditions, effectively detect children so that they can exercise extra caution. “Cruise AVs may not exercise additional care around children,” reads one internal safety assessment. The company’s robotic cars, it says, still “need the ability to distinguish children from adults so we can display additional caution around children.”

In particular, the materials say, Cruise worried its vehicles might drive too fast at crosswalks or near a child who could move abruptly into the street. The materials also say Cruise lacks data around kid-centric scenarios, like children suddenly separating from their accompanying adult, falling down, riding bicycles, or wearing costumes.

The materials note results from simulated tests in which a Cruise vehicle is in the vicinity of a small child. “Based on the simulation results, we can’t rule out that a fully autonomous vehicle might have struck the child,” reads one assessment. In another test drive, a Cruise vehicle successfully detected a toddler-sized dummy but still struck it with its side mirror at 28 miles per hour.

The internal materials attribute the robot cars’ inability to reliably recognize children under certain conditions to inadequate software and testing. “We have low exposure to small VRUs” — Vulnerable Road Users, a reference to children — “so very few events to estimate risk from,” the materials say. Another section concedes Cruise vehicles’ “lack of a high-precision Small VRU classifier,” or machine learning software that would automatically detect child-shaped objects around the car and maneuver accordingly. The materials say Cruise, in an attempt to compensate for machine learning shortcomings, was relying on human workers behind the scenes to manually identify children encountered by AVs where its software couldn’t do so automatically.
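The setup the materials describe, a confidence-scored classifier backed by remote human review, is a common pattern in autonomous-vehicle stacks. Below is a minimal sketch of that routing logic; the names, thresholds, and structure are hypothetical illustrations, not Cruise’s actual code.

```python
# Hypothetical sketch of confidence-gated detection with a human fallback,
# the pattern the internal materials describe. All names and thresholds
# are invented for illustration; none of this is Cruise's actual code.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "pedestrian"
    confidence: float  # classifier confidence in [0, 1]

def route_vru(detection: Detection, small_vru_threshold: float = 0.9) -> str:
    """Decide onboard when the classifier is confident; escalate ambiguous
    pedestrian detections to a remote human for manual identification."""
    if detection.label != "pedestrian":
        return "no special handling"
    if detection.confidence >= small_vru_threshold:
        return "onboard: classify as small VRU, apply extra caution"
    # Low confidence: per the materials, human workers behind the scenes
    # manually identified children the software could not.
    return "escalate: remote human review"

print(route_vru(Detection("pedestrian", 0.95)))  # handled onboard
print(route_vru(Detection("pedestrian", 0.40)))  # sent to a human
```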

In its statement, Cruise said, “It is inaccurate to say that our AVs were not detecting or exercising appropriate caution around pedestrian children” — a claim undermined by internal Cruise materials reviewed by The Intercept and the company’s statement itself. In its response to The Intercept’s request for comment, Cruise went on to concede that, this past summer during simulation testing, it discovered that its vehicles sometimes temporarily lost track of children on the side of the road. The statement said the problem was fixed and only encountered during testing, not on public streets, but Cruise did not say how long the issue lasted. Cruise did not specify what changes it had implemented to mitigate the risks.

Despite Cruise’s claim that its cars are designed to identify children to treat them as special hazards, spokesperson Navideh Forghani said that the company’s driving software hadn’t failed to detect children but merely failed to classify them as children.

Moser, the Cruise spokesperson, said the company’s cars treat children as a special category of pedestrians because they can behave unpredictably. “Before we deployed any driverless vehicles on the road, we conducted rigorous testing in a simulated and closed-course environment against available industry benchmarks,” he said. “These tests showed our vehicles exceed the human benchmark with regard to the critical collision avoidance scenarios involving children.”

“Based on our latest assessment this summer,” Moser continued, “we determined from observed performance on-road, the risk of the potential collision with a child could occur once every 300 million miles at fleet driving, which we have since improved upon. There have been no on-road collisions with children.”

Do you have a tip to share about safety issues at Cruise? The Intercept welcomes whistleblowers. Use a personal device to contact Sam Biddle on Signal at +1 (978) 261-7389, by email at sam.biddle@theintercept.com, or by SecureDrop.

Cruise has known its cars couldn’t detect holes, including large construction pits with workers inside, for well over a year, according to the safety materials reviewed by The Intercept. Internal Cruise assessments concluded this flaw constituted a major risk to the company’s operations. Cruise determined that at its current, relatively minuscule fleet size, one of its AVs would drive into an unoccupied open pit roughly once a year, and into a construction pit with people inside it about every four years. Without fixes to the problems, those rates would presumably increase as more AVs were put on the streets.
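The risk estimate here is simple exposure arithmetic: if a failure occurs at a roughly fixed rate per mile, the expected incident frequency grows linearly with fleet mileage. A back-of-the-envelope sketch, with a hypothetical annual mileage figure standing in for Cruise’s real numbers:

```python
# Back-of-the-envelope exposure arithmetic: expected incidents per year
# scale linearly with miles driven. The mileage figure is hypothetical.
def incidents_per_year(fleet_miles_per_year: float,
                       miles_per_incident: float) -> float:
    return fleet_miles_per_year / miles_per_incident

# If today's fleet drives M miles a year and hits an open pit about once
# a year, the implied rate is one incident per M miles.
M = 3_000_000  # hypothetical current annual fleet mileage

print(incidents_per_year(M, M))       # ~1 pit entry per year at today's size
print(incidents_per_year(10 * M, M))  # ~10 per year if the fleet grows tenfold
```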

It appears this concern wasn’t hypothetical: Video footage captured from a Cruise vehicle reviewed by The Intercept shows one self-driving car, operating in an unnamed city, approaching a construction pit with multiple workers inside. Though the site is surrounded by orange cones, the Cruise vehicle drives directly toward it, coming to an abrupt halt. It can’t be discerned from the footage whether the car enters the pit or stops at its edge, but the vehicle appears to be only inches away from several workers, one of whom attempts to stop the car by waving a “SLOW” sign across its driverless windshield.

“Enhancing our AV’s ability to detect potential hazards around construction zones has been an area of focus, and over the last several years we have conducted extensive human-supervised testing and simulations resulting in continued improvements,” Moser said. “These include enhanced cone detection, full avoidance of construction zones with digging or other complex operations, and immediate enablement of the AV’s Remote Assistance support/supervision by human observers.”

Known Hazards

Cruise’s undisclosed struggles with perceiving and navigating the outside world illustrate the perils of leaning heavily on machine learning to safely transport humans. “At Cruise, you can’t have a company without AI,” the company’s artificial intelligence chief told Insider in 2021. Cruise regularly touts its AI prowess in the tech media, describing it as central to preempting road hazards. “We take a machine-learning-first approach to prediction,” a Cruise engineer wrote in 2020.

The fact that Cruise is even cataloguing and assessing its safety risks is a positive sign, said Phil Koopman, an engineering professor at Carnegie Mellon, emphasizing that the safety issues that worried Cruise internally have been known to the field of autonomous robotics for decades. Koopman, who has a long career working on AV safety, faulted the data-driven culture of machine learning that leads tech companies to contemplate hazards only after they’ve encountered them, rather than before. The fact that robots have difficulty detecting “negative obstacles” — AV jargon for a hole — is nothing new.

“Safety is about the bad day, not the good day, and it only takes one bad day.”

“They should have had that hazard on their hazard list from day one,” Koopman said. “If you were only training it how to handle things you’ve already seen, there’s an infinite supply of things that you won’t see until it happens to your car. And so machine learning is fundamentally poorly suited to safety for this reason.”

The safety materials from Cruise raise an uncomfortable question for the company about whether robot cars should be on the road if it’s known they might drive into a hole or a child.

“If you can’t see kids, it’s very hard for you to accept that not being high risk — no matter how infrequent you think it’s going to happen,” Koopman explained. “Because history shows us people almost always underestimate the risk of high severity because they’re too optimistic. Safety is about the bad day, not the good day, and it only takes one bad day.”

Koopman said the answer rests largely on what steps, if any, Cruise has taken to mitigate that risk. According to one safety memo, Cruise began operating fewer driverless cars during daytime hours to avoid encountering children, a move it deemed effective at mitigating the overall risk without fixing the underlying technical problem. In August, Cruise announced the cuts to daytime ride operations in San Francisco but made no mention of its attempt to lower risk to local children. (“Risk mitigation measures incorporate more than AV behavior, and include operational measures like alternative routing and avoidance areas, daytime or nighttime deployment and fleet reductions among other solutions,” said Moser. “Materials viewed by The Intercept may not reflect the full scope of our evaluation and mitigation measures for a specific situation.”)

A quick fix like shifting hours of operation presents an engineering paradox: How can the company be so sure it’s avoiding a thing it concedes it can’t always see? “You kind of can’t,” said Koopman, “and that may be a Catch-22, but they’re the ones who decided to deploy in San Francisco.”

“The reason you remove safety drivers is for publicity and optics and investor confidence.”

Precautions like reduced daytime operations will only lower the chance that a Cruise AV will have a dangerous encounter with a child, not eliminate that possibility. In a large American city, where it’s next to impossible to run a taxi business that will never need to drive anywhere a child might possibly appear, Koopman argues Cruise should have kept safety drivers in place while it knew this flaw persisted. “The reason you remove safety drivers is for publicity and optics and investor confidence,” he told The Intercept.

Koopman also noted that there’s not always linear progress in fixing safety issues. In the course of trying to fine-tune its navigation, Cruise’s simulated tests showed its AV software missed children at an increased rate, despite attempts to fix the issues, according to materials reviewed by The Intercept.

The two larger issues of kids and holes weren’t the only robot flaws potentially imperiling nearby humans. According to other internal materials, some vehicles in the company’s fleet suddenly began making unprotected left turns at intersections, something Cruise cars are supposed to be forbidden from attempting. The potentially dangerous maneuvers were chalked up to a botched software update.

The Cruise Origin, a self-driving vehicle with no steering wheel or pedals, is displayed at Honda’s booth during the press day of the Japan Mobility Show in Tokyo on Oct. 25, 2023.
Photo: Kazuhiro Nogi/AFP via Getty Images

The Future of Road Safety?

Part of the self-driving industry’s techno-libertarian promise to society — and a large part of how it justifies beta-testing its robots on public roads — is the claim that someday, eventually, streets dominated by robot drivers will be safer than their flesh-based predecessors.

Cruise cited a RAND Corporation study to make its case. “It projected deploying AVs that are on average ten percent safer than the average human driver could prevent 600,000 fatalities in the United States over 35 years,” wrote Vice President for Safety Louise Zhang in a company blog post. “Based on our first million driverless miles of operation, it appears we are on track to far exceed this projected safety benefit.”

During General Motors’ quarterly earnings call — the same day California suspended Cruise’s operating permit — CEO Mary Barra told financial analysts that Cruise “is safer than a human driver and is constantly improving and getting better.”

In the 2022 “Cruise Safety Report,” the company outlined a deeply unflattering comparison of fallible human drivers to hyper-intelligent robot cars. The report pointed out that driver distraction was responsible for more than 3,000 traffic fatalities in 2020, whereas “Cruise AVs cannot be distracted.” Crucially, the report claimed, a “Cruise AV only operates in conditions that it is designed to handle.”

“It’s I think especially egregious to be making the argument that Cruise’s safety record is better than a human driver.”

When it comes to hitting kids, however, internal materials indicate the company’s machines were struggling to match the safety performance of even an average human: Cruise’s goal, at the time, was merely for its robots to drive around children as safely as an average Uber driver — a goal the internal materials note it was failing to meet.

“It’s I think especially egregious to be making the argument that Cruise’s safety record is better than a human driver,” said Smith, the University of South Carolina law professor. “It’s pretty striking that there’s a memo that says we could hit more kids than an average rideshare driver, and the apparent response of management is, keep going.”

In a statement to The Intercept, Cruise confirmed its goal of performing better than ride-hail drivers. “Cruise always strives to go beyond existing safety benchmarks, continuing to raise our own internal standards while we collaborate with regulators to define industry standards,” said Moser. “Our safety approach combines a focus on better-than-human behavior in collision imminent situations, and expands to predictions and behaviors to proactively avoid scenarios with risk of collision.”

Cruise and its competitors have worked hard to keep going despite safety concerns, public and nonpublic. Before the California Public Utilities Commission voted to allow Cruise to offer driverless rides in San Francisco, where Cruise is headquartered, the city’s public safety and traffic agencies lobbied for a slower, more cautious approach to AVs. The commission was unmoved by the agencies’ worries. “While we do not yet have the data to judge AVs against the standard human drivers are setting, I do believe in the potential of this technology to increase safety on the roadway,” said commissioner John Reynolds, who previously worked as a lawyer for Cruise.

Had there always been human safety drivers accompanying all robot rides — which California regulators let Cruise ditch in 2021 — Smith said there would be less cause for alarm. A human behind the wheel could, for example, intervene to quickly steer a Cruise AV out of the path of a child or construction crew that the robot failed to detect. Though the company has put them back in place for now, dispensing entirely with human backups is ultimately crucial to Cruise’s long-term business, part of its pitch to the public that steering wheels will become a relic. With the wheel still there and a human behind it, Cruise would struggle to tout its technology as groundbreaking.

“We’re not in a world of testing with in-vehicle safety drivers, we’re in a world of testing through deployment without this level of backup and with a whole lot of public decisions and claims that are in pretty stark contrast to this,” Smith explained. “Any time that you’re faced with imposing a risk that is greater than would otherwise exist and you’re opting not to provide a human safety driver, that strikes me as pretty indefensible.”

The post Cruise Knew Its Self-Driving Cars Had Problems Recognizing Children — and Kept Them on the Streets appeared first on The Intercept.

]]>
https://theintercept.com/2023/11/06/cruise-self-driving-cars-children/feed/ 0
<![CDATA[Instagram Hid a Comment. It Was Just Three Palestinian Flag Emojis.]]> https://theintercept.com/2023/10/28/instagram-palestinian-flag-emoji/ https://theintercept.com/2023/10/28/instagram-palestinian-flag-emoji/#respond Sat, 28 Oct 2023 19:01:37 +0000 https://theintercept.com/?p=449406 Meta acknowledged that Instagram was burying some flag emoji comments in “offensive” contexts.

The post Instagram Hid a Comment. It Was Just Three Palestinian Flag Emojis. appeared first on The Intercept.

]]>
As Israel imposed an internet blackout in Gaza on Friday, social media users posting about the grim conditions have contended with erratic and often unexplained censorship of content related to Palestine on Instagram and Facebook.

Since Israel launched retaliatory airstrikes in Gaza after the October 7 Hamas attack, Facebook and Instagram users have reported widespread deletions of their content, translations inserting the word “terrorist” into Palestinian Instagram profiles, and suppressed hashtags. Instagram comments containing the Palestinian flag emoji have also been hidden, according to 7amleh, a Palestinian digital rights group that formally collaborates with Meta, which owns Instagram and Facebook, on regional speech issues.

Numerous users have reported to 7amleh that their comments were moved to the bottom of the comments section and require a click to display. Many of the remarks have something in common: “It often seemed to coincide with having a Palestinian flag in the comment,” 7amleh’s U.S. national organizer Eric Sype told The Intercept.

Users report that Instagram had flagged and hidden comments containing the emoji as “potentially offensive,” TechCrunch first reported last week. Meta has routinely attributed similar instances of alleged censorship to technical glitches. Meta spokesperson Andy Stone confirmed to The Intercept that the company has been hiding comments that contain the Palestinian flag emoji in certain “offensive” contexts that violate the company’s rules. He added that Meta has not created any new policies specific to flag emojis.

“The notion of finding a flag offensive is deeply distressing for Palestinians,” Mona Shtaya, a nonresident fellow at the Tahrir Institute for Middle East Policy who follows Meta’s policymaking on speech, told The Intercept.

“The notion of finding a flag offensive is deeply distressing for Palestinians.”

Asked about the contexts in which Meta hides the flag, Stone pointed to the Dangerous Organizations and Individuals policy, which designates Hamas as a terrorist organization, and cited a section of the community standards rulebook that prohibits any content “praising, celebrating or mocking anyone’s death.” He said Meta does not have different standards for enforcing its rules for the Palestinian flag emoji.

It remains unclear, however, precisely how Meta determines whether the use of the flag emoji is offensive enough to suppress. The Intercept reviewed several hidden comments containing the Palestinian flag emoji that had no reference to Hamas or any other banned group. The Palestinian flag itself has no formal association with Hamas and predates the militant group by decades.

Some of the hidden comments reviewed by The Intercept only contained emojis and no other text. In one, a user commented on an Instagram video of a pro-Palestinian demonstration in Jordan with green, white, and black heart emojis corresponding to the colors of the Palestinian flag, along with emojis of the Moroccan and Palestinian flags. In another, a user posted just three Palestinian flag emojis. Another screenshot seen by The Intercept showed two hidden comments consisting only of the hashtags #Gaza, #gazaunderattack, #freepalestine, and #ceasefirenow.

“Throughout our long history, we’ve endured moments where our right to display the Palestinian flag has been denied by Israeli authorities. Decades ago, Palestinian artists Nabil Anani and Suleiman Mansour ingeniously used a watermelon as a symbol of our flag,” Shtaya said. “When Meta engages in such practices, it echoes the oppressive measures imposed on Palestinians.”

Faulty Content Moderation

Instagram and Facebook users have taken to other social media platforms to report other instances of censorship. On X, formerly known as Twitter, one user posted that Facebook blocked a screenshot of a popular Palestinian Instagram account he tried to share with a friend via private message. The message was flagged as containing nonconsensual sexual images, and his account was suspended.

On Bluesky, Facebook and Instagram users reported that attempts to share national security reporter Spencer Ackerman’s recent article criticizing President Joe Biden’s support of Israel were blocked and flagged as cybersecurity risks.

On Friday, the news site Mondoweiss tweeted a screenshot of an Instagram video about Israeli arrests of Palestinians in the West Bank that was removed because it violated the dangerous organizations policy.

Meta’s increasing reliance on automated, software-based content moderation may prevent people from having to sort through extremely disturbing and potentially traumatizing images. The technology, however, relies on opaque, unaccountable algorithms that introduce the potential to misfire, censoring content without explanation. The issue appears to extend to posts related to the Israel–Palestine conflict.

An independent audit commissioned by Meta last year determined that the company’s moderation practices amounted to a violation of Palestinian users’ human rights. The audit also concluded that the Dangerous Organizations and Individuals policy — which speech advocates have criticized for its opacity and overrepresentation of Middle Easterners, Muslims, and South Asians — was “more likely to impact Palestinian and Arabic-speaking users, both based upon Meta’s interpretation of legal obligations, and in error.”

Last week, the Wall Street Journal reported that Meta recently dialed down the level of confidence its automated systems require before suppressing “hostile speech” to 25 percent for the Palestinian market, a significant decrease from the standard threshold of 80 percent.
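Mechanically, a change like this is a single knob in a score-threshold system: a post is suppressed when a classifier’s confidence clears the configured bar, so dropping the bar from 80 percent to 25 percent sweeps in far more content. A minimal sketch of that logic, illustrative only, since Meta’s actual systems are not public:

```python
# Minimal sketch of threshold-gated suppression. Lowering the confidence
# required to act makes the filter far more aggressive. Illustrative only.
def should_suppress(hostility_score: float, threshold: float) -> bool:
    """Suppress a post when the classifier's confidence that it contains
    'hostile speech' meets or exceeds the configured threshold."""
    return hostility_score >= threshold

scores = [0.10, 0.30, 0.55, 0.85]  # hypothetical classifier outputs
print(sum(should_suppress(s, 0.80) for s in scores))  # 1 post suppressed
print(sum(should_suppress(s, 0.25) for s in scores))  # 3 posts suppressed
```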

The audit also faulted Meta for implementing a software scanning tool to detect violent or racist incitement in Arabic, but not for posts in Hebrew. “Arabic classifiers are likely less accurate for Palestinian Arabic than other dialects … due to lack of linguistic and cultural competence,” the report found.

“Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew.”

Despite Meta’s claim that the company developed a speech classifier for Hebrew in response to the audit, hostile speech and violent incitement in Hebrew are rampant on Instagram and Facebook, according to 7amleh.

“Based on our monitoring and documentation, it seems to be very ineffective,” 7amleh executive director and co-founder Nadim Nashif said of the Hebrew classifier. “Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew, that clearly violate Meta’s policies, but are still on the platforms.”

An Instagram search for a Hebrew-language hashtag roughly meaning “erase Gaza” produced dozens of results at the time of publication. Meta could not be immediately reached for comment on the accuracy of its Hebrew speech classifier.

The Wall Street Journal shed light on why hostile speech in Hebrew still appears on Instagram. “Earlier this month,” the paper reported, “the company internally acknowledged that it hadn’t been using its Hebrew hostile speech classifier on Instagram comments because it didn’t have enough data for the system to function adequately.”

Correction: October 30, 2023
Due to an editing error, Meta’s statement that there are no company policies specific to the Palestinian flag emoji was removed from the story. It has been restored.

The post Instagram Hid a Comment. It Was Just Three Palestinian Flag Emojis. appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/28/instagram-palestinian-flag-emoji/feed/ 0
<![CDATA[One Year After Elon Musk Bought Twitter, His Hilarious Nightmare Continues]]> https://theintercept.com/2023/10/27/elon-musk-twitter-purchase/ https://theintercept.com/2023/10/27/elon-musk-twitter-purchase/#respond Fri, 27 Oct 2023 10:00:00 +0000 https://theintercept.com/?p=449205 I underestimated Musk’s lust for tormenting himself, and us.

The post One Year After Elon Musk Bought Twitter, His Hilarious Nightmare Continues appeared first on The Intercept.

]]>
Elon Musk speaks to members of the media following the Senate AI Insight Forum on Capitol Hill in Washington, D.C., on Sept. 13, 2023.
Photo: Al Drago/Bloomberg via Getty Images

After Elon Musk finalized his purchase of Twitter on October 27, 2022, I wrote an article in which I warned, “We need to take seriously the possibility that this will end up being one of the funniest things that’s ever happened.”

Today, I have to issue an apology: I was wrong. Musk’s ownership of Twitter may well be — at least for people who manage to enjoy catastrophic human folly — the funniest thing that’s ever happened. 

Let’s take a look back and see how I was so mistaken.

Musk began his tenure as Twitter’s owner by posting this message to the company’s advertisers, in which he said, “Twitter aspires to be the most respected advertising platform in the world that strengthens your brand and grows your enterprise. … Twitter obviously cannot be a free-for-all hellscape, where anything can be said with no consequences! In addition to adhering to the laws of the land, our platform must be warm and welcoming to all.”

Musk had to say this for obvious reasons: 90 percent of Twitter’s revenues came from ads, and corporate America gets nervous about its ads appearing in an environment that’s completely unpredictable. 

I assumed that Musk would make a serious effort here. But this was based on my belief that, while he might be a deeply sincere ultra-right-wing crank, he surely had the level of self-control possessed by a 6-year-old. He does not. Big corporations now comprehend this and are understandably anxious about advertising with a company run by a man who, at any moment, may see user @JGoebbels1488 posting excerpts from “The Protocols of the Elders of Zion” and reply “concerning!”

The consequences of this have been what you’d expect. The marketing consultancy Ebiquity represents 70 of the 100 companies that spend the most on ads, including Google and General Motors. Before Musk’s takeover, 31 of their big clients bought space on Twitter. Last month, just two did. Ebiquity’s chief strategy officer told Business Insider that “this is a drop we have not seen before for any major advertising platform.” 

This is why Twitter users now largely see ads from micro-entrepreneurs who are, say, selling 1/100th scale papier-mâché models of the Eiffel Tower. The good news for Twitter is that such companies don’t worry much about brand safety. But the bad news is that their annual advertising budget is $25. Hence, Twitter’s advertising revenue in the U.S. is apparently down 60 percent year over year.

I also never imagined it possible that Musk would rename Twitter — which had become an incredibly well-known brand — to “X” just because he’s been obsessed with the idea of a company with that name since he was a kid. It’s as though he bought Coca-Cola and changed its name to that of his beloved childhood pet tortoise Zoinks. The people who try to measure this kind of thing claim that this has destroyed between $4 billion and $20 billion of Twitter’s value. (As you see in this article, I refuse to refer to Twitter as X just out of pure orneriness.)

Another of my mistaken beliefs was that Musk understood the basic facts about Twitter. The numbers have gone down somewhat since Musk’s purchase of the company, but right now, about 500 million people log on to Twitter at least once a month. Perhaps 120 million check it out daily; these average users spend about 15 minutes on it. A tenth of these numbers — that is, about 12 million people — are heavy users, who account for 70 percent of all the time spent by anyone on the app.

Musk is one of these heavy users. He adores Twitter, as do some other troubled souls. But this led him to wildly overestimate its popularity among normal humans. A company with 50 million fanatically devoted users could possibly survive a collapse in ad revenue by enticing them to pay a subscription fee. But Twitter does not have such users and now never will, given Musk’s relentless antagonizing of the largely progressive Twitterati. 

So how much is Twitter worth today? When Musk became involved with the company in the first months of 2022, its market capitalization was about $28 billion. He then offered to pay $44 billion for it, which was so much more than the company was worth that its executives had to accept the offer or they would have been sued by their shareholders. Now that the company’s no longer publicly traded — and so its basic financials don’t have to be disclosed — it’s more difficult to know what’s going on. However, Fidelity Investments, a financial services company, holds a stake in Twitter and has marked down its valuation of this stake by about two-thirds since Musk bought it. This implies that Twitter is now worth around $15 billion.

The significance of this is that Musk and his co-investors only put up $31 billion or so of the $44 billion purchase price. The remaining $13 billion was borrowed by Twitter at high interest rates from Wall Street. In other words, Musk and company are perilously close to having lost their entire $31 billion.
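The arithmetic behind “perilously close” follows from the capital structure: lenders stand ahead of shareholders, so a markdown in the company’s overall value comes out of the investors’ $31 billion first. A rough sketch, assuming Fidelity’s roughly two-thirds markdown applies to the company as a whole:

```python
# Rough capital-structure arithmetic using the figures in this article.
# Assumes the ~two-thirds markdown applies to the whole company and that
# debt holders are repaid before equity holders see anything.
purchase_price = 44e9  # what Musk and co-investors agreed to pay
equity_in = 31e9       # cash they put up themselves
debt = 13e9            # borrowed by Twitter from Wall Street

company_value_now = purchase_price / 3            # ~$14.7B, "around $15 billion"
equity_value_now = max(company_value_now - debt, 0.0)

print(f"Company worth ~${company_value_now / 1e9:.1f}B; "
      f"~${equity_value_now / 1e9:.1f}B of the original $31B equity remains.")
```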

In the end, I did not understand Musk’s determination to torment himself by forcing his entire existence into an extremely painful Procrustean bed. The results have been bleak and awful for Twitter and the world, but not just bleak and awful: They have also been hilarious. Anyone who likes to laugh about human vanity and hubris has to appreciate his commitment to the bit.

The post One Year After Elon Musk Bought Twitter, His Hilarious Nightmare Continues appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/27/elon-musk-twitter-purchase/feed/ 0
<![CDATA[Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe]]> https://theintercept.com/2023/10/26/cellphone-roaming-location-tracking-surveillance/ https://theintercept.com/2023/10/26/cellphone-roaming-location-tracking-surveillance/#respond Thu, 26 Oct 2023 18:00:00 +0000 https://theintercept.com/?p=448997 By focusing on the potential dangers of Chinese spy tech, we’ve ignored how roaming itself creates massive vulnerabilities, a new Citizen Lab report says.

The post Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe appeared first on The Intercept.

]]>
The very obscure, archaic technologies that make cellphone roaming possible also make it possible to track phone owners across the world, according to a new investigation by the University of Toronto’s Citizen Lab. The roaming tech is riddled with security oversights that make it a ripe target for those who might want to trace the locations of phone users.

As the report explains, the flexibility that made cellphones so popular in the first place is largely to blame for their near-inescapable vulnerability to unwanted location tracking: When you move away from a cellular tower owned by one company to one owned by another, your connection is handed off seamlessly, preventing any interruption to your phone call or streaming video. To accomplish this handoff, the cellular networks involved need to relay messages about who — and, crucially, precisely where — you are.

“Notably, the methods available to law enforcement and intelligence services are similar to those used by the unlawful actors and enable them to obtain individuals’ geolocation information.”

While most of these network-hopping messages are sent to facilitate legitimate customer roaming, the very same system can be easily manipulated to trick a network into divulging your location to governments, fraudsters, or private sector snoops.

“Foreign intelligence and security services, as well as private intelligence firms, often attempt to obtain location information, as do domestic state actors such as law enforcement,” states the report from Citizen Lab, which researches the internet and tech from the Munk School of Global Affairs and Public Policy at the University of Toronto. “Notably, the methods available to law enforcement and intelligence services are similar to those used by the unlawful actors and enable them to obtain individuals’ geolocation information with high degrees of secrecy.”

The sheer complexity required to allow phones to easily hop from one network to another creates a host of opportunities for intelligence snoops and hackers to poke around for weak spots, Citizen Lab says. Today, there are simply so many companies involved in the cellular ecosystem that opportunities abound for bad actors.

Citizen Lab highlights the IP Exchange, or IPX, a network that helps cellular companies swap data about their customers. “The IPX is used by over 750 mobile networks spanning 195 countries around the world,” the report explains. “There are a variety of companies with connections to the IPX which may be willing to be explicitly complicit with, or turn a blind eye to, surveillance actors taking advantage of networking vulnerabilities and one-to-many interconnection points to facilitate geolocation tracking.”

This network, however, is even more promiscuous than those numbers suggest, as telecom companies can privately sell and resell access to the IPX — “creating further opportunities for a surveillance actor to use an IPX connection while concealing its identity through a number of leases and subleases.” All of this, of course, remains invisible and inscrutable to the person holding the phone.
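The tracking technique the report describes rides on the shape of legitimate roaming signaling: one message resolves a phone number to the subscriber identity and the network node currently serving it, and a follow-up asks that node where the handset is. Below is a schematic sketch of that flow; the function names and return values are illustrative stand-ins, since real attacks travel over legacy telecom signaling links (for instance, via a leased IPX connection), not a clean programming interface.

```python
# Schematic sketch of the two-step location lookup that roaming signaling
# makes possible. Everything here is an illustrative stand-in: real queries
# travel over legacy telecom signaling links, often via leased IPX access.
def resolve_subscriber(phone_number: str) -> dict:
    """Step 1: ask the home network which node currently serves this phone.
    The legitimate purpose is routing an incoming call or text to a roamer."""
    return {"subscriber_id": "<redacted-imsi>",
            "serving_node": "visited-network.example"}

def query_location(serving_node: str, subscriber_id: str) -> str:
    """Step 2: ask the serving network for the handset's current cell area.
    The same message that keeps roaming seamless also reveals location."""
    return "<cell-area-for-" + subscriber_id + ">"

routing = resolve_subscriber("+15555550123")  # hypothetical target number
print(query_location(routing["serving_node"], routing["subscriber_id"]))
```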

Citizen Lab was able to document several efforts to exploit this system for surveillance purposes. In many cases, cellular roaming allows for turnkey spying across vast distances: In Vietnam, researchers identified a seven-month location surveillance campaign using the network of the state-owned GTel Mobile to track the movements of African cellular customers. “Given its ownership by the Ministry of Public Security the targeting was either undertaken with the Ministry’s awareness or permission, or was undertaken in spite of the telecommunications operator being owned by the state,” the report concludes.

African telecoms seem to be a particular hotbed of roaming-based location tracking. Gary Miller, a mobile security researcher with Citizen Lab who co-authored the report, told The Intercept that, so far this year, he’d tracked over 11 million geolocation attacks originating from just two telecoms in Chad and the Democratic Republic of the Congo alone.

In another case, Citizen Lab details a “likely state-sponsored activity intended to identify the mobility patterns of Saudi Arabia users who were traveling in the United States,” wherein Saudi phone owners were geolocated roughly every 11 minutes.

The exploitation of the global cellular system is, indeed, truly global: Citizen Lab cites location surveillance efforts originating in India, Iceland, Sweden, Italy, and beyond.

While the report notes a variety of factors, Citizen Lab places particular blame with the laissez-faire nature of global telecommunications, generally lax security standards, and lack of legal and regulatory consequences.

As governments throughout the West have been preoccupied for years with the purported surveillance threats of Chinese technologies, the rest of the world appears to have comparatively avoided scrutiny. “While a great deal of attention has been spent on whether or not to include Huawei networking equipment in telecommunications networks,” the report authors add, “comparatively little has been said about ensuring non-Chinese equipment is well secured and not used to facilitate surveillance activities.”

The post Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/26/cellphone-roaming-location-tracking-surveillance/feed/ 0
<![CDATA[Instagram Censored Image of Gaza Hospital Bombing, Claims It’s Too Sexual]]> https://theintercept.com/2023/10/18/gaza-hospital-instagram-facebook-censored/ https://theintercept.com/2023/10/18/gaza-hospital-instagram-facebook-censored/#respond Wed, 18 Oct 2023 16:44:01 +0000 https://theintercept.com/?p=448241 In responses to users who tried to post an alleged picture of the Gaza hospital bombing, Instagram and Facebook said it violated guidelines for sexual content or nudity.

The post Instagram Censored Image of Gaza Hospital Bombing, Claims It’s Too Sexual appeared first on The Intercept.

]]>
Instagram and Facebook users attempting to share scenes of devastation from a crowded hospital in Gaza City claim their posts are being suppressed, despite previous company policies protecting the publication of violent, newsworthy scenes of civilian death.

Late Tuesday, amid a 10-day bombing campaign by Israel, the Gaza Strip’s al-Ahli Hospital was rocked by an explosion that left hundreds of civilians dead or wounded. Footage of the flaming exterior of the hospital, as well as dead and wounded civilians, including children, quickly emerged on social media in the aftermath of the attack.

While the Palestinian Ministry of Health in the Hamas-run Gaza Strip blamed the explosion on an Israeli airstrike, the Israeli military later said the blast was caused by an errant rocket misfired by militants from the Gaza-based group Islamic Jihad.

While widespread electrical outages and Israel’s destruction of Gaza’s telecommunications infrastructure have made getting documentation out of the besieged territory difficult, some purported imagery of the hospital attack making its way to the internet appears to be activating the censorship tripwires of Meta, the social media giant that owns Instagram and Facebook.

Since Hamas’s surprise attack against Israel on October 7 and amid the resulting Israeli bombardment of Gaza, groups monitoring regional social media activity say censorship of Palestinian users is at a level not seen since May 2021, when violence flared between Israel and Gaza following Israeli police incursions into Muslim holy sites in Jerusalem.

Two years ago, Meta blamed the abrupt deletion of Instagram posts about Israeli military violence on a technical glitch. On October 15, Meta spokesperson Andy Stone again attributed claims of wartime censorship to a “bug” affecting Instagram. (Meta could not be immediately reached for comment.)

“It’s censorship mayhem like 2021. But it’s more sinister given the internet shutdown in Gaza.”

Since the latest war began, Instagram and Facebook users inside and outside of the Gaza Strip have complained of deleted posts, locked accounts, blocked searches, and other impediments to sharing timely information about the Israeli bombardment and general conditions on the ground. 7amleh, a Palestinian digital rights group that collaborates directly with Meta on speech issues, has documented hundreds of user complaints of censored posts about the war, according to spokesperson Eric Sype, far outpacing deletion levels seen two years ago.

“It’s censorship mayhem like 2021,” Marwa Fatafta, a policy analyst with the digital rights group Access Now, told The Intercept. “But it’s more sinister given the internet shutdown in Gaza.”

In other cases, users have successfully uploaded graphic imagery from al-Ahli to Instagram, suggesting that takedowns are not due to any formal policy on Meta’s end, but a product of the company’s at times erratic combination of outsourced human moderation and automated image-flagging software.

An Instagram notification shows that a story depicting a widely circulated image was removed by the platform for violating guidelines on nudity or sexual activity.
Screenshot: Obtained by The Intercept

Alleged Photo of Gaza Hospital Bombing

One image rapidly circulating on social media platforms following the blast depicts what appears to be the flaming exterior of the hospital, where a clothed man lies beside a pool of blood, his torso bloodied.

According to screenshots shared with The Intercept by Fatafta, Meta platform users who shared this image had their posts removed or were prompted to remove the posts themselves because the picture violated policies forbidding “nudity or sexual activity.” Mona Shtaya, a nonresident fellow at the Tahrir Institute for Middle East Policy, confirmed she had also received reports of two instances of the same image being deleted. (The Intercept could not independently verify that the image was of al-Ahli Hospital.)

One screenshot shows a user notified that Instagram had removed their upload of the photo, noting that the platform forbids “showing someone’s genitals or buttocks” or “implying sexual activity.” The underlying photo does not appear to show anything resembling either category of image.

In another screenshot, a Facebook user who shared the same image was told their post had been uploaded, “but it looks similar to other posts that were removed because they don’t follow our standards on nudity or sexual activity.” The user was prompted to delete the post. The language in the notification suggests the image may have triggered one of the company’s automated, software-based content moderation systems, as opposed to a human review.

Meta has previously distributed internal policy language instructing its moderators to not remove gruesome documentation of Russian airstrikes against Ukrainian civilians, though no such carveout is known to have been provided for Palestinians, whether today or in the past. Last year, a third-party audit commissioned by Meta found that systemic, unwarranted censorship of Palestinian users amounted to a violation of their human rights.

<![CDATA[Why Big Tech, Cops, and Spies Were Made for One Another]]> https://theintercept.com/2023/10/16/surveillance-state-big-tech/ https://theintercept.com/2023/10/16/surveillance-state-big-tech/#respond Mon, 16 Oct 2023 10:00:00 +0000 https://theintercept.com/?p=447464 The American surveillance state is a public-private partnership.

Illustration: Jovana Mugosa for The Intercept

Cory Doctorow’s latest book is “The Internet Con: How to Seize the Means of Computation.”

The techlash has finally reached the courts. Amazon’s in court. Google’s in court. Apple’s under EU investigation. The French authorities just kicked down Nvidia’s doors and went through their files looking for evidence of crimes against competition. People are pissed at tech: about moderation, about monopolization, about price gouging, about labor abuses, and — everywhere and always — about privacy.

From experience, I can tell you that Silicon Valley techies are pretty sanguine about commercial surveillance: “Why should I care if Google wants to show me better ads?” But they are much less cool about government spying: “The NSA? Those are the losers who weren’t smart enough to get an interview at Google.”

And likewise from experience, I can tell you that government employees and contractors are pretty cool with state surveillance: “Why would I worry about the NSA spying on me? I already gave the Office of Personnel Management a comprehensive dossier of all possible kompromat in my past when I got my security clearance.” But they are far less cool with commercial surveillance: “Google? Those creeps would sell their mothers for a nickel. To the Chinese.”

What are they both missing? That American surveillance is a public-private partnership: a symbiosis between a concentrated tech sector that has the means, motive, and opportunity to spy on every person in the world and a state that loves surveillance as much as it hates checks and balances.

Big Tech, cops, and surveillance agencies were made for one another.

The Privacy Deficit

America has a privacy law deficit. While U.S. trading rivals like the EU and even China have enacted muscular privacy laws in response to digital commercial surveillance, the U.S. has slept through a quarter-century of increasing corporate spying without any federal legislative action.

It’s really something. America has stronger laws protecting you from video store clerks who gossip about your porn rentals than we do protecting you from digital spies who nonconsensually follow you into an abortion clinic and then sell the data.

In place of democratically accountable privacy laws, we have the imperial fiat of giant tech companies. Apple unilaterally decided that in-app surveillance should be limited to instances in which users explicitly opted in. Unsurprisingly, more than 96 percent of iOS users did not opt into surveillance (presumably the remaining 4 percent were either confused, or Facebook employees, or both).

When Apple finally allowed its users to block Facebook surveillance, it cut off a torrent of valuable data that Facebook had acquired from Apple device owners without those owners’ permission. But — crucially — it was Apple that decided when consent was and wasn’t needed to spy on its customers. After 96 percent of iOS device owners opted out of Facebook spying, Apple continued to spy on those users in precisely the same way that Facebook had, without telling them, and when the company was caught doing it, it lied about it.

Which raises a question: Why don’t Apple customers simply block Apple’s surveillance? Why don’t they install software that prevents their devices from ratting them out to Apple? Because that would be illegal. Very, very illegal.

One in four web users has installed an ad blocker (which also blocks commercial surveillance). It’s the “biggest boycott in world history.” The reason you can modify your browser to ignore demands from servers to fetch ads — and reveal facts about you in the process — is that the web is an “open platform.” All the major browsers have robust interfaces for aftermarket blockers to plug into, and they’re also all open source, meaning that if a browser vendor restricts those interfaces to make it harder to block ads, other companies can “fork the code” to bypass those restrictions.

By contrast, apps are encrypted, which triggers a quarter-century-old law: the Digital Millennium Copyright Act of 1998, whose Section 1201 makes it a felony to provide someone with a tool to bypass an “access control” for a copyrighted work. By encrypting apps and locking the keys away from the device owner, Apple can make it a crime for you to reconfigure your own phone to protect your privacy, with penalties of a five-year prison sentence and a $500,000 fine — for a first offense.

The Rise of Big Tech

An app is best understood as “a webpage wrapped in just enough IP to make it a crime to install an ad blocker” (or anything else the app’s shareholders disapprove of).

DMCA 1201 is only one of a slew of laws that restrict the ability of technology users to modify the tools they own and use to favor their interests over manufacturers’: laws governing cybersecurity, trademarks, patents, contracts, and other legal constructs can be woven together to block the normal activities that the tech giants themselves once pursued.

Yes, there was a time when tech companies waged guerrilla warfare upon one another: reverse-engineering, scraping, and hacking each other’s products so that disgruntled users could switch from one service to another without incurring steep switching costs. For example, Facebook offered departing MySpace users a “bot” that would impersonate them to MySpace, scrape their inboxes, and import the messages to Facebook so users could maintain contact with friends they’d left behind on the older platform.

That all changed as tech consolidated, shrinking the internet to what software developer Tom Eastman calls “five giant websites, filled with screenshots of text from the other four.” This consolidation was not unique to tech. The 40-year drawdown of antitrust has led to mass consolidation across nearly every sector of the global economy, from bottle caps to banking. Tech companies merged, gobbled up hundreds of small startups, and burned billions of investor dollars offering products and services below cost, making it impossible for anyone else to get a foothold.

Tech was the first industry born in the post-antitrust age. The Apple ][+ hit shelves the same year Ronald Reagan hit the campaign trail. When tech hit its first inter-industry squabble, jousting with the much more mature and concentrated entertainment industry during the Napster wars of the early 2000s, it was trounced, losing every court, regulatory, and legislative fight.

By all rights, tech should have won those fights. After all, the tech sector in the go-go early internet years was massive, an order of magnitude larger than the entertainment companies challenging them in the halls of power. But Big Content was well-established, having boiled itself down to seven or so companies (depending on how you count), while tech was still a rabble of hundreds of small and medium-sized companies that couldn’t agree on its legislative priorities. Tech couldn’t even agree on the catering for a meeting where these priorities might be debated. Concentrated sectors find it comparatively easy to come to agreements, including agreements about what to tell Congress and federal judges. And since those concentrated sectors also find it easy to agree on whose turf belongs to whom, they are able to avoid the “wasteful competition” that erodes their profit margins, leaving them with vast war chests with which to pursue their legislative agenda.

As tech consolidated, it began to feel its oats. Narrow interpretations of existing laws were broadened. New, absurd gambits were invented and then accepted by authorities with straight faces.

Just as important as the new laws that tech got for itself were the laws they kept at bay. Labor laws were treated as nonexistent, provided that your boss was an app. Consumer protection laws were likewise jettisoned.

And, of course, the U.S. never passed a federal privacy law, and the EU struggled to enforce its privacy law.

Slide showing companies participating in the PRISM program and the types of data they provide.
National Security Agency, public domain, via Wikimedia Commons

Cops and Spies

Concentrated sectors of large, highly profitable firms inevitably seek to fuse their power with that of the state, securing from the government forbearance for their own actions and prohibitions on the activities they disfavor. When it comes to surveillance, the tech sector has powerful allies in government: cops and spies.

It goes without saying that cops and spies love commercial surveillance. The very first Snowden revelation concerned a public-private surveillance partnership called Prism, in which the NSA plundered large internet companies’ data with their knowledge and cooperation. The subsequent revelation about the “Upstream” program revealed that the NSA was also plundering tech giants’ data without their knowledge, and using Prism as a “plausible deniability” fig leaf so that the tech firms didn’t get suspicious when the NSA acted on its stolen intelligence.

No government agency could ever hope to match the efficiency and scale of commercial surveillance. The NSA couldn’t order us to carry pocket location beacons at all times — hell, the Centers for Disease Control and Prevention couldn’t even get us to run an exposure notification app in the early days of the Covid pandemic. No government agency could order us to put all our conversations in writing to be captured, stored, and mined. And not even the U.S. government could afford to run the data centers and software development to store and make sense of it all.

Meanwhile, the private sector relies on cops and spies to go to bat for them, lobbying against new privacy laws and for lax enforcement of existing ones. Think of Amazon’s Ring cameras, which have blanketed entire neighborhoods in CCTV surveillance, footage that Ring shares with law enforcement agencies, sometimes without the consent or knowledge of the cameras’ owners. Ring marketing recruits cops as street teams, showering them with freebies to distribute to local homeowners.

And when local activists and town councils ponder limitations on this kind of commercial surveillance, the cops go to bat for Ring, insisting that every citizen should have the inalienable right to contribute to an off-the-books video surveillance grid that the cops can access at will.

Google, for its part, has managed to play both sides of the culture war with its location surveillance, thanks to the “reverse warrants” that cops have used to identify all the participants at both Black Lives Matter protests and the January 6 coup.

Distinguishing between state and private surveillance is a fool’s errand. Cops and spies need the surveillance industry, and the surveillance industry needs cops and spies. Since the days of the East India Company, monopolists have understood the importance of recruiting powerful state actors to go to bat for commercial interests.

AT&T — the central node in the Snowden revelations — has been playing this game for a century, foiling regulators’ attempts to break up its monopoly for 69 years before the Department of Justice finally eked out a win in 1982 (whereupon antitrust was promptly neutered, allowing the “Baby Bells” to merge into new monopolies like Verizon).

In the 1950s, AT&T came within a whisker of being broken up, but the Pentagon stepped up to defend Ma Bell, telling the Justice Department that America would lose the Korean War if they didn’t have an intact AT&T to supply and operate their high-tech backend. America lost the Korean War, but AT&T won: It got a 30-year reprieve.

Stumping for his eponymous antitrust law in 1890, Sen. John Sherman thundered, “If we will not endure a King as a political power we should not endure a King over the production, transportation, and sale of the necessaries of life. If we would not submit to an emperor we should not submit to an autocrat of trade.”

Today, as our snoopy tech firms hide in the skirts of our spies and law enforcement agencies, we have to get beyond the idea that this is surveillance capitalism. Truly, it’s more akin to surveillance mercantilism: a fusion of state and commercial power.

<![CDATA[Israel Warns Palestinians on Facebook — but Bombings Decimated Gaza Internet Access]]> https://theintercept.com/2023/10/12/israel-gaza-internet-access/ https://theintercept.com/2023/10/12/israel-gaza-internet-access/#respond Fri, 13 Oct 2023 00:00:08 +0000 https://theintercept.com/?p=447530 During a war, when access to the internet could save lives, Palestinians are struggling to reach the outside world and each other.

Amid a heavy retaliatory air and artillery assault by Israel against the Gaza Strip on October 10, Israel Defense Forces spokesperson Avichay Adraee posted a message on Facebook to residents of the al-Daraj neighborhood, urging them to leave their homes in advance of impending airstrikes.

It’s not clear how most people in al-Daraj were supposed to see the warning: Intense fighting and electrical shortages have strangled Palestinian access to the internet, putting besieged civilians at even greater risk.

Following Hamas’s grisly surprise attack across the Gaza border on October 7, the Israeli counterattack — a widespread and indiscriminate bombardment of the besieged Gaza Strip — left the two million Palestinians who call the area home struggling to connect to the internet at a time when access to current information is crucial and potentially lifesaving.

“Shutting down the internet in armed conflict is putting civilians at risk.”

“Shutting down the internet in armed conflict is putting civilians at risk,” Deborah Brown, a senior researcher at Human Rights Watch, told The Intercept. “It could help contribute to injury or death because people communicate around what are safe places and conditions.”

According to companies and research organizations that monitor the global flow of internet traffic, Gazan access to the internet has dramatically dropped since Israeli strikes began, with data service cut entirely for some customers.

“My sense is that very few people in Gaza have internet service,” Doug Madory of the internet monitoring firm Kentik told The Intercept. Madory said he spoke to a contact working with an internet service provider, or ISP, in Gaza who told him that internet access has been reduced by 80 to 90 percent because of airstrikes and a lack of fuel and power.

As for causes of the outages, Marwa Fatafta, a policy analyst with the digital rights group Access Now, cited Israeli strikes against office buildings housing Gazan telecommunications firms, such as the now-demolished Al-Watan Tower, as a major factor, in addition to damage to the electrical grid.

Fatafta told The Intercept, “There is a near complete information blackout from Gaza.”

Most Gaza ISPs Are Gone

With communications infrastructure left in rubble, Gazans now increasingly find themselves in a digital void at a time when data access is most crucial.

“People in Gaza need access to the internet and telecommunications to check on their family and loved ones, seek life-saving information amidst the ongoing Israeli barrage on the strip; it’s crucial to document the war crimes and human rights abuses committed by Israeli forces at a time when disinformation is going haywire on social media,” Fatafta said.

“There is some slight connectivity,” Alp Toker of the internet outage monitoring firm NetBlocks told The Intercept, but “most of the ISPs based inside of Gaza are gone.”

Though it’s difficult to be certain whether these outages are due to electrical shortages, Israeli ordnance, or both, Toker said that, based on reports he has received from Gazan internet providers, the root cause is the Israeli destruction of fiber optic cables connecting Gaza. The ISPs are generally aware of where their infrastructure is damaged or destroyed, Toker said, but ongoing Israeli airstrikes will make sending a crew to patch them too dangerous to attempt. Still, one popular Gazan internet provider, Fusion, wrote in a Facebook post to its customers that efforts to repair damaged infrastructure were ongoing.

That Gazan internet access remains in place at all, Toker said, is probably due to the use of backup generators that could soon run out of fuel in the face of an intensified Israeli military blockade. (Toker also said that, while it’s unclear if it was due to damage from Hamas rockets or a manual blackout, NetBlocks detected an internet service disruption inside Israel at the start of the attack, but that it quickly subsided.)

Amanda Meng, a research scientist at Georgia Tech who works on the university’s Internet Outage Detection and Analysis project, or IODA, estimated that Gazan internet connectivity has dropped by around 55 percent in recent days, meaning over half the networks inside Gaza have gone dark and no longer respond to the outside internet. Meng compared this level of disruption to what has previously been observed in Ukraine and Sudan during recent warfare in those countries. In Gaza, activity on the Border Gateway Protocol, the obscure system that routes data from one network to another and undergirds the entire internet, has also been disrupted.

“On the ground, this looks like people not being able to use networked communication devices that rely on the Internet,” Meng explained.
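IODA’s actual pipeline combines several data sources, but the core idea of active probing is simple enough to sketch. The following Python snippet is a toy illustration rather than anything IODA runs: it attempts TCP connections to a hypothetical list of probe targets and reports what share of them have gone dark. The addresses are reserved documentation IPs, not real hosts.

```python
# Toy sketch of active connectivity probing, not IODA's methodology:
# real outage detection combines active probes, BGP feeds, and passive
# traffic measurements across many hosts per network prefix.
import socket

# Hypothetical probe targets (reserved documentation addresses).
TARGETS = ["203.0.113.10", "203.0.113.25", "198.51.100.7"]

def is_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP handshake to host:port completes in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

results = {host: is_reachable(host) for host in TARGETS}
dark = sum(1 for ok in results.values() if not ok)
print(f"{dark}/{len(TARGETS)} probe targets unreachable")
```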

Organizations like NetBlocks and IODA use varying techniques to measure internet traffic, and their results tend to differ. It’s also nearly impossible to tell from the other side of the world whether a sudden dip in service is due to an explosion or something else. In addition to methodological differences and the fog of war, however, there is an added wrinkle: Like almost everything else in Gaza, ISPs connect to the broader internet through Israeli infrastructure.

“By law, Gaza internet connectivity must go through Israeli infrastructure to connect to the outside world, so there is a possibility that the Israelis could leave it up because they are able to intercept communications,” said Madory of Kentik.

Fatafta, the policy analyst, also cited Israel’s power to keep Gaza offline, both in this war and in general. “Israel’s full control of Palestinian telecommunications infrastructure and long-standing ban on technology upgrades” is an immense impediment, she said. With the wider internet blockaded, she said, “people in Gaza can only access slow and unreliable 2G services” — a cellular standard from 1991.

While Israel is reportedly also using analog means to warn Palestinians, their effectiveness is not always clear: “Palestinian residents of the city of Beit Lahiya in the northern region of the Gaza Strip said Thursday that Israeli planes dropped flyers warning them to evacuate their homes,” according to the Associated Press. “The area had already been heavily struck by the time the flyers were dropped.”

<![CDATA[TikTok, Instagram Target Outlet Covering Israel–Palestine Amid Siege on Gaza]]> https://theintercept.com/2023/10/11/tiktok-instagram-israel-palestine/ https://theintercept.com/2023/10/11/tiktok-instagram-israel-palestine/#respond Wed, 11 Oct 2023 17:44:06 +0000 https://theintercept.com/?p=447300 Periods of Israeli–Palestinian violence have regularly resulted in the corporate suppression of Palestinian social media users.

As Israel escalates its bombardment of the Gaza Strip in retaliation for a surprise attack from Hamas, TikTok and Instagram have come after a news site dedicated to providing coverage on Palestine and Israel.

On Tuesday, a Mondoweiss West Bank correspondent’s Instagram account was suspended, while the news outlet’s TikTok account was temporarily taken down on Monday. Other Instagram users have reported restrictions on their accounts after posting about Palestine, including an inability to livestream or to comment on others’ posts. And on Instagram and Facebook (both owned by the same company, Meta), hashtags relating to Hamas and “Al-Aqsa Flood,” the group’s name for its attack on Israel, are being hidden from search. The death toll from the attack continues to rise, with Israeli officials reporting 1,200 deaths as of Wednesday afternoon.

The platforms’ targeting of accounts reporting on Palestine comes as information from people in Gaza is harder to come by amid Israel’s total siege on its 2 million residents and as Israel keeps foreign media out of the coastal enclave. Israel’s indiscriminate bombing campaign has killed more than 1,100 people and injured thousands more, Gaza’s Health Ministry said Wednesday.

Periods of Israeli–Palestinian violence have regularly resulted in the corporate suppression of Palestinian social media users. In 2021, for instance, Instagram temporarily censored posts that mentioned Jerusalem’s Al-Aqsa Mosque, one of Islam’s most revered sites. Social media policy observers have criticized Meta’s censorship policies on the grounds that they unduly affect Palestinian users while granting leeway to civilian populations in other conflict zones.

“The censorship of Palestinian voices, those who support Palestine, and alternative news media who report on the crimes of Israel’s occupation, by social media networks and giants like Meta and TikTok is well documented,” said Yumna Patel, Palestine news director of Mondoweiss, noting that it includes account bans, content removal, and even limiting the reach of posts. “We often see these violations become more frequent during times like this, where there is an uptick in violence and international attention on Palestine. We saw it with the censorship of Palestinian accounts on Instagram during the Sheikh Jarrah protests in 2021, the Israeli army’s deadly raids on Jenin in the West Bank in 2023, and now once again as Israel declares war on Gaza.”

Instagram and TikTok did not respond to requests for comment. 

Mondoweiss correspondent Leila Warah, who is based in the West Bank, reported on Tuesday that Instagram suspended her account and gave her 180 days to appeal, with the possibility of permanent suspension. After Mondoweiss publicized the suspension, her account was quickly reinstated. Later in the day, however, Mondoweiss reported that Warah’s account was suspended once again, only to be reinstated on Wednesday. 

The news outlet tweeted that the first suspension came “after several Israeli soldiers shared Leila’s account on Facebook pages, asking others to submit fraudulent reports of guideline violations.” 

A day earlier, the outlet tweeted that its TikTok account was “permanently banned” amid its “ongoing coverage of the events in Palestine.” Since the outbreak of war on Saturday, the outlet had posted a viral video about Hamas’s attack on Israel and another about Hamas’s abduction of Israeli civilians. Again, within a couple of hours, and after Mondoweiss publicized the ban, the outlet’s account was back up. 

“We have consistently reviewed all communication from TikTok regarding the content we publish there and made adjustments if necessary,” the outlet wrote. The magazine’s staff did not believe they violated any TikTok guidelines in their coverage in recent days. “This can only be seen as censorship of news coverage that is critical of the prevailing narratives around the events unfolding in Palestine.”

Even though the account has been reinstated, Mondoweiss’s first viral TikTok about the eruption of violence cannot be viewed in the West Bank and some parts of Europe, according to the outlet. Other West Bank residents independently confirmed to The Intercept that they could not access the video, in which Warah describes Hamas’s attack and Israel’s bombing of Gaza as a result, connecting the assault to Israel’s ongoing 16-year siege of Gaza. TikTok did not respond to The Intercept’s questions about access to the video. 

On Instagram, meanwhile, Palestinian creator Adnan Barq reported that the platform blocked him from livestreaming, removed his content, and even prevented his account from being shown to users who don’t follow him. Also on Instagram, hashtags including #alaqsaflood and #hamas are being suppressed; Facebook is suppressing Arabic-language hashtags of the operation’s name too. On paper, Meta’s rules prohibit glorifying Hamas’s violence, but they do not bar users from discussing the group in the context of the news, though the distinction is often collapsed in the real world.

Last year, following a spate of Israeli airstrikes against the Gaza Strip, Palestinian users who photographed the destruction on Instagram complained that their posts were being removed for violating Meta’s “community standards,” while Ukrainian users had received a special carve-out to post similar imagery on the grounds it was “newsworthy.” 

A September 2022 external audit commissioned by Meta found the company’s rulebook “had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.” Similarly, Meta’s Dangerous Organizations and Individuals policy, which maintains a secret blacklist of banned organizations and people, is disproportionately made up of Muslim, Middle Eastern, and South Asian entities, a factor that contributed to over-enforcement against Palestinians.

Big Tech’s content moderation during conflict is increasingly significant as unverified information runs rampant on X, Elon Musk’s diluted, free-for-all version of Twitter, once a crucial source during breaking news events. Musk himself has led his 160 million followers astray, encouraging users on Sunday to follow @WarMonitors and @sentdefender to learn about the war “in real-time.” The former account had posted things like “mind your own business, jew,” while the latter mocked Palestinian civilians trapped by Israel’s siege, writing, “Better find a Boat or get to Swimming lol.” And both have previously circulated fake news, such as false reports of an explosion at the Pentagon in May.

Musk later deleted his post endorsing the accounts.

For now, Musk’s innovative Community Notes fact-checking operation is leaving lies unchallenged for days during a time when decisions and snap judgments are made by the minute. And that says nothing of inflammatory content on X and elsewhere. “In the past few days we have seen open calls for genocide and mass violence against [Palestinians] and Arabs made by official Israeli social media accounts, and parroted by Zionist accounts and pro-Israel bots on platforms like X with absolutely no consequence,” Mondoweiss’s Patel said. “Meanwhile Palestinian journalists & news outlets have had their accounts outright suspended on Instagram and Tiktok simply for reporting the news.”

<![CDATA[New Group Attacking iPhone Encryption Backed by U.S. Political Dark-Money Network]]> https://theintercept.com/2023/10/01/apple-encryption-iphone-heat-initiative/ https://theintercept.com/2023/10/01/apple-encryption-iphone-heat-initiative/#respond Sun, 01 Oct 2023 10:00:00 +0000 https://theintercept.com/?p=446051 A new, well-funded pressure group is fighting to get Apple to weaken iPhone privacy protections in the name of catching child predators.

The Heat Initiative, a nonprofit child safety advocacy group, was formed earlier this year to campaign against some of the strong privacy protections Apple provides customers. The group says these protections help enable child exploitation, objecting to the fact that pedophiles can encrypt their personal data just like everyone else.

When Apple launched its new iPhone this September, the Heat Initiative seized on the occasion, taking out a full-page New York Times ad, using digital billboard trucks, and even hiring a plane to fly over Apple headquarters with a banner message. The message on the banner appeared simple: “Dear Apple, Detect Child Sexual Abuse in iCloud” — Apple’s cloud storage system, which today employs a range of powerful encryption technologies aimed at preventing hackers, spies, and Tim Cook from knowing anything about your private files.

Something the Heat Initiative has not placed on giant airborne banners is who’s behind it: a controversial billionaire philanthropy network whose influence and tactics have drawn unfavorable comparisons to the right-wing Koch network. Though it does not publicize this fact, the Heat Initiative is a project of the Hopewell Fund, an organization that helps privately and often secretly direct the largesse — and political will — of billionaires. Hopewell is part of a giant, tightly connected web of largely anonymous, Democratic Party-aligned dark-money groups that is now, in an ironic turn, campaigning to undermine the privacy of ordinary people.

“None of these groups are particularly open with me or other people who are tracking dark money about what it is they’re doing.”

For experts on transparency about money in politics, the Hopewell Fund’s place in the wider network of Democratic dark money raises questions that groups in the network are disinclined to answer.

“None of these groups are particularly open with me or other people who are tracking dark money about what it is they’re doing,” said Robert Maguire, of Citizens for Responsibility and Ethics in Washington, or CREW. Maguire said the way the network operated called to mind perhaps the most famous right-wing philanthropy and dark-money political network: the constellation of groups run and supported by the billionaire owners of Koch Industries. Of the Hopewell network, Maguire said, “They also take on some of the structural calling cards of the Koch network; it is a convoluted group, sometimes even intentionally so.”

The decadeslong political and technological campaign to diminish encryption for the sake of public safety — known as the “Crypto Wars” — has in recent years pivoted from stoking fears of terrorists chatting in secret to child predators evading police scrutiny. No matter the subject area, the battle is being waged between those who think privacy is an absolute right and those who believe it ought to be limited for expanded oversight from law enforcement and intelligence agencies. The ideological lines pit privacy advocates, computer scientists, and cryptographers against the FBI, the U.S. Congress, the European Union, and other governmental bodies around the world. Apple’s complex 2021 proposal to scan cloud-bound images before they ever left your phone became divisive even within the field of cryptography itself.

While the motives on both sides tend to be clear — there’s little mystery as to why the FBI doesn’t like encryption — the Heat Initiative, as opaque as it is new, introduces the obscured interests of billionaires to a dispute over the rights of ordinary individuals. 

“I’m uncomfortable with anonymous rich people with unknown agendas pushing these massive invasions of our privacy,” Matthew Green, a cryptographer at Johns Hopkins University and a critic of the plan to have Apple scan private files on its devices, told The Intercept. “There are huge implications for national security as well as consumer privacy against corporations. Plenty of unsavory reasons for people to push this technology that have nothing to do with protecting children.”

Apple’s Aborted Scanning Scheme

Last month, Wired reported the previously unknown Heat Initiative was pressing Apple to reconsider its highly controversial 2021 proposal to have iPhones constantly scan their owners’ photos as they were uploaded to iCloud, checking to see if they were in possession of child sexual abuse material, known as CSAM. If a scan turned up CSAM, police would be alerted. While most large internet companies check files their users upload and share against a centralized database of known CSAM, Apple’s plan went a step further, proposing to check for illegal files not just on the company’s servers, but directly on its customers’ phones.
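The standard server-side approach can be sketched in a few lines. The example below is a deliberately simplified illustration: it matches uploads against a set of known digests using an ordinary cryptographic hash, whereas deployed systems use perceptual hashes (such as Microsoft’s PhotoDNA or the NeuralHash scheme Apple proposed) that tolerate resizing and re-encoding. The “known” digest here is simply the SHA-256 of the word “test.”

```python
# Simplified sketch of hash matching against a database of known files.
# Real CSAM scanning uses perceptual hashes, not plain cryptographic
# digests, so that trivially altered copies still match.
import hashlib

# Hypothetical database of digests of known prohibited files; this entry
# is just the SHA-256 of b"test" for demonstration purposes.
KNOWN_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_upload(data: bytes) -> bool:
    """Flag an upload whose digest matches a known prohibited file."""
    return hashlib.sha256(data).hexdigest() in KNOWN_DIGESTS

print(flag_upload(b"test"))   # True
print(flag_upload(b"test2"))  # False
```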

“In the hierarchy of human privacy, your private files and photos should be your most important confidential possessions,” Green said. “We even wrote this into the U.S. Constitution.”

The backlash was swift and effective. Computer scientists, cryptographers, digital rights advocates, and civil libertarians immediately protested, claiming the scanning would create a deeply dangerous precedent. The ability to scan users’ devices could open up iPhones around the world to snooping by authoritarian governments, hackers, corporations, and security agencies. A year later, Apple reversed course and said it was shelving the idea.

Green said that efforts to push Apple to monitor the private files of iPhone owners are part of a broader effort against encryption, whether used to safeguard your photographs or speak privately with others — rights that were taken for granted before the digital revolution. “We have to have some principles about what we’ll give up to fight even heinous crime,” he said. “And these proposals give up everything.”

“We have to have some principles about what we’ll give up to fight even heinous crime. And these proposals give up everything.”

In an unusual move justifying its position, Apple provided Wired with a copy of the letter it sent to the Heat Initiative in reply to its demands. “Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit,” the letter read. “It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”

The strong encryption built into iPhones, which shields sensitive data like your photos and iMessage conversations even from Apple itself, is frequently criticized by police agencies and national security hawks as providing shelter to dangerous criminals. In a 2014 speech, then-FBI Director James Comey singled out Apple’s encryption specifically, warning that “encryption threatens to lead all of us to a very dark place.”

Some cryptographers respond that it’s impossible to filter possible criminal use of encryption without defeating the whole point of encryption in the first place: keeping out prying eyes.

Similarly, any attempt to craft special access for police to use to view encrypted conversations when they claim they need to — a “backdoor” mechanism for law enforcement access — would be impossible to safeguard against abuse, a stance Apple now says it shares.

Sarah Gardner, head of the Heat Initiative, on Sept. 1, 2023, in Los Angeles.
Photo: Jessica Pons for the New York Times

Dark-Money Network

For an organization demanding that Apple scour the private information of its customers, the Heat Initiative discloses extremely little about itself. According to a report in the New York Times, the Heat Initiative is armed with $2 million from donors including the Children’s Investment Fund Foundation, an organization founded by British billionaire hedge fund manager and Google activist investor Chris Hohn, and the Oak Foundation, also founded by a British billionaire. The Oak Foundation previously provided $250,000 to a group attempting to weaken end-to-end encryption protections in EU legislation, according to a 2020 annual report.

The Heat Initiative is helmed by Sarah Gardner, who joined from Thorn, an anti-child trafficking organization founded by actor Ashton Kutcher. (Earlier this month, Kutcher stepped down from Thorn following reports that he’d asked a California court for leniency in the sentencing of convicted rapist Danny Masterson.) Thorn has drawn scrutiny for its partnership with Palantir and efforts to provide police with advanced facial recognition software and other sophisticated surveillance tools. Critics say these technologies aren’t just uncovering trafficked children, but ensnaring adults engaging in consensual sex work.

In an interview, Gardner declined to name the Heat Initiative’s funders, but she said the group hadn’t received any money from governmental or law enforcement organizations. “My goal is for child sexual abuse images to not be freely shared on the internet, and I’m here to advocate for the children who cannot make the case for themselves,” Gardner added.

She said she disagreed with “privacy absolutists” — a group now apparently including Apple — who say CSAM-scanning iPhones would have imperiled user safety. “I think data privacy is vital,” she said. “I think there’s a conflation between user privacy and known illegal content.”

Heat Initiative spokesperson Kevin Liao told The Intercept that, while the group does want Apple to re-implement its 2021 plan, it would be open to other approaches to screening everyone’s iCloud storage for CSAM. Since Apple began allowing iCloud users to protect their photos with end-to-end encryption last December, however, this objective is far trickier now than it was back in 2021; to scan iCloud images today would still require the mass-scrutinizing of personal data in some manner. As Apple put it in its response letter, “Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users.”

Both the Oak Foundation and Thorn were cited in a recent report revealing the extent to which law enforcement and private corporate interests have influenced European efforts to weaken encryption in the name of child safety.

Beyond those groups and a handful of names, however, there is vanishingly little information available about what the Heat Initiative is, where it came from, or who exactly is paying its bills and why. Its website, which describes the group only as a “collective effort of concerned child safety experts and advocates” — who go unnamed — contains no information about funding, staff, or leadership.

One crucial detail, however, can be found buried in the “terms of use” section of the Heat Initiative’s website: “THIS WEBSITE IS OWNED AND OPERATED BY Hopewell Fund AND ITS AFFILIATES.” Other than a similarly brief citation in the site’s privacy policy, there is no other mention of the Hopewell Fund or explanation of its role. The omission is significant, given Hopewell’s widely reported role as part of a shadowy cluster of Democratic dark-money groups that funnel billions from anonymous sources into American politics.

Hopewell is part of a labyrinthine billionaire-backed network that receives and distributes philanthropic cash while largely obscuring its origin. The groups in this network include New Venture Fund (which has previously paid salaries at Hopewell), the Sixteen Thirty Fund, and Arabella Advisors, a for-profit company that helps administer these and other Democratic-leaning nonprofits and philanthropies. The groups have poured money into a wide variety of causes ranging from abortion access to opposing Republican tax policy, along the way spending big on elections — about $1.2 billion total in 2020 alone, according to a New York Times investigation.

The deep pockets of this network and mystery surrounding the ultimate source of its donations have drawn comparisons — by Maguire, the Times, and others — to the Koch brothers’ network, whose influence over electoral politics from the right long outraged Democrats. When asked by The Atlantic in 2021 whether she felt good “that you’re the left’s equivalent of the Koch brothers,” Sampriti Ganguli, at the time the CEO of Arabella Advisors, replied in the affirmative.

“Sixteen Thirty Fund is the largest network of liberal politically active nonprofits in the country. We’re talking here about hundreds of millions of dollars.”

“Sixteen Thirty Fund is the largest network of liberal politically active nonprofits in the country,” Maguire of CREW told The Intercept. “We’re talking here about hundreds of millions of dollars.”

Liao told The Intercept that Hopewell serves as the organization’s “fiscal sponsor,” an arrangement that allows tax-deductible donations to pass through a registered nonprofit on its way to an organization without tax-exempt status. Liao declined to provide a list of the Heat Initiative’s funders beyond the two mentioned by the New York Times. Owing to this fiscal sponsorship, Liao continued, “the Hopewell Fund’s board is Heat Initiative’s board.” Hopewell’s board includes New Venture Fund President Lee Bodner and Michael Slaby, a veteran of Barack Obama’s 2008 and 2012 campaigns and former chief technology strategist at an investment fund operated by ex-Google chair Eric Schmidt.

When asked who exactly was leading the Heat Initiative, Liao told The Intercept that “it’s just the CEO Sarah Gardner.” According to LinkedIn, however, Lily Rhodes, also previously with Thorn, now works as Heat Initiative’s director of strategic operations. Liao later said Rhodes and Gardner are the Heat Initiative’s only two employees. When asked to name the “concerned child safety experts and advocates” referred to on the Heat Initiative’s website, Liao declined.

“When you take on a big corporation like Apple,” he said, “you probably don’t want your name out front.”

Hopewell’s Hopes

Given the stakes — nothing less than the question of whether people have an absolute right to communicate in private — the murkiness surrounding a monied pressure campaign against Apple is likely to concern privacy advocates. The Heat Initiative’s efforts also give heart to those aligned with law enforcement interests. Following the campaign’s debut, former Georgia Bureau of Investigations Special Agent in Charge Debbie Garner, who has also previously worked for iPhone-hacking tech firm Grayshift, hailed the Heat Initiative’s launch in a LinkedIn group for Homeland Security alumni, encouraging them to learn more.

The larger Hopewell network’s efforts to influence political discourse have attracted criticism and controversy in the past. In 2021, OpenSecrets, a group that tracks money in politics, reported that New Venture Fund and the Sixteen Thirty Fund were behind a nationwide Facebook ad campaign pushing political messaging from Courier News, a network of websites designed to look like legitimate, independent political news outlets.

Despite its work with ostensibly progressive causes, Hopewell has taken on conservative campaigns: In 2017, Deadspin reported with bemusement an NFL proposal in which the league would donate money into a pool administered by the Hopewell Fund as part of an incentive to get players to stop protesting the national anthem.

Past campaigns connected to Hopewell and its close affiliates have been suffused with Big Tech money. Hopewell is also the fiscal sponsor of the Economic Security Project, an organization that promotes universal basic income founded by Facebook co-founder Chris Hughes. In 2016, SiliconBeat reported that New Venture Fund, which is bankrolled in part by major donations from the Bill and Melinda Gates Foundation and William and Flora Hewlett Foundation, was behind the Google Transparency Project, an organization that publishes unflattering research relating to Google. Arabella has also helped Microsoft channel money to its causes of choice, the report noted. Billionaire eBay founder Pierre Omidyar has also provided large cash gifts to both Hopewell and New Venture Fund, according to the New York Times (Omidyar is a major funder of The Intercept).

According to Riana Pfefferkorn, a research scholar at Stanford University’s Internet Observatory program, the existence of the Heat Initiative is ultimately the result of an “unforced error” by Apple in 2021, when it announced it was exploring using CSAM scanning for its cloud service.

“And now they’re seeing that they can’t put the genie back in the bottle,” Pfefferkorn said. “Whatever measures they take to combat the cloud storage of CSAM, child safety orgs — and repressive governments — will remember that they’d built a tool that snoops on the user at the device level, and they’ll never be satisfied with anything less.”

<![CDATA[Pentagon’s Budget Is So Bloated That It Needs an AI Program to Navigate It]]> https://theintercept.com/2023/09/20/pentagon-ai-budget-gamechanger/ https://theintercept.com/2023/09/20/pentagon-ai-budget-gamechanger/#respond Wed, 20 Sep 2023 16:35:46 +0000 https://theintercept.com/?p=445349 Codenamed GAMECHANGER, an AI program helps the military make sense of its own “byzantine” and “tedious” bureaucracy.

As tech luminaries like Elon Musk issue solemn warnings about artificial intelligence’s threat of “civilizational destruction,” the U.S. military is using it for a decidedly more mundane purpose: understanding its sprawling $816.7 billion budget and figuring out its own policies.

Thanks to its bloat and political wrangling, the annual Department of Defense budget legislation includes hundreds of revisions and limitations telling the Pentagon what it can and cannot do. To make sense of all those provisions, the Pentagon created an AI program, codenamed GAMECHANGER. 

“In my comptroller role, I am, of course, the most excited about applying GAMECHANGER to gain better visibility and understanding across our various budget exhibits,” said Gregory Little, the deputy comptroller of the Pentagon, shortly after the program’s creation last year. 

“The fact that they have to go to such extraordinary measures to understand what their own policies are is an indictment of how they operate.”

“The fact that they have to go to such extraordinary measures to understand what their own policies are is an indictment of how they operate,” said William Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft and expert on the defense budget. “It’s kind of similar to the problem with the budget as a whole: They don’t make tough decisions, they just layer on more policies, more weapons systems, more spending. Between the Pentagon and Congress, they’re not really getting rid of old stuff, they’re just adding more.”

House Republicans reportedly aim to pass their defense budget later this week. They had planned to vote on an $826 billion proposal last week before the far-right Freedom Caucus blocked the proposal, demanding cuts to non-defense spending.

“The fact that the Pentagon developed an AI program to navigate its own policies should be a stark wake-up call for lawmakers who throw more money at the department than it even asks for nearly every year,” said Julia Gledhill, an analyst at the Project on Government Oversight’s Center for Defense Information. “It’s unsurprising, though: The DOD couldn’t adequately account for 61 percent of its $3.5 trillion in assets in the most recent audit, and those are physical!”

The Pentagon did not respond to a request for comment.

Military brass use GAMECHANGER to help them navigate what the Defense Department itself points to as an absurd amount of “tedious” policies. The program contains over 15,000 policy documents governing how the Pentagon operates, according to its GitHub entry.

“Did you know that if you read all the Department of Defense’s policies, it would be the equivalent of reading through ‘War and Peace’ more than 100 times?” a press release about GAMECHANGER from the Defense Intelligence Agency, the military’s spy wing, says. “For most people, policy is a tedious and [elusive] concept, making the idea of understanding and synthesizing tens of thousands of policy requirements a daunting task. But in the midst of the chaos that is the policy world, one DIA officer and a team at the Office of the Undersecretary of Defense for Intelligence & Security saw an opportunity.” 

The press release went on to decry the Pentagon’s “mountain of policies and requirements.” 

As unusual as it is for the military to publicly air its contempt for its own sprawling bureaucracy, members of Congress have been similarly harsh. In its portrayal of U.S. military policy — which it also had a hand in creating — the Senate Armed Services Committee called rules governing the department “byzantine” and “labyrinthine.”

“The committee notes that the Joint Artificial Intelligence Center developed an artificial intelligence-enabled tool, GAMECHANGER, to make sense of the byzantine and labyrinthine ecosystem of Department guidance,” the committee said in a report on the National Defense Authorization Act — the law that authorizes funding for the Pentagon budget — for fiscal year 2023. (Amid the critique of the Pentagon’s bloated bureaucracy, the NDAA would later become law, authorizing $802.4 billion in funding for the defense budget.)

Though announced in February of last year, GAMECHANGER has received scant media attention. The military’s Joint Artificial Intelligence Center, a Defense Department organization created in 2018, developed the program. Upon its completion, the Joint Artificial Intelligence Center transferred ownership of it to the office of the Defense Department comptroller, which handles budgetary and fiscal matters for the Pentagon.

Shortly after its release, GAMECHANGER had already been used by over 6,000 Defense Department users conducting over 100,000 queries, according to the Defense Intelligence Agency.

Described as a natural language processing application — a broad term in computer science generally referring to the use of machine learning to allow computers to interpret human speech and writing — GAMECHANGER is just one of a vast suite of AI programs bankrolled by the Pentagon in recent months.
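GAMECHANGER’s code is public on GitHub; the snippet below is not drawn from it, but a minimal sketch of the retrieval idea behind tools of this kind: vectorize a corpus of policy text, then rank documents against a plain-language query. The sample documents and query are invented for illustration.

```python
# Minimal sketch of natural-language policy retrieval (illustrative only;
# GAMECHANGER's actual implementation is published separately on GitHub).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for the Pentagon's 15,000-plus policy documents.
documents = [
    "Procedures governing reprogramming of appropriated funds.",
    "Policy on unmanned aircraft systems acquisition and oversight.",
    "Guidance for preparing and submitting budget exhibits.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)

# Rank the corpus against a plain-language question.
query_vector = vectorizer.transform(["how do I prepare a budget exhibit"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```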

The Pentagon is currently funding 686 such AI projects, according to the National Academy of Sciences, a nonprofit that frequently conducts research for the federal government. The figure does not include the Department of Defense’s classified efforts.

Before it was formally released, GAMECHANGER was granted an award by the Office of Personnel Management, the federal government’s human resources agency for civil servants.

“GAMECHANGER is an ironic name: They’re patting themselves on the back for, in the best case, figuring out what they’ve said in the past, which is pretty modest,” said Hartung, the Quincy Institute defense budget expert. “It’s more a problem of how they make policy and not a problem of how to surf through it.”

The post Pentagon’s Budget Is So Bloated That It Needs an AI Program to Navigate It appeared first on The Intercept.

]]>
https://theintercept.com/2023/09/20/pentagon-ai-budget-gamechanger/feed/ 0
<![CDATA[New York Times Doesn’t Want Its Stories Archived]]> https://theintercept.com/2023/09/17/new-york-times-website-internet-archive/ https://theintercept.com/2023/09/17/new-york-times-website-internet-archive/#respond Sun, 17 Sep 2023 10:00:00 +0000 https://theintercept.com/?p=444418 The Times blocked a bot that had given the Internet Archive’s Wayback Machine huge troves of websites.

The post New York Times Doesn’t Want Its Stories Archived appeared first on The Intercept.

]]>
The New York Times tried to block a web crawler affiliated with the famous Internet Archive, a project whose easy-to-use comparisons of article versions have sometimes led to embarrassment for the newspaper.

In 2021, the New York Times added “ia_archiver” — a bot that, in the past, captured huge numbers of websites for the Internet Archive — to a list that instructs certain crawlers to stay out of its website.

Crawlers are automated bots that trawl websites, collecting data and sending it back to a repository, a process known as scraping. Such bots power search engines and the Internet Archive’s Wayback Machine, a service that facilitates the archiving and viewing of historic versions of websites going back to 1996.

The New York Times has, in the past, faced public criticisms over some of its stealth edits.

The Internet Archive’s Wayback Machine has long been used to compare webpages as they are updated over time, clearly delineating the differences between two iterations of any given page. Several years ago, the archive added a feature called “Changes” that lets users compare two archived versions of a website from different dates or times on a single display. The tool can be used to uncover changes in news stories that have been made without any accompanying editorial notes, so-called stealth edits.
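Those archived snapshots are easy to enumerate programmatically. As a rough sketch, the Wayback Machine’s public CDX API lists the captures it holds for a URL, and any two of the returned timestamps can then be compared in the “Changes” view; the queried site below is an arbitrary example.

```python
# List recent Wayback Machine captures of a page via the public CDX API.
# Any two returned timestamps can be compared in the "Changes" view.
# The queried site is an arbitrary example.
import json
from urllib.request import urlopen

api = (
    "https://web.archive.org/cdx/search/cdx"
    "?url=nytimes.com&output=json&limit=5"
)
with urlopen(api) as resp:
    rows = json.load(resp)  # first row is the column header

header, captures = rows[0], rows[1:]
for row in captures:
    capture = dict(zip(header, row))
    # Each capture is viewable at web.archive.org/web/<timestamp>/<original>
    print(capture["timestamp"], capture["original"])
```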

The Times has, in the past, faced public criticisms over some of its stealth edits. In a notorious 2016 incident, the paper revised an article about then-Democratic presidential candidate Sen. Bernie Sanders, I-Vt., so drastically after publication — changing the tone from one of praise to skepticism — that it came in for a round of opprobrium from other outlets as well as the Times’s own public editor. The blogger who first noticed the revisions and set off the firestorm demonstrated the changes by using the Wayback Machine.

More recently, the Times stealth-edited an article that originally listed “death” as one of six ways “you can still cancel your federal student loan debt.” Following the edit, the “death” section title was changed to a more opaque heading of “debt won’t carry on.”

A service called NewsDiffs — which provides a similar comparison tool but focuses on news outlets such as the New York Times, CNN, the Washington Post, and others — has also chronicled a long list of significant examples of articles that have undergone stealth edits, though it appears not to have been updated in several years.

The New York Times declined to comment on why it is barring the ia_archiver bot from crawling its website.

Robots.txt Files

The mechanism that websites use to block certain crawlers is a robots.txt file. If website owners want to request that a particular search engine or other automated bot not scan their site, they can add the crawler’s name to the file and upload it to their site, where it is publicly accessible.

Based on a web standard known as the Robots Exclusion Protocol, a robots.txt file allows site owners to specify whether a bot may crawl part or all of their websites. Though bots can always choose to ignore the file, many crawler services respect the requests.

The current robots.txt file on the New York Times’s website includes an instruction to disallow all site access to the ia_archiver bot.
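For illustration, a disallow rule of that shape, checked with Python’s standard-library robots.txt parser, looks like this; the two-line ruleset is a simplified stand-in, not the Times’s full file.

```python
# Check whether a named crawler may fetch a URL under robots.txt rules,
# using Python's standard library. The two-line ruleset is a simplified
# stand-in, not the Times's actual (and changeable) robots.txt file.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: ia_archiver
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A well-behaved ia_archiver would decline to fetch any page; a crawler
# with no matching rule is allowed by default.
print(parser.can_fetch("ia_archiver", "https://www.nytimes.com/"))  # False
print(parser.can_fetch("Googlebot", "https://www.nytimes.com/"))    # True
```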

The relationship between ia_archiver and the Internet Archive is not completely straightforward. While the Internet Archive crawls the web itself, it also receives data from other entities. Ia_archiver was, for more than a decade, a prolific supplier of website data to the archive.

The bot belonged to Alexa Internet, a web traffic analysis company co-founded by Brewster Kahle, who went on to create the Internet Archive right after Alexa. Alexa Internet was acquired by Amazon in 1999 — its trademark name was later used for Amazon’s signature voice-activated assistant — and was eventually shut down in 2022.

Throughout its existence, Alexa Internet was intricately intertwined with the Internet Archive. From 1996 to the end of 2020, the Internet Archive received over 3 petabytes — more than 3,000 terabytes — of crawled website data from Alexa. Because of that role in filling the archive, users urged website owners not to block ia_archiver, countering the mistaken notion that the bot was unrelated to the Internet Archive.

As late as 2015, the Internet Archive offered instructions for preventing a site from being ingested into the Wayback Machine — by using the site’s robots.txt file. News websites such as the Washington Post took full advantage of this and disallowed the ia_archiver bot.

By 2017, however, the Internet Archive announced its intention to stop abiding by the dictates of a site’s robots.txt. It had already been disregarding the file for military and government sites; the update expanded that practice to all sites. Instead, website owners could make manual exclusion requests by email.

Reputation management firms, for one, are keenly aware of the change. The New York Times, too, appears to have made use of the more selective manual exclusion process, as certain Times stories are not available via the Wayback Machine.

Some news sites, such as the Washington Post, have since removed ia_archiver from their lists of blocked crawlers. The New York Times moved in the opposite direction: while other websites were removing their ia_archiver blocks, in 2021 the paper added one.

The post New York Times Doesn’t Want Its Stories Archived appeared first on The Intercept.

]]>
https://theintercept.com/2023/09/17/new-york-times-website-internet-archive/feed/ 0
<![CDATA[Tech Companies and Governments Are Censoring the Journalist Collective DDoSecrets]]> https://theintercept.com/2023/09/12/ddosecrets-censorship-reddit-twitter/ https://theintercept.com/2023/09/12/ddosecrets-censorship-reddit-twitter/#respond Tue, 12 Sep 2023 19:45:10 +0000 https://theintercept.com/?p=444440 X and Reddit prevent users from sharing links to Distributed Denial of Secrets. Russia and Indonesia are also blocking access.

The post Tech Companies and Governments Are Censoring the Journalist Collective DDoSecrets appeared first on The Intercept.

]]>
Distributed Denial of Secrets — the nonprofit transparency collective that hosts an ever-growing public library of leaked and hacked datasets for journalists and researchers to investigate — has been a major source of news for organizations like the New York Times, the Washington Post, the Wall Street Journal, The Guardian, BBC News, Al Jazeera, the Associated Press, Reuters, and Fox News, among others.

It has published datasets that shed light on law enforcement fusion centers spying on Black Lives Matter activists, revealed Oath Keepers supporters among law enforcement and elected officials, and exposed thousands of videos from January 6 rioters, including many that were used as evidence in Donald Trump’s second impeachment inquiry. (Disclosure: I’m an adviser to DDoSecrets.)

But not everyone is a fan. DDoSecrets has powerful enemies and has found itself censored by some of the world’s biggest tech companies, including X (formerly Twitter) and Reddit. The governments of Russia and Indonesia are also censoring access to its website.

Shortly before the 2020 election, Twitter prevented users from posting links to a New York Post article based on documents stolen from Hunter Biden’s laptop, citing a violation of the company’s hacked materials policy. After intense pressure from Republicans, Twitter reversed course two days later. This was widely covered in the media and even led to congressional hearings.

What’s less well known is that earlier in 2020, in the midst of the Black Lives Matter uprising, Twitter used the same hacked materials policy to not only permanently ban the @DDoSecrets account, but also prevent users from posting any links to ddosecrets.com. This was in response to the collective publishing the BlueLeaks dataset, a collection of 270GB of documents from over 200 law enforcement agencies. (German authorities also seized a DDoSecrets server after the release of BlueLeaks, bringing the collective’s data server temporarily offline.)

When Elon Musk bought Twitter, which he has since renamed X, he promised that he would restore “free speech” to the platform. But Musk’s company is still censoring DDoSecrets; links to the website have been blocked on the platform for over three years. Lorax Horne, an editor at DDoSecrets, told The Intercept that they are “not surprised” that Musk isn’t interested in ending the censorship. “We afflict the comfortable, and we include a lot of trans people,” they said. “Transparency is not comforting to the richest people in the world.”

[X prevents users from posting links to the DDoSecrets website. Screenshot: The Intercept]

If you try to post a DDoSecrets link to X, you’ll receive an error message stating, “We can’t complete this request because this link has been identified by Twitter or our partners as being potentially harmful.” The same thing happens if you try sending a DDoSecrets link in a direct message. X did not respond to a request for comment.

“There’s no doubt that ddosecrets.com being blocked on Twitter impacts our ability to connect with journalists,” Horne told The Intercept. “In the last week, I’ve had to explain to new reporters why they can’t post our link.”

Reddit Shadow-Bans DDoSecrets

X isn’t the only company that has been censoring DDoSecrets since it published BlueLeaks in 2020. The popular social news aggregator Reddit has been doing the same, only more subtly.

As an example, I posted a link to the DDoSecrets website in the r/journalism subreddit. I also posted two comments on that post, one that included a link to the DDoSecrets BlueLeaks page and another that didn’t. While logged in to my Reddit account, I can see my post in the subreddit, and I can view both comments.

[Users can see their own Reddit posts with links to the DDoSecrets website. Screenshot: The Intercept]

However, when I view the r/journalism subreddit while logged in to a different Reddit account, or while not logged in at all, my post isn’t displayed. If I load the post link directly, I can see it, but the link to ddosecrets.com isn’t there, and the comment that included the link to BlueLeaks is hidden.

[Other Reddit users are prevented from seeing links to the DDoSecrets website. Screenshot: The Intercept]
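Selective invisibility of this kind can be probed from the outside. The rough sketch below fetches a subreddit’s newest posts through Reddit’s public JSON listing, which reflects the logged-out view, and counts how many link to a given domain. The subreddit and domain mirror the experiment above; the endpoint is rate-limited and expects a descriptive User-Agent.

```python
# Rough shadow-ban probe: fetch a subreddit's newest posts through the
# public JSON listing (the logged-out view) and count links to a domain.
# The endpoint is rate-limited and expects a descriptive User-Agent.
import json
from urllib.request import Request, urlopen

SUBREDDIT = "journalism"
DOMAIN = "ddosecrets.com"

req = Request(
    f"https://www.reddit.com/r/{SUBREDDIT}/new.json?limit=100",
    headers={"User-Agent": "shadowban-check/0.1"},  # blank agents get blocked
)
with urlopen(req) as resp:
    listing = json.load(resp)

posts = [child["data"] for child in listing["data"]["children"]]
hits = [p for p in posts if DOMAIN in p.get("url", "")]
print(f"{len(hits)} of {len(posts)} visible posts link to {DOMAIN}")
```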

“People can link to news articles that use our documents but can’t link to the source,” Horne said when asked about Reddit’s censorship, which “impedes people finding verified links to our archive” and “inevitably will stop some people from finding us.”

In October 2020, while I was in the midst of reporting on BlueLeaks, I did a Reddit “ask me anything,” an open conversation for members of the r/privacy community to ask about my work. At the time, we had trouble getting the AMA started because of Reddit’s censorship of DDoSecrets. Eventually, we had to start the AMA over with a new post that did not include any DDoSecrets links in the description, and I had to refrain from posting links in the comments.

“Reddit’s sitewide policies strictly prohibit posting someone’s personal information,” a Reddit spokesperson told The Intercept. “Our dedicated internal Safety teams enforce these policies through a combination of automated tooling and human review. This includes blocking links to offsite domains that break our policies.”

Like X, Reddit is inconsistent in enforcing its policy. After receiving Reddit’s statement, I posted a link in the r/journalism subreddit to the WikiLeaks website. Unlike DDoSecrets, which distributes most datasets that contain people’s private information only to journalists and researchers who request access, WikiLeaks published everything for anyone to download. In 2016, for example, the group published a dataset that included private information, including addresses and cellphone numbers, for 20 million female voters in Turkey.

But Reddit doesn’t censor links to WikiLeaks like it does with DDoSecrets; if I view the r/journalism subreddit while not logged in to a Reddit account, my post with the link shows up.

[Reddit users can freely post links to the WikiLeaks website. Screenshot: The Intercept]

Russia and Indonesia Bar Access

After Russia invaded Ukraine in February 2022, hackers, most claiming to be hacktivists, compromised dozens of Russian organizations, including government agencies, oil and gas companies, and financial institutions. They flooded DDoSecrets with terabytes of Russian data, which the collective published.

One of the hacked organizations was Roskomnadzor, the Russian government agency responsible for spying on and censoring the internet and other mass media in Russia. The most recent leak of data from this agency (DDoSecrets hosts three separate leaks) includes information about Russia censoring DDoSecrets itself.

“Colleagues, good morning! Please include links in the register of violators,” a Russian censor wrote in an August 2022 email buried in a collection of 335GB of data from the General Radio Frequency Center of Roskomnadzor. A scanned court document adding ddosecrets.com to Russia’s censorship list was attached to the email.

“It was only a matter of time,” Horne said of Russia blocking access to DDoSecrets. “Our partners like IStories, OCCRP, and Meduza have it worse and have been placed on the undesirable organizations list. We are lucky that we have no staff in Russia and haven’t had to move anyone out of the country.”

Roskomnadzor did not respond to a request for comment.

Indonesia has also blocked access to the DDoSecrets website since July 21, 2023, according to data collected by the Open Observatory of Network Interference, a project that monitors internet censorship by analyzing data from probes located around the world.
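OONI’s measurements are themselves publicly queryable. As a sketch, assuming OONI’s documented measurements endpoint and its current field names, recent web-connectivity results for ddosecrets.com from probes in Indonesia can be pulled like this:

```python
# Pull recent OONI web-connectivity measurements of ddosecrets.com from
# probes in Indonesia (probe_cc=ID) via OONI's public API. Field names
# reflect the API's current schema and may change.
import json
from urllib.request import urlopen

api = (
    "https://api.ooni.io/api/v1/measurements"
    "?domain=ddosecrets.com&probe_cc=ID&limit=5"
)
with urlopen(api) as resp:
    results = json.load(resp)["results"]

for m in results:
    # anomaly=True flags a result consistent with network interference
    print(m["measurement_start_time"], "anomaly:", m.get("anomaly"))
```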

Indonesia’s Ministry of Communication and Informatics, the government agency responsible for internet censorship, did not respond to a request for comment.

On July 18, three days before the block went into effect, DDoSecrets published over half a million emails from the Jhonlin Group, a coal mining and palm oil conglomerate that has been criticized by Reporters Without Borders and Human Rights Watch for using police to jail journalists.

The post Tech Companies and Governments Are Censoring the Journalist Collective DDoSecrets appeared first on The Intercept.

]]>
https://theintercept.com/2023/09/12/ddosecrets-censorship-reddit-twitter/feed/ 0
<![CDATA[Vice Pulled a Documentary Critical of Saudi Arabia. But Here It Is.]]> https://theintercept.com/2023/09/09/vice-deleted-documentary-saudi-arabia/ https://theintercept.com/2023/09/09/vice-deleted-documentary-saudi-arabia/#respond Sat, 09 Sep 2023 11:00:00 +0000 https://theintercept.com/?p=444000 Vice’s hard-nosed coverage on Saudi Arabia changed after investment deals with the repressive kingdom. A deleted documentary is not completely gone, however.

The post Vice Pulled a Documentary Critical of Saudi Arabia. But Here It Is. appeared first on The Intercept.

]]>
In the past, Vice has documented the history of censorship on YouTube. More recently, since the company’s near implosion, it has become an active participant in making things disappear.

In June, six months after announcing a partnership deal with a Saudi Arabian government-owned media company, Vice uploaded but then quickly removed a documentary critical of the Persian Gulf monarchy’s notorious dictator, Crown Prince Mohammed bin Salman, or MBS.

The nearly nine-minute film, titled “Inside Saudi Crown Prince’s Ruthless Quest for Power,” was uploaded to the Vice News YouTube channel on June 19, 2023. It garnered more than three-quarters of a million views before being set to “private” within four days of being posted. It can no longer be seen at its original link on Vice’s YouTube channel; visitors see a message that says “video unavailable.” Vice did not respond to a request for comment on why the video was published and then made private or if there are any plans to make the video public again.

The Guardian first reported that a “film in the Vice world news Investigators series about Saudi crown prince Mohammed bin Salman was deleted from the internet after being uploaded.” Though Vice did remove the film from its public YouTube channel, it is, in fact, not “deleted from the internet” and presently remains publicly accessible via web archival services.

Vice’s description of the video, now also unavailable on YouTube, previously stated that Saudi Crown Prince Mohammed “orchestrates The Ritz Purge, kidnaps Saudi’s elites and royal relatives with allegations of torture inside, and his own men linked to the brutal hacking of Journalist Khashoggi – a murder that stunned the world.” The description goes on to state that Wall Street Journal reporters Bradley Hope and Justin Scheck “attempt to unfold the motivations of the prince’s most reckless decision-making.” Hope and Scheck are the co-authors of the 2020 book “Blood and Oil: Mohammed bin Salman’s Ruthless Quest for Global Power.”

[A screenshot from the documentary “Inside Saudi Crown Prince’s Ruthless Quest for Power,” which Vice News deleted from its YouTube channel. Image: The Intercept; Source: Vice News]

In the documentary, Hope states that Crown Prince Mohammed is “disgraced internationally” owing to the Jamal Khashoggi murder, a topic that Vice critically covered at length in the past. More recently, however, Vice has shifted its coverage of Saudi Arabia, apparently because of its growing commercial relationship with the kingdom. The relationship appears to have begun in 2017, when MBS’s younger brother, Khalid bin Salman, reportedly infatuated with the brand, set up a meeting between Vice co-founder Shane Smith and MBS.

By the end of 2018, Vice had worked with the Saudi Research and Media Group to produce promotional videos for Saudi Arabia. A few days after the Guardian piece detailing the deal came out, an “industry source” told Variety (whose parent company, Penske Media Corporation, received $200 million from the Saudi sovereign wealth fund earlier that year) that Vice was “reviewing” its contract with SRMG.

A subsequent Guardian investigation revealed that in 2020, Vice helped organize a Saudi music festival subsidized by the Saudi government. Vice’s name was not listed on publicity materials for the event, and contractors working on the event were presented with nondisclosure agreements.

In 2021, Vice opened an office in Riyadh, Saudi Arabia. The media company has gone from being “banned from filming in Riyadh” in 2018 to now actively recruiting for a producer “responsible for developing and assisting the producing of video content from short form content to long-form for our new media brand, headquartered in Riyadh.” The company lists 11 other Riyadh-based openings.

Commenting on the opening of the Riyadh office, a Vice spokesperson told the Guardian that “our editorial voice has and always will report with complete autonomy and independence.” In response to the Guardian recently asking about the rationale for the removal of the film, a Vice source stated that this was partially owing to concerns about the safety of Saudi-based staff.

In September 2022, the New York Times reported that Vice was considering engaging in a deal with the Saudi media company MBC. The deal was officially announced at the start of 2023. Most recently, the Guardian reported that Vice shelved a story which stated that the “Saudi state is helping families to harass and threaten transgender Saudis based overseas.” In response to this latest instance of apparent capitulation to advancing Saudi interests, the Vice Union issued a statement saying that it was “horrified but not shocked.” It added, “We know the company is financially bankrupt, but it shouldn’t be morally bankrupt too.”

Meanwhile, a map of Saudi Arabia reportedly hangs on a wall in Vice’s London office.

The post Vice Pulled a Documentary Critical of Saudi Arabia. But Here It Is. appeared first on The Intercept.

]]>
https://theintercept.com/2023/09/09/vice-deleted-documentary-saudi-arabia/feed/ 0
<![CDATA[U.S. Spy Agency Dreams of Surveillance Underwear It’s Calling “SMART ePANTS”]]> https://theintercept.com/2023/09/02/smart-epants-wearable-technology/ https://theintercept.com/2023/09/02/smart-epants-wearable-technology/#respond Sat, 02 Sep 2023 10:00:00 +0000 https://theintercept.com/?p=443504 The Office of the Director of National Intelligence is throwing $22 million in taxpayer money at developing clothing that records audio, video, and location data.

The post U.S. Spy Agency Dreams of Surveillance Underwear It’s Calling “SMART ePANTS” appeared first on The Intercept.

]]>
The future of wearable technology, beyond now-standard accessories like smartwatches and fitness tracking rings, is ePANTS, according to the intelligence community. 

The federal government has shelled out at least $22 million in an effort to develop “smart” clothing that spies on the wearer and their surroundings. Similar to previous moonshot projects funded by military and intelligence agencies, the inspiration may have come from science fiction and superpowers, but the basic applications are on brand for the government: surveillance and data collection.

Billed as the “largest single investment to develop Active Smart Textiles,” the SMART ePANTS — Smart Electrically Powered and Networked Textile Systems — program aims to develop clothing capable of recording audio, video, and geolocation data, the Office of the Director of National Intelligence announced in an August 22 press release. Garments slated for production include shirts, pants, socks, and underwear, all of which are intended to be washable.

The project is being undertaken by the Intelligence Advanced Research Projects Activity, the intelligence community’s secretive counterpart to the military’s better-known Defense Advanced Research Projects Agency, or DARPA. IARPA’s website says it “invests federal funding into high-risk, high reward projects to address challenges facing the intelligence community.” Its tolerance for risk has led to impressive achievements, like the Nobel Prize awarded to physicist David Wineland for IARPA-funded research on quantum computing, as well as costly failures.

“A lot of the IARPA and DARPA programs are like throwing spaghetti against the refrigerator,” Annie Jacobsen, author of a book about DARPA, “The Pentagon’s Brain,” told The Intercept. “It may or may not stick.”

According to the Office of the Director of National Intelligence’s press release, “This eTextile technology could also assist personnel and first responders in dangerous, high-stress environments, such as crime scenes and arms control inspections without impeding their ability to swiftly and safely operate.”

IARPA contracts for the SMART ePANTS program have gone to five entities. As the Pentagon disclosed this month along with other contracts it routinely announces, IARPA has awarded $11.6 million and $10.6 million to defense contractors Nautilus Defense and Leidos, respectively. The Pentagon did not disclose the value of the contracts with the other three: Massachusetts Institute of Technology, SRI International, and Areté. “IARPA does not publicly disclose our funding numbers,” IARPA spokesperson Nicole de Haay told The Intercept.

Dawson Cagle, a former Booz Allen Hamilton associate, serves as the IARPA program manager leading SMART ePANTS. Cagle invoked his time serving as a United Nations weapons inspector in Iraq between 2002 and 2006 as important experience for his current role.

“As a former weapons inspector myself, I know how much hand-carried electronics can interfere with my situational awareness at inspection sites,” Cagle recently told Homeland Security Today. “In unknown environments, I’d rather have my hands free to grab ladders and handrails more firmly and keep from hitting my head than holding some device.”

SMART ePANTS is not the national security community’s first foray into high-tech wearables. In 2013, Adm. William McRaven, then-commander of U.S. Special Operations Command, presented the Tactical Assault Light Operator Suit. Called TALOS for short, the proposal sought to develop a powered exoskeleton “supersuit” similar to that worn by Matt Damon’s character in “Elysium,” a sci-fi action movie released that year. The proposal also drew comparisons to the suit worn by Iron Man, played by Robert Downey Jr., in a string of blockbuster films released in the run-up to TALOS’s formation.

“Science fiction has always played a role in DARPA,” Jacobsen said.

The TALOS project ended in 2019 without a demonstrable prototype, but not before racking up $80 million in costs.

As IARPA works to develop SMART ePANTS over the next three and a half years, Jacobsen stressed that the advent of smart wearables could usher in troubling new forms of government biometric surveillance.

“They’re now in a position of serious authority over you. In TSA, they can swab your hands for explosives,” Jacobsen said. “Now suppose SMART ePANTS detects a chemical on your skin — imagine where that can lead.” With consumer wearables already capable of monitoring your heartbeat, further breakthroughs could give rise to more invasive biometrics.

“IARPA programs are designed and executed in accordance with, and adhere to, strict civil liberties and privacy protection protocols. Further, IARPA performs civil liberties and privacy protection compliance reviews throughout our research efforts,” de Haay, the spokesperson, said.

There is already evidence that private industry outside the national security community is interested in smart clothing. Meta, Facebook’s parent company, is looking to hire a researcher “with broad knowledge in smart textiles and garment construction, integration of electronics into soft and flexible systems, and who can work with a team of researchers working in haptics, sensing, tracking, and materials science.”

The spy world is no stranger to lavish investments in moonshot technology. The CIA’s venture capital arm, In-Q-Tel, recently invested in Colossal Biosciences, a wooly mammoth resurrection startup, as The Intercept reported last year.

If SMART ePANTS succeeds, it’s likely to become a tool in IARPA’s arsenal to “create the vast intelligence, surveillance, and reconnaissance systems of the future,” said Jacobsen. “They want to know more about you than you.”

The post U.S. Spy Agency Dreams of Surveillance Underwear It’s Calling “SMART ePANTS” appeared first on The Intercept.

]]>
https://theintercept.com/2023/09/02/smart-epants-wearable-technology/feed/ 0
<![CDATA[Meta Overhauls Controversial “Dangerous Organizations” Censorship Policy]]> https://theintercept.com/2023/08/30/meta-censorship-policy-dangerous-organizations/ https://theintercept.com/2023/08/30/meta-censorship-policy-dangerous-organizations/#respond Wed, 30 Aug 2023 16:32:15 +0000 https://theintercept.com/?p=442999 In an internal update obtained by The Intercept, Facebook and Instagram’s parent company admits its rules stifled legitimate political speech.

The post Meta Overhauls Controversial “Dangerous Organizations” Censorship Policy appeared first on The Intercept.

]]>
The social media giant Meta recently updated the rulebook it uses to censor online discussion of people and groups it deems “dangerous,” according to internal materials obtained by The Intercept. The policy had come under fire in the past for casting an overly wide net that ended up removing legitimate, nonviolent content.

The goal of the change is to remove less of this material. In updating the policy, Meta, the parent company of Facebook and Instagram, also made an internal admission that the policy has censored speech beyond what the company intended.

Meta’s “Dangerous Organizations and Individuals,” or DOI, policy is based around a secret blacklist of thousands of people and groups, spanning everything from terrorists and drug cartels to rebel armies and musical acts. For years, the policy prohibited the more than one billion people using Facebook and Instagram from engaging in “praise, support or representation” of anyone on the list.

Now, Meta will provide a greater allowance for discussion of these banned people and groups — so long as it takes place in the context of “social and political discourse,” according to the updated policy, which also replaces the blanket prohibition against “praise” of blacklisted entities with a new ban on “glorification” of them.

The updated policy language has been distributed internally, but Meta has yet to disclose it publicly beyond a mention of the “social and political discourse” exception on the community standards page. Blacklisted people and organizations are still banned from having an official presence on Meta’s platforms.

The revision follows years of criticism of the policy. Last year, a third-party audit commissioned by Meta found the company’s censorship rules systematically violated the human rights of Palestinians by stifling political speech, and singled out the DOI policy. The new changes, however, leave major problems unresolved, experts told The Intercept. The “glorification” adjustment, for instance, is well intentioned but likely to suffer from the same ambiguity that created issues with the “praise” standard.

“Changing the DOI policy is a step in the right direction, one that digital rights defenders and civil society globally have been requesting for a long time,” Mona Shtaya, nonresident fellow at the Tahrir Institute for Middle East Policy, told The Intercept.

Observers like Shtaya have long objected to how the DOI policy has tended to disproportionately censor political discourse in places like Palestine — where discussing a Meta-banned organization like Hamas is unavoidable — in contrast to how Meta rapidly adjusted its rules to allow praise of the Ukrainian Azov Battalion despite its neo-Nazi sympathies.

“The recent edits illustrate that Meta acknowledges the participation of certain DOI members in elections,” Shtaya said. “However, it still bars them from its platforms, which can significantly impact political discourse in these countries and potentially hinder citizens’ equal and free interaction with various political campaigns.”

Acknowledged Failings

Meta has long maintained that the original DOI policy is intended to curtail the ability of terrorists and other violent extremists to cause real-world harm. Content moderation scholars and free expression advocates, however, maintain that the way the policy operates in practice creates a tendency to indiscriminately swallow up and delete entirely nonviolent speech. (Meta declined to comment for this story.)

In the new internal language, Meta acknowledged the failings of its rigid approach and said the company is attempting to improve the rule. “A catch-all policy approach helped us remove any praise of designated entities and individuals on the platform,” read an internal memo announcing the change. “However, this approach also removes social and political discourse and causes enforcement challenges.”

Meta’s proposed solution is “recategorizing the definition of ‘Praise’ into two areas: ‘References to a DOI,’ and ‘Glorification of DOIs.’ These fundamentally different types of content should be treated differently.” Mere “references” to a terrorist group or cartel kingpin will be permitted so long as they fall into one of 11 new categories of discourse Meta deems acceptable:

- Elections
- Parliamentary and executive functions
- Peace and Conflict Resolution (truce/ceasefire/peace agreements)
- International agreements or treaties
- Disaster response and humanitarian relief
- Human Rights and humanitarian discourse
- Local community services
- Neutral and informative descriptions of DOI activity or behavior
- News reporting
- Condemnation and criticism
- Satire and humor

Posters will still face strict requirements to avoid running afoul of the policy, even if they’re attempting to participate in one of the above categories. To stay online, any Facebook or Instagram posts mentioning banned groups and people must “explicitly mention” one of the permissible contexts or face deletion. The memo says “the onus is on the user to prove” that they’re fitting into one of the 11 acceptable categories.

According to Shtaya, the Tahrir Institute fellow, the revised approach continues to put Meta’s users at the mercy of a deeply flawed system. She said, “Meta’s approach places the burden of content moderation on its users, who are neither language experts nor historians.”

Unclear Guidance

Instagram and Facebook users will still have to hope their words aren’t interpreted by Meta’s outsourced legion of overworked, poorly paid moderators as “glorification.” The term is defined internally in almost exactly the same language as its predecessor, “praise”: “Legitimizing or defending violent or hateful acts by claiming that those acts or any type of harm resulting from them have a moral, political, logical, or other justification that makes them appear acceptable or reasonable.” Another section defines glorification as any content that “justifies or amplifies” the “hateful or violent” beliefs or actions of a banned entity, or describes them as “effective, legitimate or defensible.”

Though Meta intends this language to be universal, equitably and accurately applying labels as subjective as “legitimate” or “hateful” to the entirety of global online discourse has proven impossible to date.

“Replacing ‘praise’ with ‘glorification’ does little to change the vagueness inherent to each term,” according to Ángel Díaz, a professor at University of Southern California’s Gould School of Law and a scholar of social media content policy. “The policy still overburdens legitimate discourse.”

“Replacing ‘praise’ with ‘glorification’ does little to change the vagueness inherent to each term. The policy still overburdens legitimate discourse.”

The notions of “legitimization” or “justification” are deeply complex, philosophical matters that would be difficult to address by anyone, let alone a contractor responsible for making hundreds of judgments each day.

The revision does little to address the heavily racialized way in which Meta assesses and attempts to thwart dangerous groups, Díaz added. While the company still refuses to disclose the blacklist or how entries are added to it, The Intercept published a full copy in 2021. The document revealed that the overwhelming majority of the “Tier 1” dangerous people and groups — who are still subject to the harshest speech restrictions under the new policy — are Muslim, Arab, or South Asian. White, American militant groups, meanwhile, are overrepresented in the far more lenient “Tier 3” category.

Díaz said, “Tier 3 groups, which appear to be largely made up of right-wing militia groups or conspiracy networks like QAnon, are not subject to bans on glorification.”

Meta’s own internal rulebook seems unclear about how enforcement is supposed to work, seemingly still dogged by the same inconsistencies and self-contradictions that have muddled its implementation for years.

For instance, the rule permits “analysis and commentary” about a banned group, but a hypothetical post arguing that the September 11 attacks would not have happened absent U.S. aggression abroad is considered a form of glorification, presumably of Al Qaeda, and should be deleted, according to one example provided in the policy materials. Though one might vehemently disagree with that premise, it’s difficult to claim it’s not a form of analysis and commentary.

Another hypothetical post in the internal language says, in response to Taliban territorial gains in the Afghanistan war, “I think it’s time the U.S. government started reassessing their strategy in Afghanistan.” The post, the rule says, should be labeled as nonviolating, despite what appears to be a clear-cut characterization of the banned group’s actions as “effective.”

David Greene, civil liberties director at the Electronic Frontier Foundation, told The Intercept these examples illustrate how difficult it will be to consistently enforce the new policy. “They run through a ton of scenarios,” Greene said, “but for me it’s hard to see a through-line in them that indicates generally applicable principles.”

The post Meta Overhauls Controversial “Dangerous Organizations” Censorship Policy appeared first on The Intercept.

]]>
https://theintercept.com/2023/08/30/meta-censorship-policy-dangerous-organizations/feed/ 0