SILICON VALLEY'S BLOOD MONEY: How Tech Giants Are Profiting from AI-Powered Mass Murder
Peter Thiel and the Israelis
I had read some things about this, but they were fairly vague, and I had the sense that Minority Report, both the Philip K. Dick story and the Tom Cruise film, was becoming reality. I was unaware, however, of just how foul and evil these tech giants are, and how much more evil they can become when combined with countries like Israel and the United States, and with international agencies. I knew that many of our current billionaires display traits of sociopathy, but this seems to be sociopathy in the extreme. And what is frightening is that these are the people running the United States.
This was thrust back into my mind today as I listened to Alastair Crooke discuss the role of Mosaic AI and other programs in the murder of Iranian scientists and in the lead-up to the sneak attack on Iran by Israel, the USA, the UK, and others.
Listen here as Crooke opens your mind… and then read on!
Who or what better to ask about this than Perplexity AI? I thought it might dance around the issue, but it was quite straightforward. Here is what it provided.
Unless you've been very up on this, it is going to shock you!
The Allegations Against Palantir's AI Systems and Their Implications for AI Ethics
The Mosaic System and Iranian Nuclear Intelligence
According to recent reports, Palantir's Mosaic AI system has played a controversial role in international nuclear intelligence operations. The system, originally developed for counterinsurgency operations in Iraq and Afghanistan, has been used by the International Atomic Energy Agency (IAEA) since 2015 to monitor Iran's nuclear activities.
Mosaic processes over 400 million data points including satellite imagery, social media posts, personnel logs, and metadata to predict nuclear threats and hostile intent. The system was designed not just to monitor what a state has done, but to predict what it might do based on behavioral models derived from counterterrorism doctrine. This predictive capability has raised significant concerns among nuclear experts who argue that "predictive AI and safeguards don't mix" and that it "turns monitoring into pretext".
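To see why critics frame this as a structural problem rather than a tuning problem, consider a deliberately simplified sketch of how a predictive "threat scoring" pipeline of this general kind can work. This is purely illustrative and hypothetical: Mosaic's actual architecture is proprietary and unauditable, and every feature name, weight, and threshold below is invented. The point is that the hidden weights are the "behavioral model," and an outsider has no way to inspect them.

# Hypothetical illustration only -- NOT Mosaic's actual design, which is
# proprietary. Every feature, weight, and threshold here is invented.
from dataclasses import dataclass

@dataclass
class Observation:
    satellite_anomalies: float   # normalized count of unexplained site changes
    procurement_flags: float     # dual-use purchases matched to watchlists
    personnel_signals: float     # movements or absences of named scientists
    metadata_chatter: float      # scored social-media and metadata activity

# Hidden, hand-tuned weights: this dictionary IS the "behavioral model."
# Change the weights and the same facts yield a different verdict -- the
# core of the auditability objection.
WEIGHTS = {
    "satellite_anomalies": 0.40,
    "procurement_flags": 0.30,
    "personnel_signals": 0.20,
    "metadata_chatter": 0.10,
}
THRESHOLD = 0.5  # above this, the system reports "hostile intent"

def threat_score(obs: Observation) -> float:
    # Collapse heterogeneous evidence into one opaque number.
    return sum(getattr(obs, name) * w for name, w in WEIGHTS.items())

def flags_hostile_intent(obs: Observation) -> bool:
    # Monitoring becomes prediction: a score crossing a threshold is
    # reported as intent, not as observed activity.
    return threat_score(obs) >= THRESHOLD

if __name__ == "__main__":
    obs = Observation(0.7, 0.4, 0.5, 0.2)
    print(round(threat_score(obs), 2), flags_hostile_intent(obs))  # 0.52 True

Nothing in such a pipeline distinguishes what a state has done from what it might do; that distinction lives entirely in the unpublished weights and threshold, which is precisely the experts' complaint.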
The controversy deepened in June 2025 when Iran leaked documents allegedly showing IAEA chief Rafael Grossi sharing Mosaic outputs with Israel before a crucial censure vote. This intelligence sharing reportedly provided diplomatic cover for Israel's subsequent military strikes on Iranian nuclear facilities, including the Natanz enrichment facility.
AI Targeting Systems in Gaza
Israel has deployed multiple AI systems in its military operations in Gaza, including systems that appear to involve Palantir technology. Three primary AI systems have been identified:
Lavender is an AI targeting system that has marked tens of thousands of Palestinians as suspected militants for potential assassination. According to Israeli intelligence officers who spoke to +972 Magazine, the system was used to identify up to 37,000 Palestinians during the early stages of the war, with officers given sweeping approval to adopt Lavender's kill lists with minimal human oversight.
"Where's Daddy?" is a tracking system that follows suspected targets to their homes, often leading to strikes on family residences. One Israeli officer stated: "We were not interested in killing operatives only when they were in a military building... On the contrary, the IDF bombed them in homes without hesitation, as a first option".
The Gospel automatically reviews surveillance data to recommend bombing targets to human analysts. Former IDF chief Aviv Kohavi stated the system could produce 100 bombing targets per day compared to the 50 targets human analysts might produce per year.
Palantir's Direct Involvement with Israeli Military
Palantir has maintained a strategic partnership with the Israeli Defense Ministry, formalized in January 2024 following a meeting between Israeli defense officials and Palantir co-founders Peter Thiel and Alex Karp in Tel Aviv. The company has publicly stated "We stand with Israel" and held board meetings in Tel Aviv, describing their work in the region as "vital".
Ethical Concerns and Calls for Regulation
The deployment of AI in targeting operations has raised profound ethical concerns among experts and human rights organizations. Dr. Elke Schwarz from Queen Mary University of London notes that "the integration of AI-enabled weapon systems facilitates the objectification of human targets, leading to heightened tolerance for collateral damage".
The UN and various NGOs have called for global regulation of autonomous weapons systems, with a proposed ban by 2026. (A lot of murder can go on between now and then.)
Critics argue that these systems risk "eroding accountability" and breaching international humanitarian law.
The concern is that AI warfare creates what some experts call "digital dehumanisation" where algorithms make life-and-death decisions with minimal human oversight.
Former Employee Concerns
Thirteen former Palantir employees have condemned the company's work with government agencies, stating that the company has violated its original ethical principles. They wrote: "Early Palantirians understood the ethical weight of building these technologies. These principles have now been violated, and are rapidly being dismantled at Palantir Technologies".
The Broader Implications for AI Development
These developments highlight several critical issues in AI ethics and governance:
Accountability and Transparency: Mosaic's algorithms are described as "unauditable" and "proprietary," making it difficult to assess their accuracy or potential biases. This lack of transparency is particularly concerning when such systems influence life-and-death decisions.
Mission Creep: Systems originally designed for one purpose (counterinsurgency) are being repurposed for others (nuclear monitoring, urban warfare), potentially without adequate consideration of the different ethical requirements.
Human Oversight: Reports suggest that human review of AI targeting recommendations can be as brief as 20 seconds, raising questions about meaningful human control over lethal decisions.
The allegations surrounding Palantir's AI systems represent a significant moment in the debate over AI ethics and military applications. Whether these specific claims are fully accurate or not, they highlight the urgent need for robust governance frameworks, transparency requirements, and international agreements governing the use of AI in warfare and intelligence operations.
The stakes are particularly high given the potential for these technologies to be adopted more widely, potentially lowering the threshold for conflict and reducing human agency in critical decisions about the use of force.
An understatement if I ever heard one!