|
So many companies that deal with the government use Claude AI by Anthropic. If Anthropic is blacklisted for refusing to allow the government to use Claude for certain military and surveillance projects, the government will pivot to other AI makers. These companies do not have guardrails like Anthropic, and the cutover to other AI will probably cost $$$$, right?
This situation seems complicated and costly, with many companies affected. How many other contractors are likely to follow Anthropic's lead? I'm happy a company has put guardrails on its AI, but at the same time it seems that our defenses may already be too reliant on Claude AI. Can someone with more knowledge explain the situation and its trickle-down impacts better? |
| I think there will be backlash against companies that go along with Hegseth's demands. |
Yes. Claude right now isn't as good as some, but I am using it exclusively. The more it gets used, the better the training (one hopes). |
Agree. Is it better to have defensive holes? Is it possible that other governments could take over this way? -OP |
Is there a single Claude? If not, you’re not necessarily training the one the government uses. |
| If an AI is going to make decisions without humans, why bother having humans in government roles? Seems like this is where we are headed without AI guardrails. |
| The makers of these AI programs could do an amazing thing and use the trust and power people put in AI to psyop MAGA out of existence. The fact that they haven’t done it tells you where their loyalties are. |
|
This is all so nerve-wracking...
https://www.washingtonpost.com/technology/2026/02/27/anthropic-pentagon-lethal-military-ai/ The Trump administration is embarking upon a vast expansion of the military’s use of AI. But leading figures in the development of the technology have long had ethical and legal concerns about giving AI the power to make life-and-death decisions or turbocharging surveillance. Emil Michael, a former Uber executive who is now under secretary of defense for research and engineering, has taken the lead in the discussions with Anthropic. He has argued the government and not individual tech firms should have the final say in how the technology is used, according to a person familiar with the discussions. |
| This is particularly amusing for those of us in the traditional defense industry. These chuds like Hegseth are constantly talking about bringing in commercial companies and Silicon Valley into the defense industry, and how DoD is going to adopt commercial practices for contracting to entice them in, etc. A Silicon Valley company bites and offers its tech, and then they turn around and start threatening all kinds of penalties and blacklisting the second they don’t like the company’s standard commercial terms. |
| Anthropic is the only AI company that employs a philosopher. That tells me something, along with her riposte to Elon Musk, who said she's unqualified because she has no children: "I think it depends on how much you care about people in general vs. your own kin." |
I don't know, but watching some of the most recent international developments, I feel that our general security/intelligence forces have become much better, e.g. Venezuela, Iran, Mexico. |
| AI models are not interchangeable. ChatGPT doesn't have the coding ability of Claude. ChatGPT is trash compared to Claude. Kegsbreath just needs to butt out and stop interfering with private companies. He's really only qualified to comment on getting a cabinet position you're unqualified for based solely on looks, or on leaking classified data on Signal. |
| Is this an apolitical issue for Anthropic? Will Anthropic be willing to drop the guardrails when the Dems control the WH? |
Yes. |