If the Govt Blacklists Anthropic, what companies will take collateral damage? What will it cost the US?

Anonymous
Anonymous wrote:So, I want guardrails for all AI but is it feasible to keep them on while our adversaries don’t use them? How does it work?


If the government wants to use autonomous killing machines, then they shouldn’t have fired so many mathematicians and scientists.

Trump better learn to code pretty fast lol

Anonymous
Anonymous wrote:
Anonymous wrote:So, I want guardrails for all AI but is it feasible to keep them on while our adversaries don’t use them? How does it work?


If the government wants to use autonomous killing machines, then they shouldn’t have fired so many mathematicians and scientists.

Trump better learn to code pretty fast lol



Fingers are too tiny
Anonymous
Meanwhile, is that some AI drone tech accidentally shooting down a balloon and another US drone in TX? No thanks!
Anonymous
Just use xAI
Anonymous
And Trump ordered Anthropic out
Anonymous
This issue is soooo much bigger than red vs blue. Why aren’t any of you concerned about the global impact of any country using AI to make government decisions?
Anonymous
Anonymous wrote:This issue is soooo much bigger than red vs blue. Why aren’t any of you concerned about the global impact of any country using AI to make government decisions?


How is AI making government decisions? It's not sentient, it just compiles all available data and spits out what it is trained to spit out. Someone will have to give it input to go autonomously kill.
Anonymous
Anonymous wrote:
Anonymous wrote:This issue is soooo much bigger than red vs blue. Why aren’t any of you concerned about the global impact of any country using AI to make government decisions?


How is AI making government decisions? It's not sentient, it just compiles all available data and spits out what it is trained to spit out. Someone will have to give it input to go autonomously kill.


Yes, there is talk of letting AI do xyz if abc threats present themselves without human intervention. It isn’t a case of just asking AI to spit out an assessment as I understand it. It is giving it power to take action.
Anonymous
The responses from the right appear to be that our enemies won't have these limitations. That does not make sense.

Japan used Kamikaze pilots in WWII. We still did not.
Anonymous
Anonymous wrote:
Anonymous wrote:So many companies that deal with the government use Claude AI by Anthropic. If Anthropic is blacklisted for refusing to allow the government to use Claude for certain military and surveillance projects, the government will pivot to other AI makers. These companies do not have guardrails like Anthropic and the cutover to other AI will probably cost $$$$, right?

This situation seems complicated and costly with many companies affected. How many other contractors are likely to follow Anthropic's lead? I’m happy a company has put guardrails on its AI, but at the same time it seems that our defenses may already be too reliant on Claude AI.

Can someone with more knowledge explain the situation and its trickle down impacts better?


I don't know, but watching some of the most recent international developments, I feel that our general security/intelligence forces have become much better. E.g., Venezuela, Iran, Mexico.


Let's take those case by case:

-Venezuela: anyone could be successful when the number two is conspiring against the number one and gives the US military the keys to the kingdom.
-Mexico: The Mexican government likely has the cartels fully wired (probably with US help), but the political decision to go after them is an entirely different process.
-Iran: Iran just brutally crushed a massive uprising and turned off the internet. The most powerful military in the world may or may not lob some bombs on the orders of its wildly unstable leader. The reason for doing so is ... oh right, the wildly unstable leader hasn't said.

Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:So, I want guardrails for all AI but is it feasible to keep them on while our adversaries don’t use them? How does it work?


If the government wants to use autonomous killing machines, then they shouldn’t have fired so many mathematicians and scientists.

Trump better learn to code pretty fast lol



Fingers are too tiny


In his usual crazy old fart style, he called Anthropic "left wing nut jobs." But as usual, the only nut job in the room rhymes with "dump."
Anonymous
Anonymous wrote:If an AI is going to make decisions without humans, why bother having humans in government roles? Seems like this is where we are headed without AI guardrails.


If AI is going to make decisions to shoot humans without human intervention, we're all doomed. But that's exactly the kind of insane shit that Hegseth is trying to pressure AI companies into doing. Hegseth can f all the way off, and any company that goes along with this needs to be publicly shamed and boycotted.
Anonymous
Anonymous wrote:Well.

Trump orders federal agencies to ‘immediately cease’ using Anthropic technology

https://thehill.com/policy/technology/5759399-trump-bans-anthropic-tech/

"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!"

It is a radical left stance to be against mass surveillance and have concerns about fully autonomous weapons systems???!!

Pigs are flying!!!!!!!


Instead, the US is allowing a demented lunatic who belongs in a nursing home and sh&ts himself to dictate how our great military fights and wins wars.
Anonymous
These people are morons. A company is not a national security threat just because they refuse to do business with you.
Anonymous
Anonymous wrote:These people are morons. A company is not a national security threat just because they refuse to do business with you.


They were willing to do business with them. They just didn’t want to help them make murderbots.