Anonymous wrote: No. Who is allowed to use AI for their job?!
literally everyone?
We’re not allowed to use it at my massive employer. National security field.
Models are available in FedRAMP High environments. They're using it at DoD, the FFRDCs, the NSA. Who has banned this?
We’re private sector. What they use at the agencies doesn’t apply to us.
If govies are using a technology and you're not allowed to, that's really saying something about your org.
No. It says something about the US government.
We would have to develop our own closed model in order for it to be secure, or at least mostly secure. As you can see from the responses to this thread, there is no business case here. It doesn’t save time and will only cost money. Anything we build will be old tech five minutes later. It doesn’t make financial sense and it doesn’t make workflow sense.
No one is "developing their own closed model". There are open-source models you can run locally on your own servers. If the NSA can figure out how to do this (including with proprietary models in FedRAMP-authorized environments), your security concerns are incorrect. And the fact that you think you would need to develop your own models means you are detached enough from how this works that "there's no business case here" is almost certainly incorrect. Do you write? Does your org produce code? Do you synthesize text data in any way? If so, you have a business case.
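To make the "run locally" point concrete: here is a minimal sketch assuming the open-source Ollama runtime is installed on the server; the model name is just an example, and nothing in the exchange leaves the local machine:

```shell
# Download open-weights model files once, then run inference
# entirely on local hardware; no prompt or output goes to a cloud API.
ollama pull llama3
ollama run llama3 "Summarize this paragraph: ..."
```

Locked-down shops typically run this kind of setup on an air-gapped or internal-network host, so the usual data-exfiltration objection to cloud AI tools doesn't apply.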
You’re weirdly invested in this tool getting into every nook and cranny. Why?
You’re weirdly invested in commenting on a thread where you don’t even understand the technology.
I want organizations doing important work to be doing it well. And the fact that there are apparently no economic pressures forcing your taxpayer-funded org to keep up tells me this isn't the only symptom of the problem.
Taxpayer funded? You're responding to someone in the private sector after complaining that person doesn't use federal tools. You're not even coherent.
A thing I notice about needed and useful innovations is that people adopt them without needing to be constantly told how vital it is that they be adopted.
Anonymous wrote: We’re private sector. What they use at the agencies doesn’t apply to us.
Well level up.
Level up why? To use AI in the workplace?
Yes hon. It’s like saying you don’t wanna “go to the cloud”, or learn call waiting, or learn how to use the Internet, or that you’d rather use a typewriter than a computer.
You better educate yourself on AI or you’re gonna be left behind.
NP. It's not an issue of education, it's that AI is not fit for every possible purpose. It's a tool, and you don't use the same tool for every job. To use your example, people do in fact prefer typewriters in remote places with unreliable electricity: you could crank a generator to power your tablet, or you could just get on with your work on a typewriter.
My workplace could, in theory, completely reconfigure everything so that my job could be done by AI. It would be a massive undertaking and the only payout would be replacing my one little job. Not worth it.
You don’t understand AI enough to really discuss it and I can tell by just reading what you wrote.
Anonymous wrote: Claude does everything for me now. It’s probably shaved 5-10 hours a week off the BS writing and editing I have to do.