Anonymous wrote: I think people here are catastrophizing. We don’t know what will happen, so I’m suspicious of anyone who announces they know the number with certainty. But we are in a better position than a lot of people. Clearly we are vulnerable to the RIF, and some will probably lose jobs, but I do not think we are the priority for deep cuts. Yes, a lot of agencies have cut past the point of functioning, but VA is cutting less than other agencies percentage-wise. That tells me this administration cares about our mission enough not to want to destroy it. We are mission critical. The Project 2025 agenda for VA is to privatize VHA and automate VBA. We don’t feature in it. So my guess is the RIFs focus on VHA, breaking it to the point of facilitating privatization. I think there will likely be a push to use AI more at VBA with an eye toward automation, but I don’t think it will get far, because AI hallucinates. I don’t think they’ll make moves to automate our jobs unless it works at VBA and there’s no big pushback.
My most hopeful prediction is that maybe they fire non-vet probies, and maybe they won’t even get that far if enough people take the DRP 2.0. I don’t know what will happen, but that’s my hopeful speculation.
AI hallucination is a nonissue in this context. It hallucinates because it's allowed to. There are products that do not hallucinate, such as the one Lexis provides. The hallucinations you hear about in briefs and in the news come from off-the-shelf products like GPT, which have no guardrails to prevent them. I was using AI before I joined BVA, and I am a strong proponent of its rollout at BVA.
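To make the "guardrails" point concrete: the usual technique is retrieval grounding, where the system may only answer from retrieved source text and must decline otherwise. A toy sketch of that idea (illustrative sources and names, not any vendor's actual product):

```python
# Toy sketch of retrieval-grounded generation: an answer must come from a
# retrieved source passage; with no supporting source, the system declines
# rather than inventing ("hallucinating") content. Sources are illustrative.

SOURCES = {
    "38 CFR 3.303": "Service connection requires a current disability, "
                    "an in-service event, and a nexus between the two.",
    "38 CFR 3.102": "Reasonable doubt is resolved in the claimant's favor.",
}

def retrieve(query: str) -> list[tuple[str, str]]:
    """Return (citation, text) pairs sharing at least one word with the query."""
    words = set(query.lower().split())
    return [(cite, text) for cite, text in SOURCES.items()
            if words & set(text.lower().split())]

def grounded_answer(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        # The guardrail: no source, no answer -- never fabricate a citation.
        return "No supporting source found."
    cite, text = hits[0]
    return f"{text} [{cite}]"

print(grounded_answer("what does service connection require"))
print(grounded_answer("maximum rating for tinnitus"))  # declines: no source
```

Real products do this with embeddings and citation checking rather than word overlap, but the structure is the same: the refusal branch is what makes fabricated citations impossible by construction.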
BVA is literally decades behind what the rest of the legal community is doing in terms of doc review, de-duplicating, predictive coding, analysis, etc.
I’m hopeful that in the next 10 years, BVA will widely implement AI to adjudicate cases. If all goes well, BVA can reduce attorney headcount to 100 experienced attorneys to conduct quality reviews of AI decisions.
Will AI make credibility determinations? That is a major issue with lay evidence. Also, would CAVC accept a credibility determination made by AI?
Sure, it could. It would consider the same things that we do - consistency, corroboration, testimonial infirmities, etc.
CCW can absolutely be programmed.
BVA work is very basic and formulaic.
You think the Court will accept credibility determinations made by a computer program?
You are missing the point. AI is a tool, not the lawyer. It would review and gather relevant evidence, present it, and reach a suggested conclusion, which an attorney or VLJ would then review. It is just a tool.
So many in here have clearly never used AI. Resistance to technology that could vastly improve service to veterans is a perfect example of why the general public is so quick to celebrate the termination of bureaucrats and the disruption of our lives.
The government is a bloated dinosaur, too protective of how things were and are done, in order to protect the bloat and the jobs.
That’s a complete misconception. The US population grew at a staggering rate, increasing 50% over 50 years, while the federal workforce grew only about 4%. We simply don’t have enough people to do the jobs that are required. The government isn’t bloated; Republicans are just telling you it is, and you seem to be buying it without any actual thought.
Oh for pete's sake, this discussion is annoying. I bet if you asked most people at the Board, 'would you like an AI tool to help you review lengthy case files, highlight relevant evidence, and assist you in drafting?', they would say yes. If you asked them 'do you think AI can do your job?', they would say no. And if you couple that question with an insult and a threat to everyone's livelihood, the response is going to be even more vehement. The so-called technology resistance some folks are complaining about is more a response to the inflammatory way AI is being discussed on this board than anything else, imo.
There are parts of our job that I think can absolutely be done better by AI - reviewing thousands of pages of medical records, for example. I would LOVE for an AI program to just pull out the relevant evidence. AI can also help us draft more quickly. If/when AI is introduced, some attorney jobs might shift away from drafting decisions and toward teaching the AI - after all, the law is constantly changing, so it's not like you can set up the AI and just let it run. In the future, I think AI-assisted attorneys could turn out exponentially more cases than we do now. But at the end of the day, I think a human needs to make the call on whether it's going to be a grant, a denial, or a remand. A human needs to direct the AI, not just do a quality check every 10 cases. Even if we get to the point where no technological limitation prevents AI from generating a case entirely independently, there are ethical limitations. I suspect there would be legal limitations too - if an AI independently denies a claim, someone is going to bring a lawsuit over how AI is being used. I doubt the outcome will be 'yeah, this is fine, we don't need humans at all.' Any predictions that don't take that into account are just blowing smoke.
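The "pull out the relevant evidence" step is essentially a ranking problem: score each page of the record against the issues on appeal and surface the top hits. A toy sketch using simple keyword overlap (a real system would use embeddings or a trained model; the record and terms below are made up):

```python
# Toy sketch of evidence triage: rank pages of a (simulated) claims file
# by relevance to the issues on appeal. Keyword overlap stands in for a
# real relevance model; it only makes the workflow concrete.

def score(page: str, issue_terms: set[str]) -> int:
    """Count how many words on the page match an issue term."""
    return sum(word.strip(".,;:").lower() in issue_terms for word in page.split())

def relevant_pages(record: list[str], issue_terms: set[str], top_n: int = 2) -> list[int]:
    """Return indices of the top_n highest-scoring pages, skipping zero scores."""
    ranked = sorted(range(len(record)),
                    key=lambda i: score(record[i], issue_terms),
                    reverse=True)
    return [i for i in ranked[:top_n] if score(record[i], issue_terms) > 0]

record = [
    "Dental cleaning, no complaints noted.",
    "Patient reports chronic knee pain since service; MRI ordered.",
    "Flu shot administered.",
    "MRI shows meniscal tear in left knee; pain worse with activity.",
]
issues = {"knee", "pain", "mri"}
print(relevant_pages(record, issues))  # the two knee-claim pages surface
```

The attorney still reads the surfaced pages and makes the call; the tool just decides reading order, which is where the time savings on a multi-thousand-page file would come from.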
Actually, you don’t need judges then. You still need attorneys. That’s what your argument comes down to.
That is true. If you just rely on AI, you don’t need any judges. And I think you people are all making this way too simple. AI is only a tool; it can’t be an end in itself. If I have something seriously wrong with me, I can turn to the Internet to help me diagnose it, but I’m going to see a doctor for a real opinion.
That’s why BVA needs a cadre of 100 or so experienced attorneys to review AI-drafted decisions for quality. Humans won’t be completely out of the adjudication process, at least not yet.
That’s not correct thinking. Unless you’re going to start from the position that AI is 100% correct, it’s going to take just as much time to review that decision written by AI and then compare it with the actual evidence of record. It might actually take more time.
It’s a lot easier to review the evidence and write a decision than it is to review somebody else’s decision and then comb the record for the evidence that supports it.
Agreed. People seem to think AI will just be correct. That’s not true at all. AI has been tested in courts: attorneys relied on AI to write briefs, and the results have been disastrous. AI just makes up case names and fake citations. AI has no built-in controls; it just takes input and gives output.
There’s no need to review every single AI-drafted decision. Just review samples. However, it will probably take at least 10 years for AI to reach this level, and BVA will still employ many attorneys in the meantime.
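On the sample-review idea, there is a simple back-of-envelope check: if the true error rate is p, the chance a random sample of n decisions contains at least one error is 1 - (1 - p)^n. The numbers below are illustrative only:

```python
# Back-of-envelope for sample-based QA: probability that a random sample of
# n decisions surfaces at least one error, given a true error rate p.
# Assumes errors are independent across decisions; numbers are illustrative.

def p_catch(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# A 5% error rate almost certainly shows up in a 100-decision sample,
# but a 10-decision sample misses it more often than not.
print(round(p_catch(0.05, 100), 3))
print(round(p_catch(0.05, 10), 3))
```

The catch: sampling detects *systematic* error rates; it says nothing about the individual unsampled decisions that were wrong, which is exactly the due-process objection other posters are raising.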