Board of Veterans' Appeals (Attorney Advisor)

Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I think people here are catastrophizing. We don’t know what will happen, so I’m suspicious of anyone who announces they know the number with certainty. But we are in a better position than a lot of people. Clearly we are vulnerable to the RIF, and some will probably lose jobs, but I do not think we are the priority for deep cuts. Yes, a lot of agencies have cut past the point of functioning, but VA is cutting less than other agencies percentage-wise. That tells me this administration cares about our mission enough not to want to destroy it. We are mission critical. The Project 2025 conservative agenda for VA is to privatize VHA and automate VBA. We don’t feature. So my guess is the RIFs focus on VHA, breaking it to the point of facilitating privatization. I think there will likely be a push to use AI more at VBA with an eye toward automation, but I don’t think it will get far, because AI hallucinates. I don’t think they’ll make moves to automate our jobs unless it works at VBA and there’s no big pushback.

My most hopeful prediction is that they fire only non-vet probies, and maybe they won’t even get that far if enough people take DRP 2.0. I don’t know what will happen, but that’s my hopeful speculation.


AI hallucination is a nonissue in this context. Models hallucinate when they're allowed to. There are products built to prevent it, such as the one Lexis provides. The hallucinations you hear about in briefs and in the news come from people using off-the-shelf tools like ChatGPT, which has no guardrails to prevent them. I was using AI before I joined BVA, and I am a strong proponent of its rollout at BVA.

BVA is literally decades behind what the rest of the legal community is doing in terms of doc review, de-duplicating, predictive coding, analysis, etc.


I’m hopeful that in the next 10 years, BVA will widely implement AI to adjudicate cases. If all goes well, BVA can reduce attorney headcount to 100 experienced attorneys to conduct quality reviews of AI decisions.


Will AI make credibility determinations? That is a major issue that comes into play with lay evidence. Also, would CAVC accept a credibility determination made by AI?


Sure, it could. It would consider the same things that we do - consistency, corroboration, testimonial infirmities, etc.

CCW can absolutely be programmed.

BVA work is very basic and formulaic.


Actually, you don’t need judges then. You still need attorneys. That’s what your argument comes down to.


That is true. If you just rely on AI, you don’t need any judges. And I think you people are all making this way too simple. AI is only a tool. It can’t be an end. If I have something seriously wrong with me, I can turn to the Internet to help me diagnose, but I’m going to see a doctor for a real opinion.


That’s why BVA needs a cadre of 100 or so experienced attorneys to review AI drafted decisions for quality. Humans won’t be completely out of the adjudication process, at least not yet.


That’s not correct thinking. Unless you start from the position that AI is 100% correct, it’s going to take just as much time to review a decision written by AI and then check it against the actual evidence of record. It might actually take more time.

It’s a lot easier to review the evidence and write a decision than it is to review somebody else’s decision and then go through the evidence to find what supports it.
Anonymous
Anonymous wrote:That’s not correct thinking. Unless you start from the position that AI is 100% correct, it’s going to take just as much time to review a decision written by AI and then check it against the actual evidence of record. It might actually take more time.

It’s a lot easier to review the evidence and write a decision than it is to review somebody else’s decision and then go through the evidence to find what supports it.


Agreed. People seem to think AI will just be correct. That’s not true at all. AI has already been tested in the courts: attorneys have relied on it to write briefs, with disastrous results. It makes up case names and fake citations. AI has no built-in controls; it just takes an input and gives an output.
Anonymous
Anonymous wrote:That’s not correct thinking. Unless you start from the position that AI is 100% correct, it’s going to take just as much time to review a decision written by AI and then check it against the actual evidence of record. It might actually take more time.

It’s a lot easier to review the evidence and write a decision than it is to review somebody else’s decision and then go through the evidence to find what supports it.


There’s no need to review every single AI drafted decision. Just review samples. However, it will probably take a minimum of 10 years for AI to reach this level. BVA will still employ many attorneys in the meantime.
Anonymous
Anonymous wrote:There’s no need to review every single AI drafted decision. Just review samples. However, it will probably take a minimum of 10 years for AI to reach this level. BVA will still employ many attorneys in the meantime.


Maybe I’m missing something. You want to use AI to draft decisions, but you don’t actually want to review them?
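For what it's worth, the "just review samples" idea upthread is easy to put numbers on. A minimal sketch in plain Python (the 5% error rate and 95% confidence in the note below are illustrative assumptions, not BVA figures):

```python
import math

def sample_size_for_detection(error_rate: float, confidence: float) -> int:
    """Smallest sample size n such that a random sample of n decisions
    contains at least one erroneous decision with probability >= confidence,
    assuming errors occur independently at the given rate."""
    # P(all n sampled decisions are clean) = (1 - error_rate)^n.
    # Solve (1 - error_rate)^n <= 1 - confidence for n.
    return math.ceil(math.log(1 - confidence) / math.log(1 - error_rate))

def detection_probability(error_rate: float, n: int) -> float:
    """Probability that a random sample of n decisions contains at least
    one erroneous decision."""
    return 1 - (1 - error_rate) ** n
```

At an assumed 5% error rate, you would need to read 59 randomly sampled decisions to be 95% sure of seeing even one error. And note what sampling does and does not buy you: it tells you whether the system is producing errors at some rate, but every erroneous decision outside the sample still goes out the door unreviewed, which is exactly the objection being raised here.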
Anonymous
Anonymous wrote:Sure, it could. It would consider the same things that we do - consistency, corroboration, testimonial infirmities, etc.

CCW can absolutely be programmed.

BVA work is very basic and formulaic.


You think the Court will accept credibility determinations made by a computer program?
Anonymous
Anonymous wrote:Agreed. People seem to think AI will just be correct. That’s not true at all. AI has already been tested in the courts: attorneys have relied on it to write briefs, with disastrous results. It makes up case names and fake citations. AI has no built-in controls; it just takes an input and gives an output.


"AI" is a generic term; what people usually mean is a model trained on a very, very large body of language. There are countless AI tools. Some, like ChatGPT out of the box, have no parameters restricting hallucination. Others, like Lexis+ AI, are confined to real law and cases and are designed not to hallucinate.

AI can also be instructed to give citations and authority for any position it takes. It could cite to the record for any factual proposition, just as a law journal does for virtually every proposition.

ChatGPT, for example, can be given a set of your judge's cases, told to rely only on those along with the CFR and USC, and then asked questions with instructions to cite relevant authority for its positions. I know because I do just that. I update the dataset every few months. I can grab draft language out of it 10x faster than from snippets. I can also give it fact patterns, and it will tell me how my judge has decided similar ones in the past and pull those cases. The searches are far easier and faster than what we have on SharePoint.

If I put the effort into it, I could easily have it write decisions. Could it do the entire job? No, not as I have it implemented. But does it simplify my work? Very much so.

In terms of what the rest of the legal world is doing with doc review, it wouldn't be hard to have AI doing far more of the work.

Take five minutes and look at what platforms like Everlaw are doing.

BVA is in the Stone Age with Caseflow.
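On the "confined to real law and cases" point: even without a specialized product, a draft can be checked after the fact against a vetted source list. A minimal sketch in plain Python (the citation regex and the helper name are rough illustrations, not any vendor's actual implementation; real citation grammar is far messier):

```python
import re

# Rough patterns for two common citation forms in Board decisions:
# Title 38 statute/regulation cites and reported case cites.
CITATION_PATTERN = re.compile(
    r"38\s+(?:U\.S\.C\.|C\.F\.R\.)\s+§\s*\d+(?:\.\d+)*(?:\([a-z0-9]+\))*"
    r"|[A-Z][A-Za-z.'-]+\s+v\.\s+[A-Z][A-Za-z.'-]+,\s+\d+\s+(?:Vet\.\s*App\.|F\.3d)\s+\d+"
)

def flag_unverified_citations(draft: str, vetted_sources: set[str]) -> list[str]:
    """Extract citations from a draft and return any that are not present
    in the vetted source list (candidates for hallucination review)."""
    return [cite for cite in CITATION_PATTERN.findall(draft)
            if cite not in vetted_sources]
```

This is the cheap half of what grounded products do: constrain generation to a corpus up front, then verify every cite after the fact. Anything the checker flags gets a human look before the decision goes out.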
Anonymous
Anonymous wrote:You think the Court will accept credibility determinations made by a computer program?


You are missing the point. AI is a tool. It is not the lawyer. AI would review and gather the relevant evidence, present it, and reach a suggested conclusion. An attorney/VLJ would review it. It is just a tool.

So many in here have clearly never used AI, and the resistance to technology that could vastly improve service to veterans is a perfect example of why the general public is so quick to celebrate the termination of bureaucrats and the disruption of our lives.

The government is a bloated dinosaur, too protective of how things have always been done, in order to protect the bloat and the jobs.
Anonymous
Anonymous wrote:You think the Court will accept credibility determinations made by a computer program?


It will take years before AI is capable of making credibility determinations. But, it will get there eventually.
Anonymous
Credibility is inherently subjective and not well suited to AI analysis. The criteria for a yes or no on credibility would have to be spelled out as a set of rules that could be programmed into the AI, and those rules would have to be established in law first.
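To make the "set of rules" point concrete, an explicitly programmed credibility screen would have to look something like this. A toy sketch in plain Python (the factors track the consistency/corroboration/infirmity language used upthread, but the weights are invented for illustration and exist nowhere in law or regulation):

```python
from dataclasses import dataclass

@dataclass
class LayStatement:
    """Factors commonly weighed in assessing lay evidence (illustrative)."""
    internally_consistent: bool    # does not contradict itself
    consistent_with_record: bool   # matches contemporaneous records
    corroborated: bool             # supported by other evidence or witnesses
    testimonial_infirmity: bool    # e.g., bias, interest, faulty memory

def credibility_score(s: LayStatement) -> float:
    """Toy weighted score in [0, 1]. The weights are arbitrary assumptions;
    choosing them for real is a legal/policy decision, not an engineering one."""
    score = 0.0
    score += 0.30 if s.internally_consistent else 0.0
    score += 0.30 if s.consistent_with_record else 0.0
    score += 0.25 if s.corroborated else 0.0
    score += 0.15 if not s.testimonial_infirmity else 0.0
    return score
```

The sketch makes the poster's own point: before anything like this could run, someone would have to fix the factors, the weights, and the cutoff in law, and each of those choices is exactly the subjective judgment the program was supposed to remove.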
Anonymous

Oh, for Pete's sake, this discussion is annoying. I bet if you asked most people at the Board, "Would you like an AI tool to help you review lengthy case files, highlight relevant evidence, and assist you in drafting?", they would say yes. If you asked, "Do you think AI can do your job?", they would say no. And if you couple that question with an insult and a threat to everyone's livelihood, the response is going to be even more vehement. The so-called technology resistance some folks are complaining about is more a response to the inflammatory way AI is being discussed on this board than anything else, imo.

There are parts of our job that I think can absolutely be done better by AI - reviewing thousands of pages of medical records, for example. I would LOVE for an AI program to just pull out the relevant evidence. AI can also help us draft more quickly. If and when AI is introduced, some attorney jobs might shift away from drafting decisions and toward training and supervising the AI; the law is constantly changing, so it's not as if you can set up the AI and just let it run. In the future, I think AI-assisted attorneys could turn out far more cases than we do now.

But at the end of the day, a human needs to make the call on whether it's a grant, a denial, or a remand. A human needs to direct the AI, not just do a quality check every tenth case. Even if we reach the point where no technological limitation prevents AI from generating a decision entirely on its own, there are ethical limitations, and I suspect there would be legal ones too: the first time an AI independently denies a claim, someone is going to litigate how it was used, and I doubt the outcome will be "yeah, this is fine, we don't need humans at all." Any predictions that don't take that into account are just blowing smoke.
Anonymous
Immediate BVA concerns: RIF, RTO, squashing union protections, whatever the hell “anti-Christian bias” is

Not an immediate concern: AI
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I think people here are catastrophizing. We don’t know what will happen, so I’m suspicious of anyone who announces they know the number with certainty. But we are in a better position than a lot of people. Clearly we are vulnerable to the RIF, some will probably lose jobs, but I do not think we are the priority for deep cuts. Yes, a lot of agencies have cut past the point of functioning, but VA is cutting less than other agencies percentage wise. That tells me this administration cares about our mission enough to not want to destroy it. We are mission critical. The project 2025 conservative agenda for VA is to privatize VHA and automate VBA. We don’t feature. So my guess is the RIFs focus on VHA to break it to the point of facilitating privatization. I think there will likely be a push to utilize AI more at VBA with an eye towards automation, but I don’t think it will get far because AI hallucinates. I don’t think they’ll make moves to automate our jobs unless it works at VBA and there’s no big pushback.

My most hopeful prediction is maybe they fire non vet probies, and maybe won’t even get that far if enough people take the DRP 2.0. I don’t know what will happen, but that’s my hopeful speculation.


AI hallucination is a nonissue in this context. It hallucinates because it's allowed to do so. There are products that do not hallucinate, such as that provided by Lexis. The hallucination that you hear about in briefs and the news is because folks are using products like off the shelf GPT, which has no parameters that would prevent the hallucination. I was using AI before I joined BVA, and am a strong proponent for its rollout at BVA.

BVA is literally decades behind what the rest of the legal community is doing in terms of doc review, de-duplicating, predictive coding, analysis, etc.


I’m hopeful that in the next 10 years, BVA will widely implement AI to adjudicate cases. If all goes well, BVA can reduce attorney headcount to 100 experienced attorneys to conduct quality reviews of AI decisions.


Will AI make credibility determinations? That is a major issue that comes into play with lay evidence. Also, would CAVC accept a credibility determination made by AI?


Sure, it could. It would consider the same things that we do - consistency, corroboration, testimonial infirmities, etc.

CCW can absolutely be programmed.

BVA work is very basic and formulaic.


You think the Court will accept credibility determinations made by a computer program?


You are missing the point. AI is a tool. It is not the lawyer. AI would review and gather relevant evidence. It would present it and reach a suggested conclusion. An attorney/VLJ would review it. It is just a tool.

So many in here have clearly never used AI, and the resistance to technology that could vastly improve service to veterans is a perfect example of why the general public is so quick to celebrate the termination of bureaucrats and disruptions to our lives.

The government is a bloated dinosaur that is too protective of the notion of how things were/are done in order to protect the bloat and jobs.


That’s a complete misconception. The US population grew at a staggering rate, increasing 50% in 50 years, whereas the government only grew about 4%. We simply don’t have enough people to do the jobs that are required. The government isn’t bloated; that’s just what Republicans are telling you. And you seem to be buying into it without any actual thought.
Anonymous
Anonymous wrote:
Oh for pete's sake, this discussion is annoying. I bet if you asked most people at the Board, 'would you like an AI tool to help you review lengthy case files, highlight relevant evidence, and assist you in drafting?', they would say yes. If you ask people 'do you think AI can do your job?' they would say no. And if you couple that question with an insult and a threat to everyone's livelihood, the response is going to be even more vehement. The so-called technology resistance some folks are complaining about is more a response to the inflammatory way AI is being discussed on this board than anything else imo.

There are parts of our job that I think can absolutely be done better by AI - reviewing thousands of pages of medical records, for example. I would LOVE for an AI program to just pull out the relevant evidence. AI can also help us draft more quickly. If/when AI is introduced, some attorney jobs might shift away from drafting decisions and towards teaching the AI - after all, the law is constantly changing, so it's not like you can set up the AI and just let it run. In the future, I think AI-assisted attorneys could turn out far more cases than we do now. But at the end of the day, I think a human needs to make the call on whether it's going to be a grant, a denial, or a remand. A human needs to direct the AI, not just do a quality check every 10 cases. Even if we get to the point where there's no technological limitation that prevents AI from generating a case entirely on its own, there are ethical limitations. I suspect there would be legal limitations too - if an AI independently denies a claim, someone is going to bring a lawsuit over how AI can be used. I doubt the outcome will be 'yeah, this is fine, we don't need humans at all.' Any predictions that don't take that into account are just blowing smoke.


You are correct that a human has to make the call, mostly because it will be impossible for an attorney to explain how the "Board" arrived at a conclusion if the conclusion was not actually made by someone at the Board. Will the OGC attorney just tell the Court that the conclusions in the decision were based on algorithms? That goes against the entire principle of an attorney at BVA doing de novo review.

AI would work fine if the decisions did not have to be defended in Court.
Anonymous
For anyone interested in training their own chat bot, you can grab decisions from here: https://search.usa.gov/search?query=tinnitus&affiliate=bvadecisions&start_date=2024-01-01&end_date=2025-01-23.

Example search goes to tinnitus decisions published between 1/1/24 and 1/23/25. You can edit the search for your judge, disability, time, etc.

Drop those into your AI tool of choice and instruct it to use that set as its resource, to cite the citation number when responding, and to rely on the CFR and USC rather than secondary authority like other websites. If you're using GPT, you can reduce the AI's creativity (hallucinations) by telling it to say it cannot find what you're looking for, instead of trying to please you by inventing, synthesizing, or approximating answers. Instead, have it prompt you with the option to expand your search criteria.

You'll also need to tell it to search through the entire CFR. Otherwise, when you ask it a question about something like effective date, it'll stop at the first answer instead of continuing to read and getting to the section on notice of intent to file. Think of AI like a lazy 1L.

For the upload, GPT worked fine when I combined the .txt files into a single PDF with a bookmark for each file (done automatically by my PDF software) and uploaded it. Keep track of the subjects and dates the decisions are from, and then you can make adjustments when new controlling case law comes out, or when your judge decides to switch up their templates.

This is so much faster than Research Tools and you don't need to use terms and connectors.
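For anyone who wants to script the steps above, here's a rough Python sketch. Only the search URL pattern comes from the example link; the helper names and the system-prompt wording are my own illustration, not an official BVA or vendor prompt.

```python
# Sketch of the workflow described above: build a decision-search URL
# (same pattern as the tinnitus example link) and a guardrail prompt
# that mirrors the post's instructions. Prompt text is illustrative.
from urllib.parse import urlencode

BASE_URL = "https://search.usa.gov/search"

def build_search_url(query, start_date, end_date, affiliate="bvadecisions"):
    """Build a BVA decision-search URL; dates are YYYY-MM-DD strings."""
    params = {
        "query": query,
        "affiliate": affiliate,
        "start_date": start_date,
        "end_date": end_date,
    }
    return f"{BASE_URL}?{urlencode(params)}"

# Guardrails mirroring the post: cite the citation number, stick to
# primary authority, read the whole CFR part, and admit "not found"
# rather than improvise an answer.
SYSTEM_PROMPT = """\
Use only the uploaded BVA decisions, the CFR, and the USC as sources.
Cite the citation number of any decision you rely on.
Do not use secondary authority such as websites or articles.
If you cannot find an answer, say so and offer to expand the search
criteria; never synthesize or approximate an answer.
Search the entire relevant CFR part before answering; do not stop at
the first section that seems responsive."""

url = build_search_url("tinnitus", "2024-01-01", "2025-01-23")
print(url)
```

Swap in your own judge or disability as the query, and paste the SYSTEM_PROMPT (or your variant of it) as the first instruction before uploading the combined PDF.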
Anonymous
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:
Anonymous wrote:I think people here are catastrophizing. We don’t know what will happen, so I’m suspicious of anyone who announces they know the number with certainty. But we are in a better position than a lot of people. Clearly we are vulnerable to the RIF, some will probably lose jobs, but I do not think we are the priority for deep cuts. Yes, a lot of agencies have cut past the point of functioning, but VA is cutting less than other agencies percentage wise. That tells me this administration cares about our mission enough to not want to destroy it. We are mission critical. The project 2025 conservative agenda for VA is to privatize VHA and automate VBA. We don’t feature. So my guess is the RIFs focus on VHA to break it to the point of facilitating privatization. I think there will likely be a push to utilize AI more at VBA with an eye towards automation, but I don’t think it will get far because AI hallucinates. I don’t think they’ll make moves to automate our jobs unless it works at VBA and there’s no big pushback.

My most hopeful prediction is maybe they fire non vet probies, and maybe won’t even get that far if enough people take the DRP 2.0. I don’t know what will happen, but that’s my hopeful speculation.


AI hallucination is a nonissue in this context. It hallucinates because it's allowed to do so. There are products that do not hallucinate, such as that provided by Lexis. The hallucination that you hear about in briefs and the news is because folks are using products like off the shelf GPT, which has no parameters that would prevent the hallucination. I was using AI before I joined BVA, and am a strong proponent for its rollout at BVA.

BVA is literally decades behind what the rest of the legal community is doing in terms of doc review, de-duplicating, predictive coding, analysis, etc.


I’m hopeful that in the next 10 years, BVA will widely implement AI to adjudicate cases. If all goes well, BVA can reduce attorney headcount to 100 experienced attorneys to conduct quality reviews of AI decisions.


Will AI make credibility determinations? That is a major issue that comes into play with lay evidence. Also, would CAVC accept a credibility determination made by AI?


Sure, it could. It would consider the same things that we do - consistency, corroboration, testimonial infirmities, etc.

CCW can absolutely be programmed.

BVA work is very basic and formulaic.


You think the Court will accept credibility determinations made by a computer program?


You are missing the point. AI is a tool. It is not the lawyer. AI would review and gather relevant evidence. It would present it and reach a suggested conclusion. An attorney/VLJ would review it. It is just a tool.

So many in here have clearly never used AI, and the resistance to technology that could vastly improve service to veterans is a perfect example of why the general public is so quick to celebrate the termination of bureaucrats and disruptions to our lives.

The government is a bloated dinosaur that is too protective of the notion of how things were/are done in order to protect the bloat and jobs.


That’s a complete misconception. The US population grew at a staggering rate, increasing 50% in 50 years, whereas the government only grew about 4%. We simply don’t have enough people to do the jobs that are required. The government isn’t bloated; that’s just what Republicans are telling you. And you seem to be buying into it without any actual thought.


And where were computers 50 years ago? 50 years ago, VA was using paper files and typewriters. VA is a dinosaur when it comes to the use of technology.