A Utah Department of Motor Vehicle employee shows a sample of a digital driver's license on a mobile phone at a demonstration in August 2021 in Salt Lake City. The Transportation Security Administration is finalizing plans to allow for the use of mDLs at airport security checkpoints. George Frey/Getty Images

TSA to allow mobile driver’s licenses after REAL ID goes into effect

The final rule will allow states that have issued mobile driver’s licenses to apply for TSA-issued waivers of certain REAL ID requirements.

The Transportation Security Administration is moving to allow travelers to continue using mobile driver’s licenses to verify their identities at airport security checkpoints after enforcement of REAL ID-compliant documentation requirements begins next year.

In a final rule published in the Federal Register on Friday, TSA said it was establishing a temporary process that would allow states to apply for waivers of certain REAL ID requirements for mobile driver’s licenses — or mDLs — after enforcement of the higher security standards begins on May 7, 2025.

The new measure, effective Nov. 25, will allow airports and other federal facilities to accept mDLs for identity verification if the issuing state has received a TSA waiver.

Congress passed the REAL ID Act in 2005, which established more rigorous requirements for driver’s licenses and government-issued identifications in the wake of the September 11th terrorist attacks. Implementation of the standards has since been delayed, however, and lawmakers subsequently amended the law in 2020 to clarify that mDLs are also covered by the REAL ID requirements.

TSA currently accepts mobile driver’s licenses issued by 11 states at 27 airports across the country. The verified personal identification documents are stored on travelers’ cell phones or in apps. The agency said in a press release that it “has a goal of accepting mDLs in all airports, by expanding the technology nationwide.”

According to the text of the final rule, the effort “arises from TSA's desire to accommodate and foster the rapid pace of mDL innovation, while ensuring the intent of the REAL ID Act and regulations are met.”

TSA also said in its rulemaking that it plans to issue “a subsequent rule that would set comprehensive requirements for mDLs.” 

It noted, however, that the new measure is a necessity in the absence of federal regulations since states “may become locked-in to existing solutions and could face a substantial burden to redevelop products acceptable to federal agencies under this future rulemaking.”

Some lawmakers have already pushed for Congress to take more of an active role in crafting standards around the use of mDLs and other digital IDs.

Rep. Bill Foster, D-Ill., who has been a prominent voice in Congress for the broader adoption of digital IDs, introduced legislation in September that would create a task force within the Executive Office of the President to, in part, “improve access and enhance security between physical and digital identity credentials.”

The measure was proposed after Foster and Rep. Clay Higgins, R-La., introduced another bill in June that would require TSA to submit a report to lawmakers on its use of digital identities and their potential impact on homeland security. Their legislation passed a House Homeland Security Committee markup in June but has not received a vote in the full House.

Charles Worthington, chief artificial intelligence officer at the Department of Veterans Affairs, testifies before a congressional panel on February 24, 2024. MANDEL NGAN/AFP via Getty Images

VA’s head of AI sees his role as a ‘bridge’ to future use

Meet Charles Worthington, the Department of Veterans Affairs’ CAIO. He envisions a future where AI components are built into standard technology and software.

The Department of Veterans Affairs has been testing out a variety of AI use cases to determine how the tools can enhance veteran care and benefit services. As the department’s Chief Artificial Intelligence Officer and Chief Technology Officer, Charles Worthington said a large part of his work has been helping VA “bridge from where we are now to that future where AI is just kind of a component of most systems.”

Worthington recently spoke with Nextgov/FCW about how the VA is working to onboard new AI-powered capabilities and the department’s focus on making personnel comfortable with using the emerging technologies. This interview has been edited for length and clarity.

Nextgov/FCW: Who do you report to in your organization? How many people are on your team? And what are your plans for growth?

Worthington: As CTO I report to our chief information officer, and that didn't change with the chief AI officer designation. The CAIO role though has some specific requirements that are outlined in the department memo. These include advising the rest of the agency leadership on AI topics and kind of coordinating the VA’s AI activities via the governance council that we've stood up, as well as just a general kind of coordinating role. 

We have a relatively small group of folks dedicated to AI on the team that reports to me, about five full-time employees. But we also coordinate with a lot of the other parts of VA that also have nascent and growing AI teams, most especially with our colleagues in the Veterans Health Administration. They have an emerging technology area that has a big AI focus and we work closely with that group. In other parts of VHA and the rest of the agency, there's also a growing number of folks that are dedicated to AI that we also play kind of a coordinating role with. 

VHA recently centralized some of their technology-focused offices into an organization called the Digital Health Office. A number of the AI capabilities that were created within VA’s National Artificial Intelligence Institute are now part of that office. The role they have been playing is really helping identify and prioritize use cases in the healthcare space. They're playing a big role in basically facilitating the launch and the running of some specific pilots, and are also going to be playing a big role in the governance and risk management of AI in the healthcare space.

Nextgov/FCW: How do you see your role evolving over time?

Worthington: If you think ahead 10 years, I think it's pretty likely that features that we would now categorize as AI will be just a part of most software systems in use. We’ll think of them as not necessarily a different category of tech, but rather, just a way that a lot of software works. 

So I think in a lot of ways, the job that we have as chief AI officers right now is to help bridge from where we are now to that future where AI is just kind of a component of most systems. And to do that effectively and safely, we're going to have to basically update the various ways that the government does technology management to account for the things that are unique about AI.

It's a little hard to predict how to how it's going to evolve, but I do think that the next year or two, it's basically trying to figure out how can we safely apply this technology in the government and what sort of changes do we need to make to our existing policies and procedures to make that true. And then I think those changes will be the sorts of things that can kind of carry us into that future where these AI components are just kind of built into most systems that we use.

Nextgov/FCW: What is your role in things like AI acquisition and workforce development?

Worthington: One of the four work streams that we're focused on in our AI governance framework is workforce development, and we've got a whole group of folks dedicated to that. We're lucky to have participation from our human resources office as well as key HR leaders across the agency, and they have put out this AI workforce blueprint that kind of lays out some approaches that we're going to be experimenting with. The work, I think, is going to be helping our staff understand how these tools work and how they can use them. And so as we're starting to roll out software that has these features built in, we're also thinking about how we can train people and get them comfortable with what they can and can't do.

I think a lot of the conversation around AI workforce has tended to focus on how we get those technical experts that understand how to create AI systems, which is definitely important. But there's this whole other part of the work stream, which is more about, “how do we make our existing staff understand how to use these tools and what their limitations are to maximize productivity and the impact they can have along those lines?”

When it comes to onboarding new tools, there's kind of two ways to approach this. One is use case by case, where you're trying to solve specific problems or make a specific business process work better. We've got a number of things in production, as shown in our current use case inventory, and also a number of things that are sort of in pilot. I think starting with problems and then figuring out what technology is best suited to solve that problem is the best way to proceed with any sort of tech rollout. 

For our most important problems — for example, clinician burnout is one of these problems that we’re trying to tackle — we’re thinking about ways in which technology, including AI, might be able to help with that. That's what led us down the path of testing these ambient scribe products, which can help with clinician workflow and help with some of the grunt work of writing down a clinical note, and potentially make that a little bit faster and less tedious. So I like approaching things that way, where we're starting with the problem in mind and then figuring out what tech might fit and then going and acquiring it if we need it.

Nextgov/FCW: Can you discuss your approach to the combined CAIO and CTO role and how they complement one another?

Worthington: There’s a fair amount of overlap. The mission of VA’s CTO … is to basically enable VA to improve veterans’ lives with better software. I think artificial intelligence is probably the most important thing that will change about software in the next 10 years. So in that sense, I think it's right in line with what we’re already supposed to be doing. We're constantly looking for ways to improve VA’s use of software so that we can have a more positive impact on veterans.

Nextgov/FCW: What is your biggest priority right now?

Worthington: We're learning alongside everyone else. I think we're definitely lucky to have had the Veterans Health Administration, and the investment that they made in the AI capability early on. I think that actually reflects this sort of rich culture of innovation and experimentation, especially in our healthcare administration. The VHA has done a lot of groundbreaking research, not just in AI, but dating back to things like the pacemaker and the nicotine patch. So our Health Administration is really good at thinking about things they could improve that would help the delivery of healthcare to veterans and nationwide. So I think that ethos of experimentation – but really rigorous experimentation grounded in research principles – has been really helpful for VA to get started. I think that really allowed for us to think through adopting those trustworthy AI principles, because I think a lot of those are kind of in line with how we think about running the VA already. 

We want to be very fair and equitable, but we also want to take advantage of existing technology or even create new technologies that will help us achieve the mission better.

The VA plans to update the AI algorithm it uses for predicting veterans at risk of self-harm to now include factors specific to women veterans. P_Wei/Getty Images

VA is updating its AI suicide risk model to reach more women

The department is looking to add military sexual trauma and intimate partner violence as risk factors for suicide in its predictive model for identifying veterans at high risk of self-harm.

The Department of Veterans Affairs is in the process of adding additional risk factors to its artificial intelligence-powered tool for identifying veterans at high risk of suicide to better account for the experiences of women. 

The effort comes after a report released in March by nonprofit Disabled American Veterans warned that the department’s suicide prevention tool, the Recovery Engagement and Coordination for Health - Veterans Enhanced Treatment (REACH VET) program, used retired male servicemembers as its baseline. 

The organization noted that the REACH VET model does not factor in military sexual trauma, or MST. VA data shows that one in three women and one in 50 men have confided in their clinical provider that they have experienced MST. The report recommended that VA revise its algorithm to include risk factors for MST, as well as intimate partner violence. 

A subsequent investigation conducted by The Fuller Project, in partnership with Military Times and Military.com, also found that the program’s algorithm considered being a white man more of an indicator of potential self-harm than factors that solely or largely affect women. 

Naomi Mathis, Disabled American Veterans’ assistant national legislative director, told Nextgov/FCW that, as VA adopts modern capabilities like REACH VET, “you would think that these tools would enhance or enable the VA system to work better for the modern service or the modern veteran.”

REACH VET, which fully launched in 2017, uses a predictive model to analyze data from veterans’ electronic health records and identify those in the top 0.1% tier of suicide risk. It has identified approximately 6,700 veterans per month for additional healthcare assistance.

The program currently uses 61 indicators across six different categories to identify veterans at risk of self-harm, including “demographics, diagnosis, medications, utilization and interaction terms, such as the interaction between marital statuses with gender.” 
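For a sense of how a model like this is typically structured, here is a minimal sketch of a risk model with an interaction feature that flags the top 0.1% of predicted risk. The feature names, data and model choice are illustrative assumptions; VA has not published REACH VET’s implementation in this form.

```python
# Toy risk model loosely patterned on the article's description of
# REACH VET: indicator features, an interaction term, and a top-0.1%
# risk tier. All data here is synthetic; this is NOT VA's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

age = rng.integers(20, 90, n)          # demographics
prior_dx = rng.integers(0, 2, n)       # diagnosis flag
med_count = rng.poisson(2, n)          # medications
visits = rng.poisson(4, n)             # utilization
married = rng.integers(0, 2, n)
male = rng.integers(0, 2, n)
marital_x_sex = married * male         # interaction term, as in the article

X = np.column_stack([age, prior_dx, med_count, visits,
                     married, male, marital_x_sex])
y = rng.binomial(1, 0.002, n)          # rare, synthetic outcome labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# Flag the top 0.1% of predicted risk for clinical outreach.
risk = model.predict_proba(X)[:, 1]
cutoff = np.quantile(risk, 0.999)
print(f"{(risk >= cutoff).sum()} of {n} records flagged")
```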

A VA official told Nextgov/FCW that the department is “in the process of updating the REACH VET predictive algorithm to consider additional variables specific to women veterans.”

The new risk factors under consideration include intimate partner violence and MST, as well as medical conditions that affect women, such as pregnancy, fibroids, endometriosis and ovarian cysts.

“As we update the model, it will be evaluated for performance and bias before it is deployed,” the spokesperson said, adding that the goal is to launch the new algorithm in early 2025.

VA has worked in recent years to adopt novel approaches — such as REACH VET — to support veterans in need of mental healthcare services, even as the number of retired servicemembers who have died by suicide remains high. 

According to VA’s 2023 National Veteran Suicide Prevention Annual Report — which was based on data from 2021 — the rate of veteran suicide increased by 11.6% from 2020. The same review found a 24.1% increase in the age-adjusted suicide rate for women veterans from 2020 to 2021, compared to an increase of 6.3% among male veterans during the same period. 

Mathis said it was a positive step that VA was finally moving to add MST as a risk variable in its upgraded REACH VET algorithm but questioned why it didn’t factor it into its initial model, especially since it affects both men and women.

“You have underreporting of MST, and you're saying, ‘MST is not statistically significant,’” she added. “Well, then you're missing all of those people that do feel comfortable enough to report it.”

The VA representative pushed back on claims that the current REACH VET tool prioritizes men, saying that it only prioritizes individuals at the highest risk of self-harm. They noted, however, that “sex is included as a variable in the model and being male does have a positive value.”

The department has already submitted REACH VET’s predictive model and use case for a “safe AI review,” as outlined in President Joe Biden’s October 2023 executive order on the secure and trustworthy use of AI.

“We have tested the current REACH VET algorithm and are also currently testing all candidate models for the new REACH VET algorithm to ensure that they fairly estimate suicide risk across key demographic populations,” the spokesperson said.
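As a rough illustration of that kind of pre-deployment check, the snippet below compares a candidate model’s discrimination (AUC) between demographic groups; a large gap would suggest the model ranks risk less accurately for one population. The data, group labels and scores are synthetic placeholders, not VA’s evaluation pipeline.

```python
# Hypothetical subgroup evaluation: compare AUC for women vs. men
# using synthetic outcomes and synthetic model scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 50_000
group = rng.choice(["women", "men"], size=n)   # demographic attribute
y_true = rng.binomial(1, 0.002, n)             # synthetic outcomes
y_score = rng.random(n)                        # candidate model's risk scores

for g in ("women", "men"):
    mask = group == g
    print(f"AUC for {g}: {roc_auc_score(y_true[mask], y_score[mask]):.3f}")
```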

Concerns about specific suicide risk factors not being included in REACH VET’s predictive algorithm have also attracted the attention of some top lawmakers. 

Senate Veterans’ Affairs Committee Chair Jon Tester, D-Mont., introduced legislation on Sept. 25 to improve mental healthcare services for veterans, with the proposal also seeking to address some of the concerns raised around the factors included in REACH VET’s algorithm. 

The legislation included a provision that would require VA to modify REACH VET to include “risk factors weighted for women,” including MST and intimate partner violence.

Tester said in a statement to Nextgov/FCW that VA needs to add the variables to its model to ensure that all veterans are receiving the mental health services they need.

“Women veterans are the fastest growing demographic of veterans, and my legislation will ensure VA takes the experiences of women veterans, specifically survivors of military sexual trauma and intimate partner violence, into account when treating veterans’ mental health to help make sure no veteran is falling through the cracks,” he added.

LPETTET/Getty Images

AI tools helped Treasury recover billions in fraud and improper payments

Risk screening, check fraud detection and more have helped the government recover more than $4 billion, Treasury announced.

The payment integrity arm of the Treasury Department says that new AI-powered tools are helping it spot fraudsters and bad actors before they access government money. 

Treasury prevented and recovered over $4 billion in fraudulent and improper payments in fiscal 2024, in part due to those tools, it announced Thursday. That is up from $652.7 million the year prior, a figure the department has confirmed includes $154.9 million in prevented improper payments and $346.2 million in recovered ones.

Specifically, the department’s Office of Payment Integrity houses tools open to other federal agencies and to federally funded programs administered by states, and it’s using machine learning to examine large amounts of data and flag potentially fraudulent schemes, a Treasury spokesperson told Nextgov/FCW. 
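Treasury hasn’t said which techniques it uses beyond “machine learning,” but one common approach to flagging suspect payments at this scale is unsupervised anomaly detection over payment features, sketched below with invented data; nothing here reflects the department’s actual systems.

```python
# Illustrative anomaly detection over synthetic payment features:
# amount, payments-per-payee that day, and age of the payee record.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
normal = rng.normal([500, 1, 400], [200, 0.5, 150], size=(10_000, 3))
odd = rng.normal([9_000, 12, 2], [1_000, 3, 1], size=(20, 3))
payments = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.002, random_state=0).fit(payments)
flags = detector.predict(payments)             # -1 marks anomalous payments
print(f"flagged {(flags == -1).sum()} of {len(payments)} payments for review")
```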

Advances in the use of machine learning to catch check fraud have resulted in $1 billion in recoveries, the department says.

Treasury says that its “risk-based screening” prevented $500 million in bad payments, and that “identifying and prioritizing high-risk transactions” stopped $2.5 billion. Finally, “efficiencies in payment processing schedule” yielded $180 million in prevention.

The department can’t give more details or specific examples due to “the nature of the schemes,” the spokesperson said.

“We’ve made significant progress during the past year,” Treasury Deputy Secretary Wally Adeyemo said in a statement. “We will continue to partner with others in the federal government to equip them with the necessary tools, data, and expertise they need to stop improper payments and fraud.” 

The department has also expanded the reach of its services by finding new users, it says. 

Among the office’s offerings is the Do Not Pay service, which lets agencies cross-check multiple data sources to verify eligibility before issuing payments to a vendor, grantee, loan recipient or person receiving benefits. 

Earlier this year, the Labor Department announced with the Treasury that state unemployment agencies would have streamlined access to the system. The jobless aid system saw an uptick in fraudsters submitting applications during the pandemic, often by using identity theft to try to get benefits.

Do Not Pay’s data includes the Social Security Administration’s Death Master File, which the Treasury got access to on a pilot basis late last year after Congress included it in an appropriations law. 

The aptly named SSA database houses information about deceased individuals so that agencies can cross-check outgoing payments and make sure the government doesn’t send money to dead people, as the IRS did during the coronavirus pandemic.
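The underlying pattern is a straightforward eligibility screen: hold any outgoing payment whose payee appears in the death file. A minimal sketch, with an invented record layout and toy IDs standing in for the real Death Master File:

```python
# Toy pre-disbursement screen against a deceased-persons list.
from dataclasses import dataclass

@dataclass
class Payment:
    payee_id: str
    amount: float

death_file = {"111-11-1111", "222-22-2222"}    # stand-in for SSA's file

def screen(payments: list[Payment]) -> tuple[list[Payment], list[Payment]]:
    """Split a batch into approved and held-for-review payments."""
    approved = [p for p in payments if p.payee_id not in death_file]
    held = [p for p in payments if p.payee_id in death_file]
    return approved, held

batch = [Payment("111-11-1111", 1400.0), Payment("333-33-3333", 250.0)]
ok, held = screen(batch)
print(f"approved {len(ok)}, held {len(held)} for eligibility review")
```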

The $4 billion-plus number being touted by the Treasury includes both fraud and improper payments.

While fraud includes willful misrepresentation, improper payments include those that shouldn’t have been made or were made in the wrong amount. That can be the fault of the government, as opposed to the person receiving a payment or benefit.

The Treasury is the “government's central disbursing agency,” it says, making it “uniquely positioned” to help federal programs proactively mitigate the risk of financial fraud “by leveraging data and emerging technologies.”

The department disburses over 1.4 billion payments accounting for more than $6.9 trillion to over 100 million people annually.