How should state agencies use artificial intelligence?

We’ve now got a committee for that.

When the Texas Workforce Commission became inundated with jobless claims in March 2020, it turned to artificial intelligence.

Affectionately named for the agency’s former head Larry Temple, who had died a year earlier, “Larry” the chatbot was designed to help Texans sign up for unemployment benefits.

Like a next-generation FAQ page, Larry would field user-generated questions about unemployment cases. Using AI language processing, the bot would determine which of the answers prewritten by human staff best fit the user’s unique phrasing of the question. The chatbot answered more than 21 million questions before being replaced by Larry 2.0 last March.
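
The article doesn’t say what was under Larry’s hood, but the matching step it describes, scoring a user’s phrasing against a bank of staff-written answers, can be sketched with off-the-shelf tools. Here’s a minimal, hypothetical Python version using TF-IDF cosine similarity; the FAQ entries are made up, and a production chatbot would use a stronger language model, but the retrieval idea is the same.

```python
# A hypothetical sketch of FAQ-style answer matching, not TWC's actual system.
# Requires scikit-learn. The FAQ entries below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up staff-written FAQ entries: canonical question -> canned answer.
faq = {
    "How do I apply for unemployment benefits?": "Apply online through the portal.",
    "When will I receive my first payment?": "Payments usually begin within four weeks.",
    "How do I reset my password?": "Use the reset link on the login page.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_question: str) -> str:
    """Return the canned answer whose canonical question best matches the user's phrasing."""
    similarity = cosine_similarity(vectorizer.transform([user_question]), question_vectors)
    return faq[questions[similarity.argmax()]]

print(answer("when do i get my first benefits payment"))  # matches the payment FAQ
```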

Larry is one example of the ways artificial intelligence has been used by state agencies. Adoption of the technology in state government has grown in recent years. But that acceleration has also sparked fears of unintended consequences like bias, loss of privacy or losing control of the technology. This year, the Legislature committed to taking a more active role in monitoring how the state is using AI.

“This is going to totally revolutionize the way we do government,” said state Rep. Giovanni Capriglione, R-Southlake, who wrote a bill aimed at helping the state make better use of AI technology.

In June, Gov. Greg Abbott signed that bill, House Bill 2060, into law, creating an AI advisory council to study and take inventory of the ways state agencies currently utilize AI and assess whether the state needs a code of ethics for AI. The council’s role in monitoring what the state is doing with AI does not involve writing final policy.

Artificial intelligence describes a class of technology that emulates and builds upon human reasoning through computer systems. The chatbot uses language processing to understand users’ questions and match them to predetermined answers. New tools such as ChatGPT are categorized as generative AI because the technology generates a unique answer based on a user prompt. AI is also capable of analyzing large data sets and using that information to automate tasks previously performed by humans. Automated decision-making is at the center of HB 2060.

More than one third of Texas state agencies are already utilizing some form of artificial intelligence, according to a 2022 report from the Texas Department of Information Resources. The workforce commission also has an AI tool for job seekers that provides customized recommendations of job openings. Various agencies are using AI for translating languages into English and call center tools such as speech-to-text. AI is also used to enhance cybersecurity and fraud detection.

[…]

As adoption of AI has grown, so have worries around the ethics and functionality of the technology. The AI advisory council is the first step toward oversight of how the technology is being deployed. The seven-member council will include one member each from the state House and Senate, an executive director and four individuals appointed by the governor with expertise in AI, ethics, law enforcement and constitutional law.

Samantha Shorey is an assistant professor at the University of Texas at Austin who has studied the social implications of artificial intelligence, particularly the kind designed for increased automation. She is concerned that if technology is empowered to make more decisions, it will replicate and exacerbate social inequality: “It might move us towards the end goal more quickly. But is it moving us towards an end goal that we want?”

Proponents of using more AI view automation as a way to make government work more efficiently. Harnessing the latest technology could help speed up case management for social services, provide immediate summaries of lengthy policy analysis or streamline the hiring and training process for new government employees.

However, Shorey is cautious about the possibility of artificial intelligence being brought into decision-making processes such as determining who qualifies for social service benefits, or how long someone should be on parole. Earlier this year, the U.S. Justice Department began investigating allegations that a Pennsylvania county’s AI model intended to help improve child welfare was discriminating against parents with disabilities and resulting in their children being taken away.

AI systems “tend to absorb whatever biases there are in the past data,” said Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University. Artificial intelligence that is trained on data that includes any kind of gender, religious, race or other bias is at risk of learning to discriminate.
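
To make that concrete, here’s a toy illustration with fabricated data (assuming NumPy and scikit-learn): train a simple model on historical decisions that were skewed against one group, and the model dutifully learns the skew.

```python
# A toy illustration (fabricated data) of how a model trained on biased
# historical decisions reproduces the bias. Requires NumPy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One legitimate feature (say, a qualification score) and one protected
# attribute (group 0 or 1) that should be irrelevant to the decision.
score = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Simulated history: at the same score, group 1 was approved less often.
approved = score + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n) > 0.5

model = LogisticRegression().fit(np.column_stack([score, group]), approved)

# The learned weight on `group` comes out strongly negative: the model has
# absorbed the historical bias, even though group is irrelevant on the merits.
print(model.coef_)
```

The fix isn’t as simple as dropping the protected column, either: other features can proxy for it, which is part of why auditing these systems is hard.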

In addition to the problem of flawed data reproducing social inequality, there are also privacy concerns around the technology’s dependence on collecting large amounts of data. What the AI could be doing with that data over time is also driving fears that humans will lose some control over the technology.

“As AI gets more and more complicated, it’s very hard to understand how these systems are working, and why they’re making decisions the way they do,” Venkatasubramanian said.

As someone who works in cybersecurity, I can attest that AI is heavily used in our field. (It’s probably more accurate to call it “machine learning,” but I’ll leave the taxonomy to those with more expertise on that matter.) This is because we handle absolutely mind-boggling amounts of data (like, multiple terabytes of incoming data to analyze each day), and this is the only way to get a handle on it. Whatever email system you’re using, your inbox has less spam and phishing in it because of AI. There are plenty of good use cases for this.
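
For the curious, the core of that spam and phishing filtering is ordinary text classification. Here’s a minimal, hypothetical sketch in Python with made-up training examples; real filters train on enormous corpora and weigh many more signals, like headers, URLs, and sender reputation.

```python
# A minimal sketch of the text classification behind spam/phishing filtering.
# Training examples are invented; requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_messages = [
    "Verify your account now to avoid suspension",   # phishing
    "URGENT: claim your prize before midnight",      # spam
    "Meeting moved to 3pm, see updated agenda",      # legitimate
    "Here are the quarterly numbers you asked for",  # legitimate
]
labels = ["bad", "bad", "ok", "ok"]

# Bag-of-words counts feeding a naive Bayes classifier, a classic spam-filter setup.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_messages, labels)

print(classifier.predict(["Claim your account prize now"]))  # -> ['bad']
```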

There are also legitimate concerns about hard-wiring biases into decision-making processes, and of course about replacing human workers with automated systems. I’m in agreement with the masses that some form of regulation needs to be in place for AI going forward, though what that looks like and how it will be enforced are very much up in the air. I’m also persuaded by the argument that a good countervailing force against the overuse of AI is stronger protections for labor, with the SAG and WGA strikes and resulting settlements showing a good path forward. This new AI advisory council is due to present a report to the Lege by the end of the year, and I look forward to seeing what they come up with.

