Making Sense of Technology Problems Framework


This framework provides strategies for resolving different types of technology problems, based on our experiences. We introduce a couple of terms for understanding how technology tends to fail, along with strategic questions to consider. We base this on an analysis of past issues and describe the important considerations for each issue type. You’ll come away with a better understanding of when technology needs a straightforward fix, and when it has more complex technical or political problems.


How do public benefits technologies tend to fail? What harms can they cause? And what can advocates do about it?

This resource introduces a simple framework for advocates challenging benefits tech. Benefits policies at the state and federal levels are the reason many people are not able to access the supports they need, but technology can also complicate or block access. While challenging the technology used to administer benefits will not address scarcity and underfunding in benefits programs, it can get people benefits they are entitled to under existing policies. This framework focuses specifically on technology problems because few existing resources help advocates address them. Also, challenges focused on technology, despite their limitations, can make a difference in the short term while the longer policy fights continue. By distinguishing between the types of technology problems that we describe below, advocates can understand if and how challenging technology can help people access benefits.

Even though policy is not the main focus of this guide, all technology problems must be understood as a part of broader issues around access to social supports. States design programs to restrict eligibility based on immigration status, arrest or conviction records, or involvement with the child welfare (or “family policing”) system. These policies are based on racist, xenophobic, and patriarchal ideas about who “deserves” social support. The false narrative around “deserving” and “undeserving” people pushes Black, Indigenous, and other people of color into the criminal legal system instead of social support programs, and undermines advocacy to create well-funded and truly supportive programs. Additionally, other policies force disabled and poor people to choose between restrictive programs or receiving support in the ways they prefer. For more resources on these issues, check out our Political Education Annotated Bibliography.

When challenging benefits technology, you may have strategic questions like: Would it be better to call on the government to discontinue the technology’s use entirely, or to fix the immediate problems to make it work correctly? What does “correctly” mean in practice? Would a better appeals process for adverse decisions make a difference? Would it be better to instead advocate for an expansion of the system’s criteria for granting access to benefits? Or are the technology problems ultimately a distraction from the more fundamental problems, like the lack of program funding? The answers to these questions depend on the types of technology problems that are causing harms.

Types of Technology Problems

Problems with benefits technologies can be broadly categorized as either “logistical issues” or “measurement issues.” Logistical issues occur when the technology is not operating according to the unambiguous set of rules it is supposed to follow—the technology is simply “broken.” Someone using the system cannot do something they are supposed to be able to do, because the technology isn’t working as expected. This can be due to inconsistency between written policy and technical design, or between technical design and implementation. Measurement issues occur when the technology is correctly operating according to a set of rules, but those rules are just one of many possible interpretations of a policy. The technology standardizes decisions about eligibility or allocation, and certain people’s needs are not considered by this set of rules. In other words, the system has a simplified model of the world that includes certain people’s situations but not others’, and can be biased or created from irrelevant data. By definition, every standardized system does this to some extent.

We sort technology issues into the loose categories of logistical and measurement issues because, practically, tactics may differ when confronting a logistical problem (where everyone agrees on the need for a fix) versus a measurement issue (where inherent discretion in the underlying policy, or political disagreements, may complicate your advocacy).

It’s also the case that sometimes technology is just implementing fundamentally punitive policy, and the primary issue is not that it’s “broken” or discretionary. For example: a program’s application website is only online from 9am to 5pm on weekdays, a system is used for facilitating Medicaid work requirements, or a system is created only for flagging atypical information as “fraud.” In these cases, the state has decided to use technology to further restrict access to programs, which means advocates need to focus on fighting the policy, and only focus on the technology to the extent that it might help delay or limit policy implementation.

A single system might have both logistical and measurement issues. They aren’t neat boxes — understanding these categories only matters because different problems require different approaches.

We’re going to walk you through two case studies from our Case Study Library. For each, we’ll give a brief overview of the case and then identify the logistical and measurement issues while also indicating policy issues that are not technological.

Let’s start with our case study on the eligibility system for Medicaid Long-Term Services and Supports in Missouri:

In 2018, the Missouri Department of Health and Senior Services (DHSS) proposed a new algorithm for determining eligibility for home and community based services (HCBS). DHSS designed the algorithm to include a subset of factors related to people’s conditions from the 200+ question InterRAI assessment, in order to calculate an eligibility score. Changes to which factors were included and how they were weighted meant that as many as 66% of currently eligible people would not be eligible, according to the first draft of the new scoring algorithm. The algorithm failed to account for things relevant to people’s level of care needs. For example, the algorithm considered people’s mobility issues with getting in and out of bed, but not with getting up and down stairs. In addition, the algorithm contained basic logic errors that meant that some factors that DHSS intended to consider were not actually used to determine people’s eligibility scores.
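To make these problems concrete, here is a minimal hypothetical sketch of a weighted scoring algorithm with a silent logic error. The factor names, weights, and the specific bug are invented for illustration; they are not the actual Missouri algorithm or InterRAI factors.

```python
# Hypothetical sketch only -- factor names, weights, and the bug are invented.
WEIGHTS = {
    "bed_mobility": 3,            # difficulty getting in and out of bed
    "eating": 2,
    "medication_management": 2,
    # "stairs" is left out entirely: a measurement choice, not a coding bug
}

def eligibility_score(assessment: dict) -> int:
    """Add up weights for every factor where the person reported difficulty."""
    score = 0
    for factor, weight in WEIGHTS.items():
        answer = assessment.get(factor)
        # Logic error: answers arrive as the strings "yes"/"no", but this
        # comparison checks for the boolean True, so the weight is never
        # added even when the person reported difficulty.
        if answer is True:
            score += weight
    return score

# Someone who reports difficulty with every listed factor still scores 0.
print(eligibility_score({"bed_mobility": "yes", "eating": "yes",
                         "medication_management": "yes"}))  # prints 0
```

A logic error like this is invisible in the written policy and in the assessment itself; it only shows up when you compare the scores the system produces against what the rules say they should be.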

There were several technology issues with Missouri’s proposed eligibility algorithm. We’ll point out the issues and identify whether they are logistical or measurement.

  • Logistical issues: Although DHSS included certain factors to use for scoring, the algorithm contained basic logic errors that meant these factors would never actually be counted in someone’s eligibility score: This is a logistical issue because the logic errors in the scoring algorithm meant that it did not function as intended by the state.
  • Measurement issues: Changes to which factors were included and how they were weighted meant that as many as 66% of currently eligible people would not be eligible under the new scoring algorithm: The changes in factors that would have terminated people from the program are measurement issues, because they are issues with decisions the state agency made about which of the questions on the InterRAI assessment are relevant for assessing people’s level of care needs, and which are not.
    • The algorithm considered people’s mobility issues with getting in and out of bed, but not with getting up and down stairs: This is a specific example of a measurement issue where a factor from the InterRAI assessment (whether someone has difficulty with stairs) was not included by the state, even though advocates argue it is relevant to someone’s level of care needs.

Looking at both the logistical and measurement issues with this scoring algorithm shows that fixing the logic errors would only partially address the problems with the system. There are also measurement issues to address so that people can qualify for the care they need. In this case, highlighting the impact of the measurement problems prevented the state from rolling out a system that would have cut two-thirds of people from benefits they were previously receiving. While it did not address the underfunding of home and community-based care, it did have a significant impact in preventing a wave of terminations.

Now let’s look at our case study on Social Security Administration Supplemental Security Income (SSI) terminations:

In order to receive SSI, beneficiaries cannot have more than a certain amount of assets ($2,000 for individuals and $3,000 for couples), and anyone whose assets go above the limit gets their assistance cut off. For years, people enrolled in SSI would mysteriously lose the financial assistance they were eligible for. The source of the problem was that the system would deposit benefits early when the first day of the month was on a weekend or federal holiday, but would not consider this when running asset checks. Because of this logistical flaw, the early deposit made it look like people were holding an extra month of benefits, so their own SSI payments were counted against them and they were automatically terminated. The New York Legal Assistance Group eventually filed a class-action lawsuit against the Social Security Administration, and won a settlement forcing the administration to fix the error.
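As a rough illustration of how this kind of calendar flaw can cascade into a termination, here is a minimal hypothetical sketch. The dates, dollar amounts, and function names are invented for illustration and are not drawn from SSA’s actual systems.

```python
# Hypothetical sketch only -- dates, amounts, and logic are illustrative.
from datetime import date, timedelta

ASSET_LIMIT_INDIVIDUAL = 2000  # SSI resource limit for an individual

def deposit_date(first_of_month: date) -> date:
    """Benefits scheduled for the 1st are paid on the prior business day
    when the 1st falls on a weekend (federal holidays omitted for brevity)."""
    d = first_of_month
    while d.weekday() >= 5:  # Saturday = 5, Sunday = 6
        d -= timedelta(days=1)
    return d

def passes_asset_check(end_of_month_balance: float) -> bool:
    """Flawed check: looks only at the raw end-of-month balance and never
    excludes a next-month benefit that happened to be deposited early."""
    return end_of_month_balance <= ASSET_LIMIT_INDIVIDUAL

# June 1, 2024 falls on a Saturday, so June's benefit lands on Friday, May 31.
savings = 1500.00           # genuinely under the $2,000 limit
benefit = 900.00            # illustrative monthly SSI payment
pay_day = deposit_date(date(2024, 6, 1))      # -> 2024-05-31
may_31_balance = savings + benefit            # 2400.00

# The flawed check counts the early deposit as an asset, so the person is
# flagged as over the limit even though their own savings are under it.
print(pay_day, passes_asset_check(may_31_balance))  # 2024-05-31 False
```

Conceptually, the fix is to exclude an early-deposited benefit from the prior month’s asset count before running the check, so that only the person’s actual resources are measured against the limit.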

There was one main technology issue in this SSA case:

  • Logistical issue: The system would deposit benefits early when the first day of the month was on a weekend or federal holiday, but would not consider this when running asset checks: This is a logistical issue because the system was not correctly carrying out a well-defined administrative task.

One of the causes of the harm (incorrect asset checks and automated terminations) was a flaw in the system’s design that, when fixed, would enable people to receive their benefits. It did not, however, address the fact that the asset limit used to disqualify people is incredibly low and essentially keeps people in poverty. Fixing the asset check mechanism doesn’t fix this policy, but it does reduce harm by preventing people whose assets are in fact under the limit from being erroneously cut off from their benefits.

Now that we’ve shown some examples of harmful benefits technology and identified the types of issues, we’re going to walk you through each type of issue in more detail and show why logistical and measurement issues require different interventions.

Identifying and Fighting Logistical Issues

Logistical issues are present when the technology doesn’t function according to its technical specifications, or the technical specifications don’t match regulatory and legal requirements. Often, logistical issues prevent people from accessing programs they are eligible for or even getting information about why they were denied benefits. In other words, someone clearly ought to be receiving services or notices based on the rules, but they are not. The hallmark of a logistical issue is a technical system that does not carry out a well-defined administrative task as expected.

Examples of logistical issues include:

  • Notices that are automatically populated with incorrect information or are sent to the wrong address
  • A system that crashes during real-world use
  • Documents and applications that get lost by the system
  • User interface design that prevents people from completing an application
  • Data-matching errors, like confusing two people who have the same name
  • Design that’s not accessible to vision-impaired users or users on mobile devices
  • Any other time when the technology prevents a user from doing something they’re supposed to be able to, or carries out a defined function incorrectly

Logistical issues happen because vendors and developers do not always correctly translate policy or system design requirements into code, and programmers can simply make typos or forget use cases in their designs. These issues are often overlooked because states and the federal government lack comprehensive requirements for proactive testing, piloting, and public audits. When these issues do surface, states claim they cannot afford to fix them, citing technical complexity or budget disputes with contractors, even though states are legally required to make these social support programs available.

Usually, addressing logistical issues means forcing the state to correct the glitches in the technology, or at least to create a systemic workaround so no one has to suffer the consequences of the glitches. Better design, testing, and error handling can prevent logistical issues. And non-technical alternatives for applying for benefits can help people get around logistical issues when they are present. However, fixing a logistical issue does not, on its own, change an unfair policy behind the technology.

The strategy for logistical problems is to fix the malfunctioning parts of the system. We want these systems to function so that people can easily access their supports, and so that the state is accountable for operating its programs according to its own regulations. Advocates have had some success with class-action lawsuits claiming that a system violates certain state or federal laws like the Americans with Disabilities Act or legal protections like due process. For example, the SSI class-action lawsuit settlement forced the Social Security Administration to create more checks in the system to prevent incorrect terminations. However, litigation can be time- and resource-intensive. Settlements can take a long time, and contractors can be very slow to fix their systems. Some interesting additional or alternative tactics include state officials punishing vendors for project failures, or advocates trying to introduce legislation to hold vendors and the state accountable for malfunctioning public benefits technology. These tactics work towards the goal of fixing the system so that there is no gap between what the system is supposed to do and what it actually does.

In short: if logistical issues are causing some or all of the harms you are seeing with benefits technology, then pressuring the state to fix the technology is a worthwhile approach that will reduce harm.

Identifying and Fighting Measurement Issues

Measurement issues are about the technology’s role in standardizing policy and whether the technology can assess people’s actual needs. Often, measurement issues prevent people from being found eligible for the amount of supports they need, or any supports at all. Analyzing measurement issues can expose inequities in access to services and supports by looking at decisions about people made at scale. There are countless examples of standardized tools that discriminate against people based on race, gender, sexuality, disability, and other excluded and marginalized attributes and identities: for example, in housing, hiring, and healthcare.

Examples of measurement issues in benefits technology include:

  • Standardized assessments that don’t take into account people’s expressed needs or preferences
  • Data about one group of people that’s used to make scoring systems for a meaningfully different group of people
  • Assessments that fail to account for intersections of conditions or certain conditions altogether
  • Assessments or input data that reflect and perpetuate racial disparities in care
  • Assessments that are less accurate for or beneficial to people of color
  • Any assessments that use misleading proxies (like cost) to determine outcomes
  • Fraud detection models that target individual applicants by using metrics like multiple applications coming from the same IP address (see the sketch after this list)
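To illustrate the last item, here is a minimal hypothetical sketch of a proxy-based fraud flag. The rule, threshold, and field names are invented for illustration and are not taken from any real state system.

```python
# Hypothetical sketch only -- the rule, threshold, and field names are invented.
from collections import Counter

def flag_shared_ip(applications, threshold=3):
    """Flag every applicant whose application came from an IP address that
    submitted `threshold` or more applications. The shared IP is a misleading
    proxy: a public library, shelter, or benefits clinic also produces many
    applications from a single address."""
    counts = Counter(app["ip"] for app in applications)
    return {app["id"] for app in applications if counts[app["ip"]] >= threshold}

# Four people apply from the same public library computer: all four are
# flagged for "fraud" review, delaying or blocking legitimate applications.
library_apps = [{"id": f"applicant-{n}", "ip": "203.0.113.7"} for n in range(1, 5)]
print(flag_shared_ip(library_apps))  # all four applicant IDs
```

This is a measurement issue rather than a bug: the code does exactly what it was designed to do, but the proxy it measures does not distinguish fraud from ordinary shared internet access.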

How do you know when you’re dealing with measurement issues? The hallmark of a measurement issue is rules created to standardize an inherently uncertain or subjective situation. In other words, people’s experiences are being put into boxes that hide the unavoidable discretion of the government’s decisions. Advocates and people receiving support cannot always see the rules created by the system, only their output (or sometimes the rules are public, but so complex that they are difficult to understand).

In general, standardized measurement is difficult because people are not standardized. Any attempts at standardized measurement involve someone with power deciding to pay attention to certain things and not others. While there is no perfect measurement, standardized measurements can be more or less useful or harmful based on how they are designed and used. More significant measurement issues often happen because of budget limits and bad policy: assessments are often used to justify service cuts to certain groups when a program is not funded properly. In other words, states may turn to a standardized assessment to covertly make political choices about who gets care, while claiming that the system is objectively assigning resources based on need.

The good-faith reason for these standardized assessments is to limit the discretion of the people doing evaluations, which has historically been a source of bias or discrimination. But even this is misguided: people are still not empowered to simply ask for and receive the support they know they need. Standardized measurement also turns any biases built into an assessment into systemic problems. Measurement problems are often related to underfunding, but even with more funding, the design of an assessment may exclude certain people from accessing support.

Usually, addressing measurement problems means creating different avenues for people to have their needs met, or at least modifying assessments to align better with the population they are used on. But depending on how complex an assessment is, people in similar situations might not all benefit from tweaking an assessment—which is different from logistical fixes, which tend to help everyone with similar issues. Also, forcing people to be assessed ignores that many people would prefer to describe and receive the services they know they need.

A strategy for addressing measurement issues should begin with getting more information on how the system was designed and what it does. If possible, audits or testing, like in Missouri, can expose the impact of the system. One way to obtain this information is through public records requests. Our Key Questions Guide includes questions about the methods used to create the system, which can reveal major issues with its foundation. With specific information about how the system works, you may see places to ask for adjustments that would help certain people. However, if the program is underfunded and the technology is trying to distribute a limited amount of resources, a systemic solution may ultimately have to focus on funding. Measurement problems may also be addressed by simply giving people the power to request the services they know they need.

Successful strategies for addressing measurement issues are still emerging, as advocates have found that trying to adjust the assessments and algorithms may only result in certain populations getting appropriate supports while others do not. In some cases, it might be more effective to just block the assessment tool from being used at all, as advocates in Idaho did for a version of their assessment. In Idaho, advocates showed how the assessment was derived from faulty data that didn’t relate to the population that the assessment was being used on, and the judge agreed the assessment was arbitrary. However, states often respond by introducing a new algorithm. It may be helpful to create legislation that makes all eligibility rules transparent and eases the burden of appealing denials, so that people can maintain their benefits on an individual basis if the assessment does not work for them.

In short: measurement issues are more complex to address than logistical issues. In some cases you may want to advocate for improving the way that the system measures people’s needs. In other cases you may be pointing out the measurement issues and advocating for the state to stop using the system.

Conclusion

We wrote this framework to illustrate the reasons why you might adopt different goals and strategies for different types of issues, even if many of the tactics for pressuring agencies are similar. Even though technical fixes to address logistical issues may seem complex, states are generally accountable for correctly administering benefits programs according to their own policies. On the other hand, measurement instruments attempt to turn a highly variable human experience into numbers, and the most effective advocacy has been to focus on their arbitrariness and try to alter or get rid of them.

Our country’s policy approach to social supports tends to focus on ensuring that nobody gets too much rather than on maximizing people’s access. This shapes decisions about which technologies states invest in, and it explains the lack of investment in testing, piloting, and auditing tools before they are used. It also shapes what the technology is designed to do: for example, automated fraud detection, terminations without human review, or benefits applications that require the use of a computer. Because of this, we want to move towards proactive work to get better contracting processes for technical systems, transparency around the goals and designs of any technical systems, and ultimately more funding for programs and the end of punitive barriers to social supports.