Thoughts on the similarities between alt text and AI disclosure decisions
The way we approach accessibility for images could provide an effective framework for navigating the complex new terrain of AI disclosure in journalism
When we decide whether an image needs descriptive alt text, we're essentially asking: "Does this contribute meaningful information, or is it purely decorative?" A stock landscape flyover that serves as visual ambiance might receive minimal or empty alt text because it doesn't convey essential information—it enhances atmosphere without adding substantive content.
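For readers less familiar with the mechanics, here's a minimal sketch of how that decision shows up in markup. Under WCAG guidance, an empty alt attribute tells screen readers to skip a purely decorative image, while an informative image carries a description of what it conveys. The helper function, file names, and chart description below are hypothetical, just to make the contrast concrete:

```typescript
// Roles an image can play on the page.
type ImageRole = "decorative" | "informative";

interface ImageInfo {
  src: string;
  role: ImageRole;
  description?: string; // what the image actually communicates, if anything
}

// Build an img tag: decorative images get alt="" so assistive tech skips them;
// informative images get a description of the content they carry.
function imgTag(img: ImageInfo): string {
  const alt = img.role === "decorative" ? "" : (img.description ?? "");
  return `<img src="${img.src}" alt="${alt}">`;
}

// The scenic flyover adds atmosphere but no information; the chart carries the story.
console.log(imgTag({ src: "landscape-flyover.jpg", role: "decorative" }));
console.log(
  imgTag({
    src: "jobs-report-chart.png",
    role: "informative",
    description: "Line chart showing quarterly unemployment figures from 2020 through 2024",
  })
);
```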
The same question could guide how we handle AI disclosures. If an AI element is purely decorative, like that background landscape, it may not need a prominent disclosure. What matters is how much that element actually contributes to what people get from your content.
I'm not suggesting we ditch disclosures altogether, but they could be scaled to impact. A central AI-generated image that's crucial to your message might need clear disclosure, just as an important chart needs thorough alt text. But a small decorative flourish might warrant only a brief mention, whether we're writing alt text or an AI disclosure.
The real question isn't just "Did AI help make this?" but "How much does this AI element shape what people understand or experience?" It's similar to how we approach alt text by considering what unique information an image provides.
This "materiality principle" recognizes that not everything carries the same weight. In accessibility, we already understand this information hierarchy—some visuals are essential while others just make things look nice. AI-generated content works the same way.
Think about the difference between AI-generated background music in a documentary versus an AI-generated narrator. The music sets a mood, but the narrator directly shapes how viewers receive and trust the information. They probably deserve different disclosure approaches.
Too many disclosures for minor AI elements could lead people to tune them all out, even the important ones. Just as we're careful not to overwhelm screen reader users with unnecessary descriptions, we could develop smarter disclosure approaches that maintain transparency without numbing audiences to it.
This isn't about dodging ethical responsibility. It's about being thoughtful, matching the prominence of disclosures to how significant the AI contribution actually is. AI that slightly enhanced image resolution might need different treatment than a completely AI-generated person presented as real.
From a practical standpoint, this approach respects people's time and attention. Yes, they deserve transparency, but they also need clean, functional experiences without constant interruptions.
We could implement this with tiered disclosure systems, perhaps using standardized symbols of varying prominence, or placing disclosures in locations that reflect the AI element's importance. Decorative elements might be noted in credits, while central AI content might need immediate disclosure.
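To make that more concrete, here's a hypothetical sketch of what a tiered disclosure policy might look like in code. The tier names, labels, and placements are purely illustrative, not an existing standard, but they show how prominence could scale with the AI element's contribution:

```typescript
// Illustrative tiers of AI contribution, from light assistance to fully synthetic content.
type AiContribution = "assistive" | "substantial" | "synthetic";

interface DisclosurePolicy {
  label: string;                                   // wording shown to the audience
  placement: "credits" | "caption" | "inline-banner"; // how prominently it appears
}

// Map each tier to a disclosure: minor contributions land in the credits,
// central ones sit next to the content, and synthetic people or voices
// get an immediate, unmissable notice.
const disclosureTiers: Record<AiContribution, DisclosurePolicy> = {
  assistive:   { label: "AI-assisted (e.g. resolution enhancement)", placement: "credits" },
  substantial: { label: "Key elements generated with AI",            placement: "caption" },
  synthetic:   { label: "This presenter's voice and likeness are AI-generated", placement: "inline-banner" },
};

function disclosureFor(contribution: AiContribution): DisclosurePolicy {
  return disclosureTiers[contribution];
}

// A touched-up photo earns a quiet note; an AI narrator earns a prominent one.
console.log(disclosureFor("assistive"));
console.log(disclosureFor("synthetic"));
```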
As AI becomes more integrated into our work, we'll need to keep refining these approaches. What counts as "material" will evolve as people become more AI-savvy and as the technology advances.
Looking to accessibility practices for inspiration makes sense in this case, because those standards have been developed over decades by thinking about what information truly matters to different users. By applying similar thinking to AI disclosures, we can be ethically transparent while acknowledging that not all AI applications are equally significant.