@ChanceyFleet @RoyReed I, too, am concerned about this iterative effect (i.e., models getting trained on their own output), so we're definitely in favor (I am, anyway) of making sure there are human checks.
I mean, I don't get to build any of this, but I do have some oversight into how things might be built. I think our concern is: given that some institutions have no image descriptions at all, what is the best way to get more descriptions in there?
I do take all of your points, they are good ones.
@RoyReed @jessamyn It’s a conundrum. Most institutions and platforms have so much legacy content that needs alt text, and accessibility teams aren’t resourced enough to do it in-house. I’ve heard of crowd-volunteering efforts, which seem like a good thing, and there are also companies like Scribely that will write alt text as a B2B service. At a minimum, a human in the loop is necessary to make sure misleading or plain wrong descriptions don’t get published.