Talon

I know it sounds like I’m being overly harsh on this stuff, but trust me, even just one clear and concise disclaimer that said all this, outside of their terms of use, would have made this a lot better. So far, out of the 10+ pictures I’ve taken, only a single one was described to a degree I would call 100 percent accurate. None of them were ever lower than about 50 or 60 percent; it usually got the main points of the image I wanted to know about right, but other times it really struggles. And instead of telling you that it is struggling, it will make something up. That is how AI works.

Talon

I will obviously let them know this and provide it as feedback to them directly. But I’m posting this here because I am actually concerned, and I really want you to know this if you decide to pick this up. This thing *will* make stuff up. It does so all the time. Please, please, please be very, very careful with this.

Talon

They do tell you about this in their email, though. So if you’ve read the email, you’ll have seen the disclaimer. This is good. But the disclaimer should really be in the app. They also make it sound like this may be an issue that can be fixed. It will never be fixed completely. So again: use it with care.

Talon

Also again, I’m not trying to downplay the image recognition capabilities of this. It definitely is better than most image recognition that we have access to. But given the nature of how this technology works, you can never be too careful.

Talon

I think, though, that even if they did have the disclaimer, people would just ignore it or forget about it eventually. That is my main concern. Image recognition and OCR were never really reliable either, but they didn't have the ability to double down on their own accuracy, sometimes quite strongly. So my feelings on this whole thing are very mixed. On the one hand, I'm glad to have access to, let's be honest here, very good image recognition. On the other hand, this thing can assure you that it's correct even when it isn't. This is image recognition that talks to you, AI training biases and all.

Talon

Just to balance out my negatives with some positives here: the descriptions are a world apart from something like Google Bard in my experience. In some instances it did tell me to call an actual assistant. And if a picture is really terrible, it seems to refuse to tell me anything about it and asks me to take another one, or, again, to call for actual support. How reliable this is I can't say, but it does seem to err on the careful side of things, as much as an AI can, anyway. I'm personifying it a lot, but mostly just for ease of expression; I'm relatively aware of how this works. So again, I do think this has its uses. But the asterisk here is very big.

gocu54

@talon Wait, when did you get access to the Be My AI thing? I haven't yet, as far as I know, and I applied for access when they first started offering it.

Talon

@gocu54 This morning. A couple of hours ago.

gocu54 replied to Talon

@talon Nice! I'm gonna go check to see if I have access.

Sandra Pilz

@talon Thank you so much for sharing your experience. I don't have access yet, and I'm not surprised about what you said. I wish some of the blindness tech podcasts that have covered it in the past had been more balanced. I remember one where the Be My Eyes representative said they gave it an image of a railway map of India, asked it to find a train connection from one city to another, and then asked it to tell them what to say in Hindi to buy the ticket. And they said, of course,

Talon

@SunDial79 It is important that in marketing materials you present your product as favourably as possible. Sadly, though, they don’t employ me to do their marketing, so I will be very honest about my experience. I think it’s important that people understand what this does, and what it doesn’t.

Sandra Pilz replied to Talon

@talon Yes, sure. But I think they should be asked the right questions, too.

Talon replied to Sandra

@SunDial79 I think the fact is that people are very excited about this. Rightly so. This is quite a big leap in image descriptions. But it is very easy to lose perspective this way, and who wants to be negative about their own product?

Sandra Pilz

@talon it had worked. The host of the podcast seemed impressed, and I was disappointed they didn't ask how the Be My Eyes reps had checked that the answer was correct. Just a few days ago I read an article, written without AI, I assume, where a journalist at a mainstream magazine wrote that you couldn't say please and thank you in Hindi. At uni I learnt how to say both. We don't even need AI to make things up about subjects we know little about, so how are we supposed to deal with AI errors?

Talon

One has to wonder, with the recent news about OpenAI and bankruptcy, what will happen if they can't find another investor? This is why I want an offline AI for this, even if the results are worse.

Mikołaj Hołysz

@talon I mean, their models are worth *something*. At worst, if they do go into bankruptcy and get sold off, somebody else will get them and will be able to do with them as they please. It probably won’t be a free-for-all anymore, but there’s definitely money to be made there.

Pratik Patel

@talon Even if they do, which I doubt, their assets will be bought out by someone. Right now, their structure isn't defined by profit. So they haven't explored all the productization possibilities. There's plenty going on in the open source space. I'm not concerned about losing functionality.

Mohamed Al-Hajamy💾

@talon Maybe a weird question, but I also wonder if anyone tested it with adult content? I find myself wondering if the prudish nature of ChatGPT made it in as well.

Talon replied to Mohamed

@MutedTrampet I don't dare. My account might get suspended. Which, yes, is another big problem.

modulux

@talon For what it's worth I agree a disclaimer is important, though I'm also dubious how much a disclaimer helps, because users gonna use. (Yes, I know it's cynical, but I do it myself on occasion, not reading docs, etc.)

But on the other hand, I really think the pushback on tech that's genuinely useful and a serious advance on the state of the art is very odd. There's no reasonable world in which every blind person can depend on a sighted person to be there (remotely or not) to help every time a visual issue comes up. Just like we can't rely on having electronic or braille copy for everything. I find it weird how much people get fixated on the limitations, which obviously do exist.

Krishna Jariwala

@talon I kinda feel like this should be implied at this point but that's just me.

Talon

@krishnajariwala For people who know how this works, yes. They will understand. But this is integrated into a tool that people with little technical knowledge will use. We can't expect them to know this.

Dale Reardon

@talon And if they want a safety example: Aira won't even endorse or permit their human assistants to do certain mobility tasks, such as advising when to cross the road, etc.
