Talon

Keep in mind that I’m not saying not to use it. It will be helpful. But it can also be very dangerous and misleading. You have to know this to actually use it, and they don’t seem to tell you this.

Talon

It also asks me how I’d rate the chat every time I end it, and there are options for “I experienced problems” and “I had a good chat”. And literally every single chat so far has had subtle but important errors in the description of the image, and I have no idea whether they want me to say that this is a problem, or whether they want me to say it was a good chat because there were no outages, quick response times, and so on. Because sure, the service worked, but again… lots of small issues. I don’t think you can resolve them, either. This is just how this tech works.

Talon

I know it sounds like I’m being overly harsh on this stuff, but trust me, even just one disclaimer that said all this, clear and concise, outside of their terms of use, would have made this a lot better. So far, out of the 10+ pictures I’ve taken, exactly one was described to a degree where I would call the description 100 percent accurate. None of them were ever lower than about 50 or 60 percent; it usually got the main points of the image that I wanted to know correct, but other times it really struggles. But instead of telling you that it is struggling, it will make something up. That is how AI works.

Talon

I will obviously let them know this and provide it as feedback to them directly. But I’m posting this here because I am actually concerned, and I really want you to know this if you decide to pick this up. This thing *will* make stuff up. It does so all the time. Please, please, please be very, very careful with this.

Talon

They do tell you about this in their email, though. So if you’ve read the email, you’ll have seen the disclaimer. This is good. But the disclaimer should really be in the app. They also make it sound like this may be an issue that can be fixed. It will never be fixed completely. So again: use it with care.

Talon

Also again, I’m not trying to downplay the image recognition capabilities of this. It definitely is better than most image recognition that we have access to. But given the nature of how this technology works, you can never be too careful.

Talon

I think though even if they did have the disclaimer, people would just ignore it or forget about it eventually. That is my main concern. Image recognition and OCR were never really reliable but they didn't have the ability to double down on their accuracy, and sometimes quite strongly too. So my feelings on this whole thing are very mixed. On the one hand I'm glad to have access to, let's be honest here, very good image recognition. But on the other hand this thing can assure you that it's correct even if it isn't. This is image recognition that talks to you. AI training biases and all.

Talon

Just to balance out my negatives with some positives here. The descriptions are a world apart from something like Google Bard in my experience. In some instances it did tell me to call an actual assistant. And if a picture is truly terrible, it seems to refuse to tell me anything about it and asks me to take another one, or, again, to call for actual support. How reliable this is I can’t say, but it does seem to be on the careful side of things here, as much as an AI can be, anyway. I’m personifying it a lot, but mostly just for ease of expression; I’m relatively aware of how this works. So again, I do think this has its uses. But the asterisk here is very big.

gocu54

@talon Wait, when did you get access to the Be My AI thing? I haven't yet, as far as I know, and I applied for access when they first started offering it.

Talon

@gocu54 This morning. A couple of hours ago.

gocu54 replied to Talon

@talon Nice! I'm gonna go check to see if I have access.

Sandra Pilz

@talon Thank you so much for sharing your experience. I don't have access yet, and I'm not surprised by what you said. I wish some of the blindness tech podcasts that have covered it in the past had been more balanced. I remember one where the Be My Eyes representative said they gave it an image of a railway map of India, asked it to find a train connection from one city to another, and then asked it to tell them what to say in Hindi to buy the ticket. And they said, of course,

Talon

@SunDial79 It is important that in marketing materials you present your product as favourably as possible. Sadly, though, they don’t employ me to do their marketing, so I will be very honest about my experience. I think it’s important that people understand what this does, and what it doesn’t.

Sandra Pilz replied to Talon

@talon Yes, sure. But I think they should be asked the right questions, too.

Talon replied to Sandra

@SunDial79 I think the fact is that people are very excited about this. Rightly so. This is quite a big leap in image descriptions. But it is very easy to lose track this way, and who wants to be negative about their own product?

Sandra Pilz

@talon it had worked. The host of the podcast seemed impressed, and I was disappointed they didn't ask how the Be My Eyes reps had checked that the answer was correct. Just a few days ago I read an article, written without AI I assume, where a journalist at a mainstream magazine wrote that you couldn't say please and thank you in Hindi. At uni I learnt how to say both. As long as we don't need AI to make up things about subjects we know little about, how are we supposed to deal with AI errors?

Talon

One has to wonder, with the recent news about OpenAI and bankruptcy: what will happen if they can't find another investor? This is why I want an offline AI for this, even if the results are still worse.

Mikołaj Hołysz

@talon I mean, their models are worth *something*. At worst, if they do go into bankruptcy and get sold off, somebody else will get them and will be able to do with them as they please. It probably won’t be a free for all any more, but there’s definitely money to be made there.

Pratik Patel

@talon Even if they do, which I doubt, their assets will be bought out by someone. Right now, their structure isn't defined by profit. So they haven't explored all the productization possibilities. There's plenty going on in the open source space. I'm not concerned about losing functionality.

Mohamed Al-Hajamy💾

@talon Maybe a weird question, but I also wonder if anyone tested it with adult content? I find myself wondering if the prudish nature of ChatGPT made it in as well.

Talon replied to Mohamed

@MutedTrampet I don't dare. My account might get suspended. Which yes. Is another big problem.

modulux

@talon For what it's worth, I agree a disclaimer is important, though I'm also dubious how much a disclaimer helps, because users gonna use. (Yes, I know it's cynical, but I do it myself on occasion, not reading docs, etc.)

But on the other hand, I really think the pushback on tech that's genuinely useful, and a serious advance on the state of the art is very odd. There's no reasonable world in which every blind person can depend on a sighted person to be there (remotely or not) to help every time a visual issue comes up. Just like we can't rely on having electronic or braille copy for everything. I find it weird how much people get fixated on the limitations, which obviously do exist.

Krishna Jariwala

@talon I kinda feel like this should be implied at this point but that's just me.

Talon

@krishnajariwala For people who know how this works, yes. They will understand. But this is integrated into a tool that people with little technical knowledge will use. We can't expect them to know this.

Dale Reardon

@talon And if they want a safety example: Aira won't even endorse or permit their human assistants to do certain mobility tasks, such as advising when to cross the road.

Guillem Leon

@talon I'd say you experienced problems. There's no reason for an average user to know that this is just how AI works. Inaccuracy is a problem and it's not your problem. The feature has one job and it's not doing it.

Talon

@guilevi The problem is that the answer to this question is wishy-washy, just like the answer the AI gave. Another problem is that these errors compound the more questions you ask. There very badly need to be disclaimers here.

Talon

@guilevi Like the only feedback I can give is problems, or no problems. I want a text box. I want to be able to provide detail for each individual chat.

Guillem Leon

@talon I agree, and if they have an email address or whatever then we should definitely complain. But a problem is a problem. It is not a good experience if it gets things confidently wrong. You know why it happens because you know how the tech works. Someone without that experience would have hit "problems" without hesitation.

Jakob Rosin

@guilevi @talon They have a very active Slack community for the testers. But I have noticed, and do agree with, all of your points. The hallucinating AI is a really big issue.

Talon

@jakobrosin @guilevi There seems to be an understanding that I’m here to hate on the technology, and I want to make clear that this isn’t my intent. But I find it personally very important that people understand what this is, and what this is not, especially because the line of what this appears to be can be very, very blurry, especially with the fact that you can talk to it and it appears to be aware of what it’s seeing and saying.

Jakob Rosin

@talon @guilevi It's very easy to get misled by the verbose and beautiful descriptions. But I have also seen it describe things that are not there, or assume things in a very wrong way. It definitely shouldn't be relied upon when taking medications, for example.

Talon

@jakobrosin @guilevi If it does do one thing, it is giving very beautiful descriptions. I took a picture of my yard and of my cats, and the results were very satisfying. And also accurate. It does like to take a little creative license in its expressions now and again, though. Especially with my kitties. Although in that instance I definitely welcomed it, haha.

Kara Goldfinch

@talon If it's like the envision AI thing I had a demo of, they won't tell you. They'll just act like it giving the wrong prices on a menu and in dollars rather than pounds is exactly what it's supposed to do.

Talon

@KaraLG84 I mean, by the nature of how these AIs work, that is what it does. But people won't know this, based on how this is advertised, and there are no warnings. That is definitely something that needs to be there before this makes it out of beta.

Kara Goldfinch

@talon Yeah. I saw this happen when someone from Envision gave me a demo of the feature, and they just carried on like it had given the correct answer.

Talon

@KaraLG84 There is nothing inherently wrong with it giving incorrect answers, as long as this is actually explained, admitted, and handled correctly. Even algorithms for OCR and whatever else can be wrong. That's OK. But selling it as being correct when it clearly isn't? That's what's wrong. People have to know.

Kara Goldfinch

@talon Agreed. OCR engines have an accuracy percentage. These AI things are billed as never getting anything wrong ever.

Talon

@KaraLG84 I probably wouldn’t even have this discussion if everyone was straightforward and honest about the current capabilities of this stuff.

Kara Goldfinch

@talon Even GPS apps say they're not infallible and give all kinds of disclaimers. I honestly think this ChatGPT stuff should never have gone public in the first place.

Kara Goldfinch

@talon Oh, and the Envision person was like, "The app can recognise hundreds of currencies." Right.

Kara Goldfinch

@talon What also worries me is when the blind influencers/professional social media people get hold of it. They'll also be acting like it never gets anything wrong, probably because the company told them to. Then a gullible person comes along and believes them, with horrible results. I used to be very gullible.

Kara Goldfinch

@talon I mean, I would've probably said how amazing it was if my wife hadn't said anything.

Kara Goldfinch

@talon I think I've got everything off my chest now you'll be glad to know. Lol

Talon

@KaraLG84 I hear you. I don’t think I’m quite as negative about this, but I definitely share your concern. The advertising is a big, big issue.

Kara Goldfinch replied to Talon

@talon It's my northern Englishness coming through I expect. We're all a bunch of miserable sods here. Lol

Kara Goldfinch

@talon I wouldn't have minded so much if they had actually said it made a mistake, adjusted something, and tried again. But they didn't. I only knew when my wife told me afterwards. I can't remember whether it read the prices on the menu in dollars or not, as I didn't really pay attention until they started asking it questions.

superblindman

@talon @KaraLG84 To its credit, actual ChatGPT does tell you right up top that it doesn't necessarily give accurate information. But yeah, the other apps that use its API… many of them likely do not tell you that.

Brandon

@talon I think they want to know if it's hallucinating, but I'm not OpenAI. I just assume they'd want the technology to improve.

Talon

@serrebi Definitely. But with the current architecture, as far as I understand, hallucinations are impossible to eliminate entirely. There will always be some. And of course I’m providing as much feedback to Be My Eyes as I can.
