Keep in mind that I’m not saying not to use it. It will be helpful. But it can also be very dangerous and misleading. You have to know this to actually use it, and they don’t seem to tell you this.
I will obviously let them know this and provide it as feedback to them directly. But I'm posting this here because I am actually concerned, and I really want you to know this if you decide to pick it up. This thing *will* make stuff up. It does so all the time. Please, please, please be very, very careful with this. They do tell you about this in their email, so if you've read the email, you'll have seen the disclaimer. This is good. But the disclaimer should really be in the app. They also make it sound like this may be an issue that can be fixed. This will never be fixed completely. So again: use it with care. And again, I'm not trying to downplay the image recognition capabilities of this. It definitely is better than most image recognition we have access to. But given the nature of how this technology works, you can never be too careful.

@talon Thank you so much for sharing your experience. I don't have access yet and I'm not surprised about what you said. I wish some of the blindness tech podcasts that have covered it in the past had been more balanced. I remember one where the Be My Eyes representative said they gave it an image of a railway map of India and asked it to find a train connection from one city to another, and then asked it to tell them what to say in Hindi to buy the ticket. And they said, of course,

@SunDial79 It is important that in marketing materials you present your product as favourably as possible. Sadly, though, they don't employ me to do their marketing, so I will be very honest about my experience. I think it's important that people understand what this does, and what it doesn't.

@talon Yes, sure. But I think they should be asked the right questions, too.

@SunDial79 I think the fact is that people are very excited about this. Rightly so. This is quite a big leap in image descriptions. But it is very easy to lose track this way, and who wants to be negative about their own product?
@talon it had worked. The host of the podcast seemed impressed, and I was disappointed they didn't ask how the Be My Eyes reps had checked that the answer was correct. Just a few days ago I read an article, written without AI I assume, where a journalist at a mainstream magazine wrote that you couldn't say please and thank you in Hindi. At uni I learnt how to say it. If people don't even need AI to make things up about subjects they know little about, how are we supposed to deal with AI errors?

One has to wonder, with the recent news about OpenAI and bankruptcy, what will happen if they can't find another investor? This is why I want an offline AI for this, even if the results are worse still.

@talon I mean, their models are worth *something*. At worst, if they do go into bankruptcy and get sold off, somebody else will get them and will be able to do with them as they please. It probably won't be a free-for-all any more, but there's definitely money to be made there.

@talon Even if they do, which I doubt, their assets will be bought out by someone. Right now, their structure isn't defined by profit, so they haven't explored all the productization possibilities. There's plenty going on in the open source space. I'm not concerned about losing functionality.

@talon Maybe a weird question, but I also wonder if anyone has tested it with adult content? I find myself wondering if the prudish nature of ChatGPT made it in as well.

@MutedTrampet I don't dare. My account might get suspended. Which, yes, is another big problem.

@krishnajariwala For people who know how this works, yes, they will understand. But this is integrated into a tool that people with little technical knowledge will use. We can't expect them to know this.

@talon And if they want a safety example: Aira won't even endorse or permit their human assistants to do certain mobility tasks, such as advising when to cross the road.

@talon I'd say you experienced problems.
There's no reason for an average user to know that this is just how AI works. Inaccuracy is a problem, and it's not your problem. The feature has one job and it's not doing it.

@talon I agree, and if they have an email address or whatever, then we should definitely complain. But a problem is a problem. It is not a good experience if it gets things confidently wrong. You know why it happens because you know how the tech works. Someone without that experience would have hit "problems" without hesitation.

@jakobrosin @guilevi There seems to be an understanding that I'm here to hate on the technology, and I want to make clear that this isn't my intent. But I find it personally very important that people understand what this is, and what this is not, especially because the line of what this appears to be can be very, very blurry, especially given that you can talk to it and it appears to be aware of what it's seeing and saying.

@jakobrosin @guilevi If it does one thing well, it is giving very beautiful descriptions. I took a picture of my yard and of my cats and the results were very satisfying. And also accurate. It does like to take a little creative license in its expressions now and again, though, especially with my kitties. Although in that instance I definitely welcomed it, haha.

@talon If it's like the Envision AI thing I had a demo of, they won't tell you. They'll just act like it giving the wrong prices on a menu, and in dollars rather than pounds, is exactly what it's supposed to do.

@talon Yeah. I saw this happen when someone from Envision gave me a demo of the feature, and they just carried on like it had given the correct answer.

@KaraLG84 There is nothing inherently wrong with it giving incorrect answers, as long as this is actually explained, admitted, and handled correctly. Even algorithms for OCR and whatever else can be wrong. That's OK. But selling it as correct when it clearly isn't? That's what's wrong. People have to know.

@talon Agreed.
OCR engines have an accuracy percentage. These AI things are billed as never getting anything wrong, ever.

@talon Even GPS apps say they're not infallible and give all kinds of disclaimers. I honestly think this ChatGPT stuff should never have gone public in the first place.

@talon Oh, and the Envision person was like, "The app can recognise hundreds of currencies." Right.

@talon What also worries me is when the blind influencers/professional social media people get hold of it. They'll also be acting like it never gets anything wrong ever, probably because the company told them to. Then a gullible person comes along and believes them, with horrible results. I used to be very gullible.

@talon I mean, I would've probably said how amazing it was if my wife hadn't said anything.

@talon It's my northern Englishness coming through, I expect. We're all a bunch of miserable sods here. Lol

@talon I wouldn't have minded so much if they had actually said it made a mistake, adjusted something, and tried again. But they didn't. I only knew when my wife told me afterwards. I can't remember whether it read the menu prices in dollars or not, as I didn't really pay attention until they started asking it questions.
It also asks me how I'd rate the chat every time I end it, and there are options for "I experienced problems" and "I had a good chat". Literally every single chat so far has had subtle but important errors in the description of the image, and I have no idea whether they want me to say that this is a problem, or that it was a good chat because there were no outages and response times were quick, or whatever. Because sure, the service worked, but again: lots of small issues. I don't think you can resolve them, either. This is just how this tech works.