64 comments
Very good comment, and it serves as motivation for my main point, which expressed in polite terms would be something like: THE PROBLEM IS NOT AI, THE PROBLEM IS F*CKING CAPITALISM. As you accurately describe, what incentive does Sam Altman have to share his AI? None. The solution is eliminating Sam Altman and all his ilk. In other words, getting rid of capitalism would solve all of these problems.

Oh, I see, you have a good job and you are probably happy living under capitalism; I understand it now. It is not an ad hominem (or at least not entirely), I am just implying that you might have a conflict of interest here. Therefore I do not understand why you seem to criticize my answer of "what we need to do is to get rid of capitalists".

@sjuvonen @remenca @tinker Nuclear power and nuclear medicine, which both save an incredible number of lives, are here because the tech started as a weapon of mass death. Nobody wants mass death. But the science was good and matured into something very useful. AI can do just that if it's not strangled by capitalism along the way. That's why it's important to point to good applications while criticizing theft and energy use. We can do both.

Don't apologize, it is not your fault, it is really my English :) I normally engage in internet debates that are often beyond my level, and from time to time it happens that I do not explain myself correctly, but it is good practice anyway. Have a nice weekend. Greetings from Barcelona.

That is because it is in capitalist hands, which is my entire point. We only need to seize it, like any other means of production, and socialize its benefits. For instance, if you are an artist and your work has been used to train an AI, you should receive a proportional share of whatever was paid to generate an image with that AI.

@remenca @tinker You had better be careful what you wish for. You wish to be made redundant by an AI.
When you dig down, there appear to be two kinds of AI boosters (who are not shareholders in a hot company). One group wants slaves. One group wants to build a god and become its priesthood. The common thread is that both enforce and encode a strict hierarchy in society.

I totally agree. Actually, this was my point all along. But with a single catch: you failed to mention why this will happen and has happened. The reason is that those technological improvements have always been in the hands of the rich. It is not a problem of the technology itself; it is a problem of who owns it. It is a political problem. Therefore, it makes no sense to blame AI for it. It is like blaming washing machines for the launderers' loss of jobs.

I'm tempted to reply, but I think it will be better if you reach the conclusion by yourself, so I will only ask: you say that if AI takes over our jobs, we will end up doing the more menial tasks that AI cannot do. But those tasks must already be being done now, no? So what will change?

@remenca @sjuvonen @tinker New kinds of jobs, and especially services, came out of electrification. I don't expect that AI will take only the mental part and leave us with only currently existing jobs like hairdresser and waiter; instead, new services will emerge. 200 years ago, nobody could imagine a world where most jobs were not farmer or craftsman.

I think your demand for greater clarity is fair, but replacing capitalism with something else is very complicated, so my answer will be unavoidably incomplete and flawed. Removing the rich is something that has been attempted in the past and on many occasions has succeeded. The secret, I'd say, is to have at least part of the military on your side: the 1917 revolution, the Paris Commune, Burkina Faso, etc. all share this trait. The tricky part seems to be twofold. First, you need to survive other powers, friendly to the rich you have removed, that will try to destroy you.
This has also happened in failed revolutions, as in Germany, Spain, or the rest of the Springtime of Peoples. We could even consider the Napoleonic wars against the French revolution an instance of this. The only solution I see is, again, having the support of the military. Finally, if you have not been destroyed by your capitalist enemies, the remaining problem is how to avoid becoming a capitalist yourself after you gain power. This is what, in my understanding, led to the USSR's demise with the appearance of the Nomenklatura, and to the capitalist "communism" of China. Still, it can be argued that those systems at least ended up being softer versions of capitalism than the ones they replaced, so it was not all in vain. But still.

Applying this to the current situation, I think a more or less clear picture arises. The military is split in two parts: one is the military-industrial complex, and the other is the cannon fodder at the bottom, extremely brainwashed with MAGA stuff. Some years ago the brainwashing worked because the USA was the most advanced country, but this seems to be ending due to climate change, resource depletion, and the emergence of powers like the BRICS. So far they have managed to keep the cannon fodder fooled by blaming minorities and woke people for everything. But this will not work forever, simply because the real causes of the American decline are not those, and they will run out of minorities to blame. What will happen when the military at the bottom realize they have been tricked? I don't know, but it could open an opportunity like the Kornilov affair, where the military changed sides.

So in my opinion, considering how powerful the propaganda machine is, the only thing we can do is wait until the whole thing comes crashing down, and try to educate all the MAGA bigots - especially the ones in the army - until they realize they are just poor proletariat like us. Other than that, I don't know.
It is said that Lenin once said that any society is only two missed meals from revolution. We are getting there. @DMTea @sjuvonen @tinker

@sjuvonen @remenca @tinker LLMs are conceptually incapable of delivering on their promises and the hopes people have been fooled into putting in them. They can't solve basic issues like "hallucinations" because they are just not designed to actually know or understand anything. They are a fundamentally useless parlor trick.

That's bullshit. LLMs of sufficient size, paired with enough data, are universal approximators, which means that conceptually it is possible. The only catch is the cost, and that we do not know if we have enough data. But conceptually I do not see why a machine should be unable to surpass any human in any intellectual task. And they can and are currently solving the hallucination problem. As a matter of fact, with the latest techniques they have managed something like a 70% reduction in hallucinations on the benchmarks; I can find the paper for you if you want to read it. How much of that improvement will be preserved in real life I don't know, but they are fixing the problem, contrary to what you state.

This is mathematically incorrect and demonstrates a lack of understanding of what universal approximation means.

@remenca All an LLM does is resynthesize content from its training set that corresponds to the words in the query it receives. It understands nothing. It does no reasoning. It can't even use a calculator or look things up in a database, which much simpler and lower-powered machines are able to do. LLMs are incapable of intelligence BY DESIGN. They are literally not AI at all: https://link.springer.com/article/10.1007/s10676-024-09775-5

I am sorry, mate, but all this article does is frame the problem of hallucinations (which is being solved as we speak) as some guy's interpretation of what "bullshit" is. It does not talk about scaling laws, nor approximation, nor PAC, nor anything.
Please, do not embarrass yourself citing articles you do not understand.

@remenca LLMs don't do any reasoning in the first place, so what they do can't be scaled up into "intelligence". Pretty simple! @sjuvonen
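An aside on the "universal approximator" claim being traded above: the universal approximation theorem is a statement about fitting functions, not about understanding, and the fitting part is easy to demonstrate. A minimal sketch in Python, assuming numpy; the random-feature setup (random hidden weights, least-squares readout) is my own illustrative choice, not anything from the thread, and it is a simplified variant of the theorem's setting, in which the hidden layer may also be trained:

```python
import numpy as np

# Illustration of universal approximation: a single hidden layer of
# tanh units can fit a smooth 1-D target (here sin) to high accuracy.
# Hidden weights are drawn at random; only the output layer is fitted
# by least squares, which is enough for this demo.
rng = np.random.default_rng(0)

x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
y = np.sin(x)

n_hidden = 200
W = rng.normal(scale=2.0, size=(1, n_hidden))  # random input weights
b = rng.normal(scale=2.0, size=n_hidden)       # random biases

H = np.tanh(x @ W + b)                         # hidden-layer activations
coef, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit output weights only

y_hat = H @ coef
err = float(np.max(np.abs(y_hat - y)))
print(f"max abs error: {err:.6f}")
```

This illustrates only the mathematical claim under dispute; it says nothing either way about reasoning, knowledge, or hallucinations.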
@sjuvonen @tinker
I was thinking of AI