Rich Felker

The proponents of this kind of shit want to throw away the whole concept of having and using scientific knowledge obtained by experiment, with documentation of how it was obtained, evidence supporting the resulting models, falsifiability, etc., and replace it with a worse version of the way humans tens of thousands of years ago came to believe things about the world: simplistic pattern recognition.

Rich Felker

Simplistic pattern recognition, and falling for false patterns, is the worst of human stupidity. This shit should be called artificial stupidity not artificial intelligence.

Dad

@dalias AI really stands for Artificial Idiot ;)

Tom Forsyth

@dalias There is a reasonable use for ML here to prove a solution (or an approximation of one) exists *at all*.

But once you've done that, go figure out what the ML actually learned, turn it into a nice 20-line program you can reason about and tweak, and stop applying ML voodoo.

I mean this is what we do all the time anyway, it's just usually the "ML" bit is "a human brain".

Brian Baresch

@dalias I've been calling it Artificial Idiocy at work.

chris martens

@editer @dalias maybe don’t do this though because “idiot” is deeply ableist

Chris Gioran 💔

@dalias I think you are attributing way too much agency to them.

It's not high aspirations for AI's capabilities. It's fundamental disinterest in the subject matter that's the problem.

My bet is that they thought - gee, this looks like a hard problem. I bet that no one has thought about it before, so instead of me investing the effort to understand it, I'll just have AI solve it for me.

Aeon.Cypher

@chrisg @dalias

AI is also a hard problem, but people have the illusion that they understand it because they know how to `import keras`

Rich Felker

@chrisg That's kinda the whole thing, the anti-expertise sentiment behind it. 🤬

Rich Felker

@chrisg And, not incidentally, the same anti-expertise sentiment is characteristic of fascism.

Chris Gioran 💔

@dalias Very true.

Now that you wrote this, I realize it is possible to draw a line from the obsession with AI, and its associated deification of existing knowledge, to some of the properties of Ur-Fascism, like the cult of tradition and the cult of action.

Luis Bruno

@dalias we're tired of experts, brexit means brexit, innit guv?

@chrisg

d@nny "disc@" mc²

@dalias allergic to domain expertise!!!! this is why copilot uses a fucking ENGLISH tokenizer for PROGRAM CODE which we have fucking PARSERS for!!!!!
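
The point that parsers for code already exist can be made concrete with Python's own stdlib: `ast.parse` yields a typed syntax tree, while a text-level tokenizer only ever sees flat chunks of characters. A minimal sketch (editor's illustration, stdlib only):

```python
# Python ships a real parser for its own code: ast.parse returns a
# typed tree with node kinds and structure, unlike a flat tokenizer.
import ast

source = "def add(x, y):\n    return x + y\n"

# Structured view: a parse tree with typed nodes.
tree = ast.parse(source)
print(ast.dump(tree, indent=2))  # indent= requires Python 3.9+

# Flat view: roughly what a text-level tokenizer works with --
# chunks of characters with no notion of syntax or scope.
print(source.split())
```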

d@nny "disc@" mc²

@dalias i'm NOT fucking writing it for them they can stew in their own fucking mediocrity

Matthew Booth

@hipsterelectron @dalias On a related note, are there any language-specific models out there which take a parsed intermediate representation as input and confine themselves to valid output?

d@nny "disc@" mc²

@mattb @dalias i asked an LLM engineer about this and he basically said nobody cares about it because it requires a lot of work (domain expertise) so i'm vaguely confident that especially if you define a generative model not in terms of crass next-token prediction but using existing methods of program synthesis via a parse tree or ideally an IR of some sort you could generate a significantly better form of autocomplete trained on e.g. just the code in a small monorepo, or just all the code checked out on your own machine. i think part of the reason copilot didn't release tiered versions according to license (would have been so. fucking. easy. but their goal is to destroy copyright enforcement not to build anything useful) is because it really sucks unless it has a ridiculous amount of data
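
The constrained-generation idea being discussed can be illustrated with a toy filter: let a model propose any token, but keep only those that can still extend to a syntactically valid program. In this sketch Python's `ast` module stands in for the grammar oracle; the vocabulary and the padding heuristic are made up for illustration:

```python
# Toy grammar-constrained decoding: prune candidate tokens that cannot
# extend to a valid expression. A real system would consult the parser
# state (LR items / tree cursor) instead of brute-force padding.
import ast

VOCAB = ["x", "1", "+", "*", ")", "(", "return"]

def can_extend_to_valid(prefix: str, token: str) -> bool:
    # Crude completability test: does some padding make the candidate
    # parse as an expression?
    candidate = prefix + token
    for padding in ["", ")", "x)", "x))"]:
        try:
            ast.parse(candidate + padding, mode="eval")
            return True
        except SyntaxError:
            continue
    return False

prefix = "(x + "
allowed = [t for t in VOCAB if can_extend_to_valid(prefix, t)]
print(allowed)  # -> ['x', '1', '+', '('] ; ')', '*', 'return' are pruned
```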

Tane Piper ⁂

@hipsterelectron @mattb @dalias this here.

Hand-waving away some of the infra improvements and some reasoning capabilities: LLMs are just Markov chains with all their transition probabilities pre-computed into lookup tables and loaded into memory.

This is why they will never beat expert systems at reasoning - because that's not what next-token prediction is.

Side note: I love the idea of building an AST-based model to query rather than a token-based one.
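
The "lookup table" framing can be shown in miniature with a bigram Markov model: training pre-computes next-token counts into a table, and generation is a weighted table lookup, nothing more. A sketch with a made-up corpus:

```python
# Toy bigram Markov model: next-token prediction as pure table lookup.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# "Training": pre-compute next-token counts into a lookup table.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def next_token(prev: str) -> str:
    counts = table[prev]
    if not counts:  # dead end: token only ever seen at the corpus end
        return random.choice(corpus)
    # Generation is sampling from stored counts -- a table lookup.
    return random.choices(list(counts), weights=list(counts.values()))[0]

token = "the"
for _ in range(5):
    token = next_token(token)
    print(token, end=" ")
```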

d@nny "disc@" mc²

@tanepiper @mattb @dalias it's a fantastic fucking idea i just have other things i care more about. i was also going to make a cross-language lsp server which i realized would serve as a great basis for this work but then i realized actually i myself would never use it bc i strongly prefer regex search with my emacs extension. i'm hoping to apply for phd research on regex engine techniques and specifically am working on a new regex engine for emacs. this is all because i think tree-sitter is horrible and i have spent five years on a theory/implementation of high-performance resumable non-contiguous parsing techniques which compose sub-matchers

Adrian Cochrane

@hipsterelectron @mattb @dalias This has been explored in the past (known variously as e.g. "Evolutionary Programming") where you take a bunch of randomly-generated programs in their parsed format to copy/transform/combine them & assess the ones which best solve the problem to semi-randomly go into the next round.

But Neural Nets seem to be the only Machine Learning tactic which gets any attention...

David Mankins

@hipsterelectron @mattb @dalias

I think the problem is getting a mapping from a textual program description to the intermediate representation. The LLMs do their coding tricks by associating code with accompanying discussion and comments. What they want to do is let you use text to describe your problem, then have the system spit out plausible code.

I think you’re idea implicitly requires the model to actually have some understanding of what it’s doing.

d@nny "disc@" mc²

@lain_7 @mattb @dalias to paraphrase @emilymbender, it's just acting as a much worse search engine at that point. erasing copyright/attribution is a positive for the monied interests pushing these machines over ones incorporating any level of domain expertise

David Mankins

@hipsterelectron @mattb @dalias @emilymbender

well, that’s how LLMs work, isn’t it? Are you talking instead about a hypothetical system that has some understanding of the semantics (represented, say, in a knowledge base of some sort) of the ASTs and transformations of them?

I’m guessing that getting the semantics into the system might be a challenge.

There was work that tried to move from formal(ish) spec to code in the '80s and '90s. Maybe that stuff could be resurrected, taking advantage of greater computing power and maybe the translation abilities of transformers.

Or maybe I’m misunderstanding you, if so, apologies.

I’ve been wondering if one could use something like the Berkeley parser to parse text into SVO triples that could be turned into assertions the populate (or supplement) a knowledge base, then use that knowledge base to address questions. One nice feature of that is that you could store the provenance of the assertion in your knowledge base, too.

Or maybe I'm 20 years behind the state of the art.
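
The SVO-triples idea could look something like the following sketch, using spaCy's dependency parser as a stand-in for the Berkeley parser mentioned above, and tagging each triple with its source so the knowledge base keeps provenance (requires `pip install spacy` and `python -m spacy download en_core_web_sm`; all names are illustrative):

```python
# Extract (subject, verb, object, source) tuples from text via a
# dependency parse, preserving provenance for the knowledge base.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text: str, source: str):
    triples = []
    for sent in nlp(text).sents:
        for tok in sent:
            # A verb with both a nominal subject and a direct object.
            if tok.pos_ == "VERB":
                subjects = [c for c in tok.children if c.dep_ == "nsubj"]
                objects = [c for c in tok.children if c.dep_ in ("dobj", "obj")]
                for s in subjects:
                    for o in objects:
                        # Store provenance alongside the assertion.
                        triples.append((s.text, tok.lemma_, o.text, source))
    return triples

print(extract_triples("Cats chase mice. The parser builds trees.",
                      source="example-doc-1"))
# -> [('Cats', 'chase', 'mice', 'example-doc-1'),
#     ('parser', 'build', 'trees', 'example-doc-1')]
```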

Emily M. Bender (she/her)

@mattb @hipsterelectron @dalias

Yes, there is a long tradition of parsing into semantic representations, and even work on generating from them. If you look at it that way, you immediately see that generation of grammatical strings alone isn't really enough. You need to have a way to connect the semantic representations to some model of the world, and determine what valid things you want to say.
>>

Emily M. Bender (she/her)

@mattb @hipsterelectron @dalias

One of the issues with LLMs is that they provide apparent fluency on unlimited topics, making it seem like you don't need to do the extremely difficult world modeling work on those topics...

Tane Piper ⁂

@emilymbender @mattb @hipsterelectron @dalias LLMs are just Ricardian models of the world (it's clear the people [outside academia] who make them think they will just infinitely grow in knowledge perfectly)

Tane Piper ⁂

@emilymbender @mattb @hipsterelectron @dalias I make this as my own observation, not as an explanation. Obviously you know more in the academic field, but I also observe the practitioner space, where we are looking at putting them in front of people, and I'm not so sure.

Not every message has the intent you seem to be alluding to

Emily M. Bender (she/her)

@tanepiper Please feel free to make your observations outside of my mentions, then. As it stands, you have addressed this comment to me, in response to my post, without any connective text indicating how it is supposed to relate. It reads as if you felt that I needed to be enlightened.

Emily M. Bender (she/her) replied to Emily M. Bender (she/her)

@tanepiper Also, in case you missed it, mansplaining is never about intent.

Tane Piper ⁂ replied to Emily M. Bender (she/her)

@emilymbender no, apologies if it came off that way - reading it with a tinge of sarcasm and deadpan humour helps (but of course that does not come across in text). Many sales teams of products promise infinite productivity gains and it's exhausting. Hopefully I've clarified that that's what I meant.

FWIW I was already in this particular thread, just a different branch 🤷🏼‍♀️

tane.codes/@tanepiper/11274034

Jeffrey Hulten

@dalias

SCOTUS basically did the same thing in reversing Chevron. Law is a complex language model reinforcing bigotry and protecting the status quo, not a method for separating fact and falsehood (especially in the face of new information).

Raven Onthill

@dalias they want badly to believe that minds can be reduced to simple stochastic models. This wasn't an entirely unreasonable hypothesis 20 years ago, but at this point it doesn't look like it's correct.

Rich Felker

@ravenonthill The concept is still vaguely plausible, but their idea for how to achieve it is utter bullshit.

If you compare how human minds are "trained", there are multiple feedback layers in the form of consequences, and most importantly, we select very carefully what training inputs are used rather than throwing giant mostly wrong and mostly evil corpuses at children, and most of the training is experiential not ingesting word soup.
