Rich Felker

Yesterday I encountered a "wrong-on-the-internet" rando professing his excitement for "using machine learning" in #3dprinting to throttle speeds in the right places to avoid quality loss.

While completely not worth engaging with, I feel like this is a useful example to understand why this idiocy is so infuriating...

Rich Felker

This is a problem domain where the constraints and effects are pretty much entirely comprehensible in terms of known physical models. Any suboptimal behavior is entirely a matter of nobody having spent the time to apply known models. But sure, let's instead spend the time hooking up ML, CV to evaluate results, and waste tons (literally) of plastic training a model to learn a poor approximation of what we already know.

But this is a general pattern that's terrifying...

gaytalogger

@dalias on the other hand, they're not even wrong. with all the money going into machine learning pointlessly, it's more likely to get done by machine learning than the existing methods.

Rich Felker

The proponents of this kind of shit want to throw away the whole concept of having and using scientific knowledge obtained by experiment, with documentation of how it was obtained, evidence supporting the resulting models, falsifiability, etc., and replace it with a worse version of the way humans tens of thousands of years ago came to believe things about the world: simplistic pattern recognition.

Rich Felker

Simplistic pattern recognition, and falling for false patterns, is the worst of human stupidity. This shit should be called artificial stupidity not artificial intelligence.

Dad

@dalias AI really stands for Artificial Idiot ;)

Tom Forsyth

@dalias There is a reasonable use for ML here to prove a solution (or an approximation of one) exists *at all*.

But once you've done that, go figure out what the ML actually learned, turn it into a nice 20-line program you can reason about and tweak, and stop applying ML voodoo.

I mean this is what we do all the time anyway, it's just usually the "ML" bit is "a human brain".
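
A toy illustration of that "figure out what the ML learned" step (everything below is invented, not from any real printer or model): probe the opaque model over its input range, spot the structure, and replace it with a closed-form rule you can read and tweak.

```python
import numpy as np

# Toy "distillation": probe an opaque trained model on a grid of inputs,
# then fit a small closed-form rule you can read, reason about, and tweak.
def black_box(speed):
    # Stand-in for a learned model of print quality vs. print speed.
    return np.where(speed < 120, 1.0, 1.0 - 0.004 * (speed - 120))

speeds = np.linspace(0, 300, 301)
quality = black_box(speeds)

# The probe reveals simple structure: flat, then a linear decay.
knee = speeds[np.argmax(quality < 1.0)]  # first speed where quality drops
slope = np.polyfit(speeds[speeds > knee], quality[speeds > knee], 1)[0]
print(f"rule: quality = 1.0 below {knee:.0f} mm/s, then {slope:.4f}/mm/s after")
```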

Brian Baresch

@dalias I've been calling it Artificial Idiocy at work.

chris martens

@editer @dalias maybe don’t do this though because “idiot” is deeply ableist

Chris Gioran 💔

@dalias I think you are attributing way too much agency to them.

It's not high aspirations for AI's capabilities. It's fundamental disinterest in the subject matter that's the problem.

My bet is that they thought - gee, this looks like a hard problem. I bet that no one has thought about it before, so instead of me investing the effort to understand it, I'll just have AI solve it for me.

Aeon.Cypher

@chrisg @dalias

AI is also a hard problem, but people have the illusion that they understand it because they know how to `import keras`

Rich Felker

@chrisg That's kinda the whole thing, the anti-expertise sentiment behind it. 🤬

Rich Felker

@chrisg And, not incidentally, the same anti-expertise sentiment is characteristic of fascism.

Chris Gioran 💔

@dalias Very true.

Now that you wrote this, I realize it is possible to draw a line from the obsession with AI, and its associated deification of existing knowledge, to some of the Ur-Fascism properties, like the cult of tradition and the cult of action.

Luis Bruno

@dalias we're tired of experts, brexit means brexit, innit guv?

@chrisg

d@nny "disc@" mcClanahan

@dalias allergic to domain expertise!!!! this is why copilot uses a fucking ENGLISH tokenizer for PROGRAM CODE which we have fucking PARSERS for!!!!!

d@nny "disc@" mcClanahan

@dalias i'm NOT fucking writing it for them they can stew in their own fucking mediocrity

Matthew Booth

@hipsterelectron @dalias On a related note, are there any language-specific models out there which take a parsed intermediate representation as input and confine themselves to valid output?

d@nny "disc@" mcClanahan

@mattb @dalias i asked an LLM engineer about this and he basically said nobody cares about it because it requires a lot of work (domain expertise). so i'm vaguely confident that, especially if you define a generative model not in terms of crass next-token prediction but using existing methods of program synthesis via a parse tree or ideally an IR of some sort, you could generate a significantly better form of autocomplete trained on e.g. just the code in a small monorepo, or just all the code checked out on your own machine. i think part of the reason copilot didn't release tiered versions according to license (would have been so. fucking. easy. but their goal is to destroy copyright enforcement, not to build anything useful) is because it really sucks unless it has a ridiculous amount of data

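For what "confine themselves to valid output" could look like mechanically, here is a minimal sketch of grammar-constrained sampling. Everything in it (the toy grammar, `legal_next`, `fake_logits`) is invented for illustration; a real system would derive the mask from a parser state or an IR type checker rather than hand-written rules.

```python
import random

# Toy grammar oracle for a tiny arithmetic language: given the tokens
# emitted so far, return the token types that may legally come next.
NUMS = ["1", "2", "x"]
OPS = ["+", "*"]
VOCAB = NUMS + OPS + ["(", ")", "<eos>"]

def legal_next(tokens):
    if not tokens or tokens[-1] in OPS + ["("]:
        return NUMS + ["("]                      # an operand must follow
    open_parens = tokens.count("(") - tokens.count(")")
    allowed = list(OPS)
    allowed.append(")" if open_parens > 0 else "<eos>")
    return allowed

def fake_logits():
    # Stand-in for model scores over the vocabulary.
    return {tok: random.random() for tok in VOCAB}

def constrained_sample(budget=12):
    tokens = []
    while True:
        allowed = legal_next(tokens)
        if len(tokens) >= budget:
            # Over budget: steer toward termination while staying valid.
            allowed = [t for t in allowed if t in (")", "<eos>")] or [NUMS[0]]
        scores = fake_logits()
        tok = max(allowed, key=lambda t: scores[t])  # mask, then pick best
        if tok == "<eos>":
            break
        tokens.append(tok)
    return " ".join(tokens)

print(constrained_sample())  # e.g. "( x + 2 ) * 1" -- always well-formed
```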

Tane Piper ⁂

@hipsterelectron @mattb @dalias this here.

Hand-waving away some of the infra improvements and some reasoning capabilities: LLMs are just Markov chains with all their transitions pre-computed in lookup tables and loaded into memory.

This is why they will never beat expert systems at reasoning - because that's not what next token prediction is.

Side note: I love the idea of building an AST-based model to query rather than a token-based one.
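
For readers who haven't seen one, this is what a literal Markov chain text model looks like; the analogy to transformers is loose (they condition on the whole context, not one previous word), but it makes "next-token prediction from counted patterns in a lookup table" concrete. The corpus is invented.

```python
import random
from collections import defaultdict

# Literal Markov chain over words: count bigram transitions from a corpus
# into a lookup table, then generate by sampling the next word from it.
corpus = "the printer melts plastic and the printer moves fast".split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start="the", length=6):
    out = [start]
    while len(out) < length:
        choices = table.get(out[-1])
        if not choices:        # dead end: this word was never followed
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate())  # e.g. "the printer moves fast"
```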

d@nny "disc@" mcClanahan

@tanepiper @mattb @dalias it's a fantastic fucking idea i just have other things i care more about. i was also going to make a cross-language lsp server which i realized would serve as a great basis for this work but then i realized actually i myself would never use it bc i strongly prefer regex search with my emacs extension. i'm hoping to apply for phd research on regex engine techniques and specifically am working on a new regex engine for emacs. this is all because i think tree-sitter is horrible and i have spent five years on a theory/implementation of high-performance resumable non-contiguous parsing techniques which compose sub-matchers

Adrian Cochrane

@hipsterelectron @mattb @dalias This has been explored in the past (known variously as e.g. "Evolutionary Programming" or "Genetic Programming"), where you take a bunch of randomly-generated programs in their parsed format, copy/transform/combine them, and assess the ones which best solve the problem to semi-randomly go into the next round.

But Neural Nets seem to be the only Machine Learning tactic which gets any attention...
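
A minimal sketch of that loop, assuming the genetic-programming variant where individuals are expression trees; the target function, operators, and all constants are invented, and as noted above a real system would also do crossover (combining parents), not mutation alone.

```python
import operator
import random

# Tiny genetic-programming loop: evolve expression trees (nested tuples)
# toward an invented target, f(x) = x*x + 1.
OPS = {"+": operator.add, "*": operator.mul}

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1.0])
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, a, b = tree
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(tree):
    # Sum of squared errors against the target on a few sample points.
    try:
        return sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(-3, 4))
    except OverflowError:
        return float("inf")

def mutate(tree):
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return rand_tree(2)                      # replace a random subtree
    op, a, b = tree
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

pop = [rand_tree() for _ in range(60)]
for _ in range(40):
    pop.sort(key=fitness)                        # keep the best, breed the rest
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]
pop.sort(key=fitness)
print(pop[0], "error:", fitness(pop[0]))
```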

David Mankins

@hipsterelectron @mattb @dalias

I think the problem is getting a mapping from a textual program description to the intermediate representation. The LLMs do their coding tricks by associating code with accompanying discussion and comments. What they want is to let you use text to describe your problem, then have the system spit out plausible code.

I think your idea implicitly requires the model to actually have some understanding of what it's doing.

d@nny "disc@" mcClanahan

@lain_7 @mattb @dalias to paraphrase @emilymbender, it's just acting as a much worse search engine at that point. erasing copyright/attribution is a positive for the monied interests pushing these machines over ones incorporating any level of domain expertise

David Mankins

@hipsterelectron @mattb @dalias @emilymbender

well, that’s how LLMs work, isn’t it? Are you talking instead about a hypothetical system that has some understanding of the semantics (represented, say, in a knowledge base of some sort) of the ASTs and transformations of them?

I’m guessing that getting the semantics into the system might be a challenge.

There was work that tried to move from formal(ish) spec to code in the 80s or 90s. Maybe that stuff could be resurrected, taking advantage of greater computing power and maybe the translation abilities of transformers.

Or maybe I’m misunderstanding you, if so, apologies.

I've been wondering if one could use something like the Berkeley parser to parse text into SVO triples that could be turned into assertions that populate (or supplement) a knowledge base, then use that knowledge base to answer questions. One nice feature of that is that you could store the provenance of the assertion in your knowledge base, too.

Or maybe I'm 20 years behind the state of the art.
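
A sketch of the triples-plus-provenance idea, using spaCy's dependency parser as a stand-in for the Berkeley parser (assumes `pip install spacy` and the `en_core_web_sm` model; the extraction rule is deliberately naive).

```python
import spacy  # stand-in parser; any constituency/dependency parser works

nlp = spacy.load("en_core_web_sm")

def svo_triples(text, source):
    """Extract naive (subject, verb, object) assertions with provenance."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [w for w in token.children if w.dep_ in ("nsubj", "nsubjpass")]
            objects = [w for w in token.children if w.dep_ in ("dobj", "obj")]
            if subjects and objects:
                triples.append({"s": subjects[0].lemma_, "v": token.lemma_,
                                "o": objects[0].lemma_, "provenance": source})
    return triples

kb = svo_triples("The heater melts the plastic. The nozzle deposits a layer.",
                 source="printer-manual.txt")
print(kb)  # e.g. [{'s': 'heater', 'v': 'melt', 'o': 'plastic', 'provenance': ...}]
```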

Emily M. Bender (she/her)

@mattb @hipsterelectron @dalias

Yes, there is a long tradition of parsing into semantic representations, and even work on generating from them. If you look at it that way, you immediately see that generation of grammatical strings alone isn't really enough. You need to have a way to connect the semantic representations to some model of the world, and determine what valid things you want to say.
>>

Emily M. Bender (she/her)

@mattb @hipsterelectron @dalias

One of the issues with LLMs is that they provide apparent fluency on unlimited topics, making it seem like you don't need to do the extremely difficult world modeling work on those topics...

Tane Piper ⁂

@emilymbender @mattb @hipsterelectron @dalias LLMs are just Ricardian models of the world (it's clear the people [outside academia] who make them think they will just grow in knowledge infinitely and perfectly)

Tane Piper ⁂

@emilymbender @mattb @hipsterelectron @dalias I make this as my own observation, not as an explanation. Obviously you know more in the academic field, but I also observe the practitioner space, where we are looking at putting them in front of people, and I'm not so sure.

Not every message has the intent you seem to be alluding to

Emily M. Bender (she/her)

@tanepiper Please feel free to make your observations outside of my mentions, then. As it stands, you have addressed this comment to me, in response to my post, without any connective text indicating how it is supposed to relate. It reads as if you felt that I needed to be enlightened.

Emily M. Bender (she/her) replied to Emily M. Bender (she/her)

@tanepiper Also, in case you missed it, mansplaining is never about intent.

Tane Piper ⁂ replied to Emily M. Bender (she/her)

@emilymbender no, apologies if it came off that way - reading it with a tinge of sarcasm and deadpan humour helps (but of course that does not come across in text). Many sales teams promise infinite productivity gains from these products and it's exhausting. I've hopefully clarified this now.

FWIW I was already in this particular thread, just a different branch 🤷🏼‍♀️

tane.codes/@tanepiper/11274034

Jeffrey Hulten

@dalias

SCOTUS basically did the same thing in reversing Chevron. Law is a complex language model reinforcing bigotry and protecting the status quo, not a method for separating fact and falsehood (especially in the face of new information).

Raven Onthill

@dalias they want badly to believe that minds can be reduced to simple stochastic models. This wasn't an entirely unreasonable hypothesis 20 years ago, but at this point it doesn't look like it's correct.

Rich Felker

@ravenonthill The concept is still vaguely plausible, but their idea for how to achieve it is utter bullshit.

If you compare how human minds are "trained", there are multiple feedback layers in the form of consequences, and most importantly, we select very carefully what training inputs are used rather than throwing giant mostly wrong and mostly evil corpuses at children, and most of the training is experiential not ingesting word soup.

Sören Meyer-Eppler

@dalias I agree with your general argument, but my uneducated guess is that routing and timing for 3D printing would be full of NP-complete optimization problems where heuristic solutions are appropriate? If so, maybe throwing an AI at it is an expensive but not entirely misguided approach?

Rich Felker

@BuschnicK There's a tiny chance that might be right if the topic were travel order, but the topic was literally something where the answer is as dumb as "you can only exceed your heater's sustained melt rate capability in short bursts, and doing that on the outside surface may look bad".
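
A back-of-envelope version of that constraint (every number below is invented, not from any printer's datasheet): volumetric flow is line width times layer height times speed, and printing above the sustained melt rate draws down a finite reserve of already-molten plastic.

```python
# Illustrative numbers only -- not from any real printer's datasheet.
LINE_WIDTH = 0.45      # mm
LAYER_HEIGHT = 0.2     # mm
SUSTAINED_MELT = 11.0  # mm^3/s the heater can melt indefinitely
BURST_RESERVE = 40.0   # mm^3 of already-molten headroom in the melt zone

def max_sustained_speed():
    # flow (mm^3/s) = width (mm) * height (mm) * speed (mm/s)
    return SUSTAINED_MELT / (LINE_WIDTH * LAYER_HEIGHT)

def burst_time(speed):
    """Seconds we can print above the sustained rate before the reserve runs out."""
    excess = LINE_WIDTH * LAYER_HEIGHT * speed - SUSTAINED_MELT
    return float("inf") if excess <= 0 else BURST_RESERVE / excess

print(f"sustained limit: {max_sustained_speed():.0f} mm/s")    # ~122 mm/s
print(f"at 200 mm/s the burst lasts {burst_time(200):.1f} s")  # ~5.7 s
```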

Steve Canon

@dalias at least in part, this is testament to our collective failure to make actual engineering accessible to normal people. I can believe that for a lot of people a usually-good-enough ML solution is easier, even though we know it’s profoundly stupid and wasteful.

G. Wozniak

@steve @dalias Based on my exposure to all this AI stuff, I find those most enthusiastic are those who want to make fast gains in a field they know nothing about. It's not about it getting better, it's about that (perceived) immediate gain.

Rich Felker

@gwozniak @steve IOW it's about being able to butt in, displace experts, and pretend you can do something you're clueless about while making it worse.

Dawn Ahukanna

@bob_zim @gwozniak @steve @dalias
In 2017 I did a keynote presentation on “Data on steroids” in a similar vein.

Joe

@dalias This is only tangentially related, but the kittycad/zoo.dev text-to-cad thing is one of the clearest examples of an AI thing that just doesn't really work

myrmepropagandist

@dalias Can you tell us more about why machine learning isn't a good fit for this? I didn't even know throttling speeds could possibly improve quality. Knowing little, it sounds possible? But what tipped you off that it would not work?
