d@nny "disc@" mcClanahan

@mattb @dalias i asked an LLM engineer about this and he basically said nobody cares about it because it requires a lot of work (domain expertise) so i'm vaguely confident that especially if you define a generative model not in terms of crass next-token prediction but using existing methods of program synthesis via a parse tree or ideally an IR of some sort you could generate a significantly better form of autocomplete trained on e.g. just the code in a small monorepo, or just all the code checked out on your own machine. i think part of the reason copilot didn't release tiered versions according to license (would have been so. fucking. easy. but their goal is to destroy copyright enforcement not to build anything useful) is because it really sucks unless it has a ridiculous amount of data
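
As a rough sketch of what "autocomplete over a parse tree, trained on just your local code" could look like: the toy below uses Python's ast module as the parser and plain bigram counts over AST node types as a (very crude) stand-in for a real program-synthesis model. Every name and design choice here is illustrative, not any existing tool.

```python
# Toy structure-aware completion model trained only on local code: instead of
# predicting raw next tokens, it learns bigram statistics over AST node types
# from whatever .py files it is given.
import ast
import pathlib
from collections import Counter, defaultdict

def node_type_sequence(source: str) -> list[str]:
    """Flatten a module's AST into a sequence of node type names."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

def train(paths: list[pathlib.Path]) -> dict[str, Counter]:
    """Build bigram counts over AST node types from a small local corpus."""
    bigrams: dict[str, Counter] = defaultdict(Counter)
    for path in paths:
        try:
            seq = node_type_sequence(path.read_text())
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that don't parse
        for prev, nxt in zip(seq, seq[1:]):
            bigrams[prev][nxt] += 1
    return bigrams

def suggest(bigrams: dict[str, Counter], current: str, k: int = 3) -> list[str]:
    """Suggest the k most likely AST node types to follow `current`."""
    return [name for name, _ in bigrams[current].most_common(k)]

# Train on all Python files under the current directory (standing in for
# "just the code in a small monorepo") and ask what follows a function def.
if __name__ == "__main__":
    model = train(list(pathlib.Path(".").rglob("*.py")))
    print(suggest(model, "FunctionDef"))
```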

Tane Piper ⁂

@hipsterelectron @mattb @dalias this here.

Hand-waving away some of the infra improvements and some reasoning capabilities: LLMs are just Markov chains with all their transition probabilities pre-computed in lookup tables and loaded into memory.

This is why they will never beat expert systems at reasoning - because that's not what next token prediction is.

Side note: I love the idea of building an AST-based model to query rather than a token-based one.
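
To make the lookup-table framing above concrete, here is a toy bigram Markov chain in Python: the entire "model" is a precomputed table of next-token counts, sampled at generation time. Real LLMs condition on long contexts through learned weights, so this is the analogy being drawn, not an equivalence.

```python
# Next-token prediction reduced to a precomputed lookup table.
import random
from collections import Counter, defaultdict

def build_table(corpus: str) -> dict[str, Counter]:
    """Precompute next-token counts for every token in the corpus."""
    tokens = corpus.split()
    table: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def generate(table: dict[str, Counter], start: str, length: int = 10) -> str:
    """Generate by repeatedly looking up and sampling the next token."""
    out = [start]
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:
            break
        tokens, weights = zip(*candidates.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

table = build_table("the cat sat on the mat and the cat slept")
print(generate(table, "the"))
```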

d@nny "disc@" mcClanahan

@tanepiper @mattb @dalias it's a fantastic fucking idea i just have other things i care more about. i was also going to make a cross-language lsp server which i realized would serve as a great basis for this work but then i realized actually i myself would never use it bc i strongly prefer regex search with my emacs extension. i'm hoping to apply for phd research on regex engine techniques and specifically am working on a new regex engine for emacs. this is all because i think tree-sitter is horrible and i have spent five years on a theory/implementation of high-performance resumable non-contiguous parsing techniques which compose sub-matchers


Adrian Cochrane

@hipsterelectron @mattb @dalias This has been explored in the past (known variously as e.g. "Evolutionary Programming"), where you take a bunch of randomly-generated programs in their parsed format, copy/transform/combine them, and assess the ones which best solve the problem to semi-randomly go into the next round.

But Neural Nets seem to be the only Machine Learning tactic which gets any attention...
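
For concreteness, a mutation-only toy of the loop described above (real systems of this kind also recombine subtrees between parents): random arithmetic expression trees are scored against a target function, and the fittest are copied and perturbed into the next round. Everything here is illustrative, not any particular historical system.

```python
import random

# Binary operators available to generated programs.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random expression tree over x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Interpret a tree at the point x."""
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, target=lambda x: x * x + 1):
    """Squared error against the target on sample points; lower is better."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(generations=40, pop_size=60):
    """Keep the best quarter each round; refill by mutating survivors."""
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 4]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=fitness)

best = evolve()
print(best, fitness(best))
```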

David Mankins

@hipsterelectron @mattb @dalias

I think the problem is getting a mapping from a textual program description to the intermediate representation. The LLMs do their coding tricks by associating code with accompanying discussion and comments. What they want to do is let you use text to describe your problem, then have the system spit out plausible code.

I think your idea implicitly requires the model to actually have some understanding of what it’s doing.

d@nny "disc@" mcClanahan

@lain_7 @mattb @dalias to paraphrase @emilymbender, it's just acting as a much worse search engine at that point. erasing copyright/attribution is a positive for the monied interests pushing these machines over ones incorporating any level of domain expertise

David Mankins

@hipsterelectron @mattb @dalias @emilymbender

well, that’s how LLMs work, isn’t it? Are you talking instead about a hypothetical system that has some understanding of the semantics (represented, say, in a knowledge base of some sort) of the ASTs and transformations of them?

I’m guessing that getting the semantics into the system might be a challenge.

There was work that tried to move from formal(ish) spec to code in the ’80s and ’90s. Maybe that stuff could be resurrected, taking advantage of greater computing power and maybe the translation abilities of transformers.

Or maybe I’m misunderstanding you; if so, apologies.

I’ve been wondering if one could use something like the Berkeley parser to parse text into SVO triples that could be turned into assertions that populate (or supplement) a knowledge base, then use that knowledge base to address questions. One nice feature of that is that you could store the provenance of each assertion in your knowledge base, too.

Or maybe I’m 20 years behind the state of the art.
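
A minimal sketch of that pipeline, with a deliberately naive three-word-clause extractor standing in for a real constituency parser (the Berkeley parser does far more): each assertion is stored as a subject-verb-object triple tagged with the provenance of the sentence it came from, and queries return that provenance alongside each fact. All names and file sources are hypothetical.

```python
# Toy SVO-triples-to-knowledge-base pipeline with provenance tracking.
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    subject: str
    verb: str
    obj: str
    provenance: str  # where the sentence came from

def extract_svo(sentence: str, source: str) -> Assertion | None:
    """Naive extractor: treats a 3-word clause as subject/verb/object.
    A real system would use a full syntactic parse here."""
    words = sentence.strip(".").split()
    if len(words) != 3:
        return None
    return Assertion(words[0], words[1], words[2], source)

class KnowledgeBase:
    def __init__(self):
        self.assertions: list[Assertion] = []

    def add(self, assertion: Assertion | None):
        if assertion:
            self.assertions.append(assertion)

    def query(self, subject: str) -> list[Assertion]:
        """Answer questions by lookup, returning provenance with each fact."""
        return [a for a in self.assertions if a.subject == subject]

kb = KnowledgeBase()
kb.add(extract_svo("cats chase mice.", source="doc1.txt"))
kb.add(extract_svo("mice eat cheese.", source="doc2.txt"))
for fact in kb.query("cats"):
    print(fact.subject, fact.verb, fact.obj, "(from", fact.provenance + ")")
```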

