@bramus @hi_mayank @Meyerweb That’s kinda why I’m suggesting supporting grammars provided by the author instead of the browser shipping a fixed set of languages out of the box. It’d be language/dialect/version agnostic. The browser would use the grammar to generate a parser internally for tokenizing blocks of text. The author could link to a grammar, give it a name, and then point any code block at that named grammar when they want it highlighted.
```html
<link rel=grammar type=text/peg href=/css.peg name=css-2025>
<code grammar=css-2025>
@scope { /* some code */ }
</code>
```
@knowler @bramus @hi_mayank @Meyerweb This is a very cool API, and I can definitely see value in removing the JS dependency.
I wonder if a slightly more feasible approach might be to define a syntax for declaring token ranges. Then Prism could emit these spans and styling. Something like:
```
<code highlights="keyword 0 4 identifier 6 8 comment 12 18">
const foo = // ...
</code>
```
(Range 0-4 is a `keyword`, range 6-8 is an `identifier`, etc. All subject to `::highlight` styles of those names.)
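A minimal sketch of the styling side, assuming the browser registers the names from the `highlights` attribute as custom highlights (the colors here are just illustrative, and only highlight-applicable properties are used):
```css
/* Hypothetical styles for the named token ranges above, via the
   CSS Custom Highlight API's ::highlight() pseudo-element. */
::highlight(keyword)    { color: rebeccapurple; }
::highlight(identifier) { color: steelblue; }
::highlight(comment)    { color: gray; text-decoration: underline dotted; }
```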
This feels like a lot less complexity in the browser, but the downside is that each `<code>` block needs its own token ranges. I think Prism could still do this, but the advantage of @knowler's approach is using a single grammar for all `<code>` blocks in that language.