Sebastian Wick

@marcan

> And that abstraction doesn't have stuff like vblank IRQs, or traditional cursor planes, and is quite opinionated about pixel formats and colorspaces

What does it mean not to have a vblank IRQ? I know that on AMD you program some registers and the firmware then does a page flip when it has to but you still know when all of this happens.

How opinionated is it on colorspaces? We want to change how KMS works, and implementing policy in firmware sounds horrible.

lore.kernel.org/dri-devel/QMer

Hector Martin

@swick We know when page flips happen, we just can't "wait for vblank" *without* an associated page flip which X (only) wants.

In principle we could emulate that, but we tried and it was a mess and made things worse.

Re colorspaces, they are intertwined with pixel formats. Only some combinations work. For example, we can only get the raw native full-gamut RGB with 10-bit pixel formats; 8-bit pixel formats are stuck at sRGB. Some other options for 10-bit modes use a "wide RGB" convention where the components go from -0.75 to 1.25, with 0 at code 384 and 1 at code 895. The primaries are those of some narrower colorspace (sRGB or BT.2020 or whatever), but you can go out of gamut with the extra headroom to encode more colors. The same colorspace options with 8-bit pixel formats don't have the extra headroom.

And then only some of these options work for some hardware planes.
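To make the "wide RGB" numbers above concrete, here is a minimal sketch of what that encoding looks like, assuming only the values stated in the thread (0.0 maps to code 384, 1.0 to code 895, in a 10-bit component; the function names are hypothetical):

```python
def wide_rgb_encode(v):
    """Map a normalized component value to a 10-bit code, assuming the
    convention described above: 0.0 -> 384, 1.0 -> 895, with headroom
    for values roughly in -0.75..1.25 before clamping."""
    code = round(384 + v * (895 - 384))
    return max(0, min(1023, code))  # clamp to the 10-bit range

def wide_rgb_decode(code):
    """Inverse mapping: recover the normalized component from a 10-bit code."""
    return (code - 384) / (895 - 384)
```

With 511 codes per unit range, values above 1.0 (up to the headroom limit) still land inside the 10-bit range instead of clipping, which is the whole point of the convention.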


Sebastian Wick

@marcan Pixel format and color space being intertwined is already a big yikes and can't be expressed anywhere. The other stuff sounds like you attach some colorspace metadata to a plane and the conversions and output are opaque. Won't work with the new KMS API.

That's what you get when you design hardware for a specific compositor, I guess...

Hector Martin

@swick The actual API we get is pixel format, colorspace, and EOTF per plane. It's just that only some combinations work or the interpretation of the range of the primaries changes with the bit width.

We might be able to emulate a bunch of the missing combinations by manually setting color transformation matrices in the driver, which is probably not *too* horrible but not ideal.

Most of this is down to firmware limitations. Although we can poke the hardware registers directly, that's a giant can of worms and we have no good way to step on top of the firmware like that. Throwing away the firmware is technically possible but out of the question, because we have no tooling to reverse engineer below it and it does a *huge* amount of work for us. As quirky as the firmware interface is, and as painful as it's been to deal with (marshalled C++ method calls, seriously), trying to reimplement everything ourselves would be way worse.
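For illustration of the CTM-emulation idea: a driver could fold a missing colorspace conversion into a per-plane 3x3 color transformation matrix applied at scanout. This is only a sketch, not the actual driver code; the well-known sRGB-to-XYZ (D65) matrix is used purely as a stand-in for whatever conversion would actually be needed:

```python
import numpy as np

# Stand-in matrix: linear sRGB -> CIE XYZ (D65). A real driver would program
# whatever 3x3 conversion closes the gap between the requested and supported
# colorspace combinations.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def apply_ctm(rgb, ctm=SRGB_TO_XYZ):
    """Apply a CTM to one linear-light RGB triple, as the scanout hardware
    would do for every pixel of the plane."""
    return ctm @ np.asarray(rgb, dtype=float)
```

The catch mentioned in the thread applies: this only helps for conversions that are linear in the first place, and only if the firmware lets the driver set the matrix per plane.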

Sebastian Wick

@marcan The intertwining thing is really annoying but you might be able to work around it somehow.

The way it does color management, on the other hand, is an issue. Color conversions are not well defined and contain policy. Compositors have a policy for their shader path, and if offloading to KMS introduces another policy, you can see visual glitches when offloading kicks in. So either compositors implement whatever policy the specific KMS driver has, or they just don't offload things.

Sebastian Wick

@marcan Nvidia already has a bit of a silly color pipeline but at least it is well-defined. From your description you're working with a completely opaque system and don't even have enough information to replicate it in shaders, so opportunistic offloading won't work at all without glitches.

Hector Martin

@swick I mean I spent a bunch of time looking at color ramps to work this out so far, and I'm pretty sure we can characterize whatever is going on to the point we can replicate it in shaders if need be.

Like we're doing 2-stage thermal capacity/resistance modeling for speaker voice coil power dissipation, it's not our first rodeo with characterizing hardware :P
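For context on what a 2-stage capacity/resistance model looks like, here is a minimal sketch: two thermal masses (voice coil and magnet assembly), each with a heat capacity, coupled by thermal resistances down to ambient. All parameter names and values here are hypothetical, not the actual speaker-safety model:

```python
def step_thermal(temps, power, dt, params):
    """One forward-Euler step of a hypothetical 2-stage RC thermal model.

    temps:  (coil temp, magnet temp) in deg C
    power:  electrical power dissipated in the coil, in watts
    params: (R coil->magnet, C coil, R magnet->ambient, C magnet, T ambient)
    """
    t_coil, t_magnet = temps
    r_cm, c_coil, r_ma, c_magnet, t_amb = params
    q_cm = (t_coil - t_magnet) / r_cm   # heat flow coil -> magnet
    q_ma = (t_magnet - t_amb) / r_ma    # heat flow magnet -> ambient
    t_coil += dt * (power - q_cm) / c_coil
    t_magnet += dt * (q_cm - q_ma) / c_magnet
    return (t_coil, t_magnet)
```

Characterizing the display pipeline's color handling is the same kind of exercise: pick a model structure, feed in known stimuli (color ramps instead of power steps), and fit the parameters to the observed response.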

Hector Martin

@swick Come to think of it, I should stick HDMI capture cards onto the machines in the CI test rack I plan on building. Ideally ones where I can hack around the EDID. That will let me add the display pipes to the CI matrix, including end-to-end color stuff :)

Sebastian Wick

@marcan Not sure how reliable a capture card will be. We really want chamelium boards in the freedesktop CI tbh.

Hector Martin

@swick Oh, I know it's a crapshoot, I've gone through a lot of them already (and hacked some firmwares). I'm fairly confident I can put something together that works reasonably enough though.

Chamelium sounds nice, is that available for purchase? I have an NeTV2 lying around which should be able to do very controlled tests too.

Sebastian Wick replied to Hector

@marcan AFAIK you can't buy the Chamelium boards. They have an email address on the site so you can try to contact them (but they ignored me when I wrote). Others here might know more.

Hector Martin replied to Sebastian

@swick Ah, I'll probably go the NeTV2 route then. It should be able to effectively do the same thing Chamelium does, and you can actually buy it (and I have one already, plus I'm friends with the guy who designed it).

Sebastian Wick replied to Hector

@marcan Please share if you manage to do something useful with it! It won't be enough to test HDR, but might be enough for everything else, so I'm interested.

Hector Martin replied to Sebastian

@swick I don't see why HDR wouldn't work? It's open source, you can make the firmware grab the raw bits if need be. Obviously has bandwidth limitations but as long as you don't need to test the corner of the resolution/color depth matrix...

Sebastian Wick replied to Hector

@marcan Thinking more about the metadata, InfoFrames, VSC SDP, ...

But sure, you can still test whether the pixels are as expected, no matter if it's SDR or HDR, and that's already really useful.

Hector Martin replied to Sebastian

@swick We can capture all the raw metadata too, it's an FPGA. I'm actually thinking of just having it dump the entire frame bitstream and doing all the decoding in software, so we can have really fancy diagnostics for CI.

Sebastian Wick replied to Hector

@marcan I thought, at least on Chamelium, that there is a display controller chip in front of the FPGA? TBH, I haven't looked that closely into it yet. If we can get the entire bitstream in software, that would be amazing, and indeed everything required for testing HDR as well.

Hector Martin replied to Sebastian

@swick Not on NeTV2, it goes straight into the FPGA and we can get the entire bitstream.

Martin Roukala (né Peres) replied to Hector

@marcan @swick Lol, been chatting about that at XDC with Leo from AMD. We agreed that any open Chamelium would need to act more like an oscilloscope and less like a normal receiver. Specify your trigger, wait for the symbols to come. Decode that using your CPU.

This way, every new feature can be tested without having to hack the gateware, including USB, audio, .... It also means that testing anything related to DP-MST would be much easier than with real hardware.

We both have an NeTV2 now :D

Sebastian Wick

@marcan That sounds much better than I initially thought and yeah, it should be possible to expose in the new API then.

It might not be of much use though. The more specific the conversions are the less likely it will match whatever the compositor chooses to do.

And if you can't give us a no-op/passthrough path then we're back to square one because then the policy implemented with shaders in the compositor will get mangled by the scanout color pipeline.

Hector Martin

@swick The pass-through path is 10bit native gamut RGB, which we do have and is what KWin-Wayland picks by default these days. We just don't have it for 8-bit formats (which only the TTY and X seem to really love to use).

Hector Martin

@swick In principle, as long as whatever color management we expose is well specified, it should work, no? Either we implement standard options that are strictly specified, and/or we expose strictly specified CTMs and LUTs; and if we can't, we just don't expose any of it and let the compositor do it in software.
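For reference, KMS already has a strictly specified precedent here: the CTM property's matrix entries (`struct drm_color_ctm`) are sign-magnitude S31.32 fixed-point values, with the sign in bit 63 and the magnitude scaled by 2^32. A minimal encoder sketch (the helper name is ours, not the kernel's):

```python
def to_s31_32(v):
    """Encode a float as the sign-magnitude S31.32 fixed-point format used
    by the KMS CTM property (struct drm_color_ctm): sign in bit 63,
    magnitude in the remaining bits, scaled by 2**32."""
    sign = (1 << 63) if v < 0 else 0
    mag = int(round(abs(v) * (1 << 32)))
    return sign | mag
```

Exposing the driver's conversions in a format this precisely defined is what would let a compositor replicate them bit-for-bit in shaders.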

Sebastian Wick

@marcan Yes, whatever is well-specified can be exposed, but if the hardware/firmware does anything to the color at any point that is not well-specified, the whole system falls apart. From your description it sounds like you specify the input and output and the rest is opaque, but I would be very happy if that's not the case.
