🐧sima🐧

@marcan the problem is people tried to fix the X11 bare metal situation

and realized that you more or less have to reimplement an entire wayland server, with all its compositing stack and redraw loop and modeset handling and everything else

and then you need to retrofit the X11 window model into that rendering pipeline, all while keeping the glamor acceleration going

people actually built this. it's called $wayland_compositor + Xwayland

that stack _is_ your modern bare metal X11 stack

🐧sima🐧

@marcan Xorg X11 plus X11 compositor to collect all the windows and bake them into a desktop does the same as wayland compositor + Xwayland

except the latter has an actually reasonable architecture with a much cleaner split of responsibilities. and the added possibility that you can actually use modern hw fully if your app can speak wayland natively

Kevin Karhan :verified:

@sima @marcan which is why Xwayland is being used in Proton & DXVK on Wayland-supporting hardware, resulting in "better than bare metal Windows" performance on the same hardware with near-zero effort.

Even if you downgrade from an SSD to an HDD.

tnt

@kkarhan @sima @marcan Does that include Nvidia HW? I never tested it myself; it's just that from what I've read, and from recently published benchmarks, my perf would be way worse with Wayland.
(And AFAIK it's being worked on and improving, but I'm just looking to know the status perf-wise today, running Proton on Xorg vs Wayland.)

Hector Martin

@tnt @kkarhan @sima Nvidia Wayland support is known to be horrible and broken, and that's 100% Nvidia's fault and a major cause of reputational damage to Wayland.

Andy

@tnt @kkarhan @sima @marcan After marcan's tweet yesterday, I decided to give Wayland on an RTX 2080 Ti another go. I've switched back to Xorg now, because Nvidia's Wayland support is absolutely horrible:

The most annoying part: Electron apps and Proton with DXVK show a weird behavior where old frames are displayed between newer frames. While typing in Slack, for example, the letters you just typed vanish for a split second and then show up again. The same happens in Far Cry 6 when running in gamescope + Proton, resulting in horrible stuttering. Some synchronization primitive is broken. I'm not the only one, and since I'm running the closed-source driver, the open one seems to be affected as well: github.com/NVIDIA/open-gpu-ker

Also I tried to 3D print something and Cura just crashed with an int3 on its way through `gdk_display_manager_open_display`. I didn't debug it further and just switched back to Xorg.

See you in a year (or when I decide to buy new, non-Nvidia hardware)!

Hector Martin

@G33KatWork @tnt @kkarhan @sima Yes, it's completely broken on Nvidia, and it's entirely Nvidia's fault, and this is one major reason Wayland undeservedly gets a bad rap.

For the record, the missing synchronization feature in the Nvidia proprietary drivers is implicit sync. It's been years and they still don't have it. Wayland is completely broken without it (so is X on modesetting for that matter, but presumably they do something else in their proprietary DDX).

@lina implemented it for the Asahi GPU driver in two weeks, plus maybe a couple more weeks of debugging, give or take. It's already been shipping to users for a while, with a few rare glitches that have been identified and fixed for the next version already.

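To make "implicit sync" concrete: the kernel attaches fences to a dma-buf as GPU jobs touch it, and every consumer is expected to wait on those fences before using the buffer. On recent kernels (6.0+, which added the dma-buf sync_file ioctls) you can even pull the implicit fences out and wait on them explicitly. A minimal sketch, with error handling mostly elided and the function name made up for illustration:

```c
#include <unistd.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Extract the implicit fences attached to a dma-buf as a sync_file
 * and block until they signal. Under implicit sync a compositor gets
 * this ordering for free: its reads of a client buffer happen after
 * the client's pending GPU writes have finished. */
static int wait_for_implicit_fences(int dmabuf_fd)
{
    struct dma_buf_export_sync_file args = {
        /* We intend to read, so fetch the pending-write fences. */
        .flags = DMA_BUF_SYNC_READ,
        .fd = -1,
    };

    if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args) < 0)
        return -1;

    /* A sync_file fd signals completion via poll(). */
    struct pollfd pfd = { .fd = args.fd, .events = POLLIN };
    poll(&pfd, 1, -1);
    close(args.fd);
    return 0;
}
```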

Luna 🇩🇰🦊 //nullptr::live

@marcan @G33KatWork @tnt @kkarhan @sima @lina Sadly NVIDIA is still more or less the biggest player, so NVIDIA not working well with it will cause headaches for a lot of people, including me. People are not going to switch to different, worse-performing GPUs for their tasks (even if the main perf advantage NVIDIA has is brute force with massive power draw).

Hopefully now that NVIDIA has an open kernel driver some parts can be alleviated??

DistroHopper39B :verified:

@LunaFoxgirlVT @marcan @G33KatWork @tnt @kkarhan @sima @lina this goes along with NVIDIA supporting CUDA while AMD's Linux OpenCL support comes in the form of a DKMS driver that only seems to support a 3-year-old Ubuntu LTS and is broken on the last 2 LTS kernels (or at least was, the last time I tried to use it)

🐧sima🐧

@LunaFoxgirlVT @marcan @G33KatWork @tnt @kkarhan @lina the open driver only fixes nvidia's issue of no longer being able to cheat and get access to GPL-only kernel services, which they need for cuda

the other thing they had to fix is make the fw redistributable, which was the total killer before

it's still a gigantic mess because they don't do any kind of reasonable fw api versioning, which means doing a real linux driver with all the features in upstream is still very hard, and unnecessarily so

Hector Martin

@sima @LunaFoxgirlVT @G33KatWork @tnt @kkarhan @lina To be fair Apple also aren't doing any FW API versioning, and we're dealing with it anyway :P

🐧sima🐧

@marcan @LunaFoxgirlVT @G33KatWork @tnt @kkarhan @lina you don't need to load it from linux (so no lolz with redistribution rights) and I thought in the bootloader entry you can spec which one you want, so that you don't have to support them all? at least it sounded somewhat reasonable

nvidia didn't even do that for years, until they were forced because the kernel's module loader got stricter about enforcing GPL-only module access, and that broke cuda

Hector Martin replied to 🐧sima🐧

@sima @LunaFoxgirlVT @G33KatWork @tnt @kkarhan @lina We pick the supported versions and our installer only offers those, but if it's loaded by Linux itself then you should also be able to restrict the set of supported versions, right?

🐧sima🐧 replied to Hector

@marcan @LunaFoxgirlVT @G33KatWork @tnt @kkarhan @lina yes

maybe it's just me being extremely biased because nvidia has a track record of maximally screwing over the open drivers, but gut feeling is that the nvidia way sounds a lot more messy

like from what I've heard apple's design seems pretty settled, which helps. nvidia's is a first cut because they panicked and tried way too hard to hide stuff in the fw, and some things will need to change drastically, at least for compute in upstream
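
To illustrate the "restrict which fw versions you support" point from a few posts up: on the Linux side a driver can simply carry an allowlist and try only those builds. A hypothetical sketch (driver structure and firmware file names are made up; this is not how any real nvidia or apple driver is laid out):

```c
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/firmware.h>

/* Hypothetical allowlist: the only firmware builds this driver
 * version knows how to talk to. */
static const char * const supported_fw[] = {
    "vendor/gpu-fw-12.3.bin",
    "vendor/gpu-fw-12.4.bin",
};

static int load_pinned_firmware(struct device *dev,
                                const struct firmware **fw)
{
    int i, ret = -ENOENT;

    for (i = 0; i < ARRAY_SIZE(supported_fw); i++) {
        /* First supported build present on disk wins; the caller
         * must release_firmware() when done with it. */
        ret = request_firmware(fw, supported_fw[i], dev);
        if (!ret) {
            dev_info(dev, "loaded firmware %s\n", supported_fw[i]);
            return 0;
        }
    }
    dev_err(dev, "no supported firmware found\n");
    return ret;
}
```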

Hector Martin replied to 🐧sima🐧

@sima @LunaFoxgirlVT @G33KatWork @tnt @kkarhan @lina Yeah, I don't know if it's good or bad but at least for us we *know* we're getting whatever Apple does on macOS and there isn't any room for asking for something else, so the path forward is clear regardless of how easy or hard it is.

🐧sima🐧 replied to Hector

@marcan @LunaFoxgirlVT @G33KatWork @tnt @kkarhan @lina yeah

I guess my worry is that nvidia has a track record of actively making the open stack harder than necessary, and some of the things suggest the new open driver + new fw blob is going to be a repeat

apple seems to just not care, and so designs more things in a way that makes sense, instead of trying real hard to make the gpl'ed kernel driver as small as possible (now that making it impossible is off the table), whether that makes sense technically or not

Kevin Karhan :verified: replied to 🐧sima🐧

@sima @marcan @LunaFoxgirlVT @G33KatWork @tnt @lina *nods in agreement*
#Apple really did leverage the solid basis of #ARM / #ARM64 as a cleaner [not entirely clean tho!] slate on their machines.
And whilst I can only speculate upon how Apple 'feels' about @AsahiLinux, I am convinced that a lot of driver and hardware engineers there would love to commit code if they weren't under dozens of NDAs.

🐧sima🐧 replied to 🐧sima🐧

@marcan @LunaFoxgirlVT @G33KatWork @tnt @kkarhan @lina at least in my experience nothing good ever comes out of designing fw when your principle is to hide all your "vendor value add" in it and move it out of the gpl'ed kernel driver

I've seen that in other places than nvidia, and it's absolute pain. and from what I've heard, this "hide it all in fw and use the kernel as the new shim to get access to gpl stuff" is absolutely their plan

that's also why the fw is ginormous

Gen X-Wing

@marcan @G33KatWork @tnt @kkarhan @sima @lina She did it without proper documentation and without the support of a massive corporation as well.

Let’s face it, it’s not incompetence from nVidia, it’s disinterest. I mean Lina (hi!) is good, but given time and documentation I’m sure I could do it too. Their Windows devs surely could.

Not claiming anything high and mighty with this, but I selected an AMD GPU for this very reason.

tnt

@G33KatWork @kkarhan @sima @marcan

Did you try the part-binary kernel module or the open-gpu-kernel-modules one? Not sure if it makes any difference...

Andy

@tnt @kkarhan @sima @marcan Just the binary driver from the Arch Linux package repository. I didn't even touch the open source driver yet.

🐧sima🐧

@kkarhan @marcan this is why I recommend amd if you want a discrete gpu for linux

it works, and between amd and valve plus the entire community, you get a rather solid stack

unfortunately intel's ARC isn't there yet, the kernel driver is a bit of a crater for a few reasons and the hw still has a few too many kinks. but I'm hopeful there's going to be a solid 2nd discrete gpu option soon

nvidia is for when you require cuda, or just love to feel the pain

Kevin Karhan :verified:

@sima @marcan Exactly...

I'm more convinced that #Intel will get #Arc to the standard of stability their #iGPU|s have had for ages than that #nvidia will be less asshole-ish in the future.

We've seen a complete reversal of tech wisdom, with #AMD & nvidia swapping seats when it comes to recommended hardware, and it's kind of a shame that nvidia got worse...

_L4NyrlfL1I0

@sima @marcan I wonder, would it technically be possible to create a wayland compositor that does just Xwayland, but doesn't do window management and instead lets you use the old X11 window manager interface to manage your windows? If that is easier than trying to keep the X11 DDX stuff on life support, it would be possible to keep old window managers alive until the transition to wayland is completed, while still ditching the bitrotting X11 DDX stack.

🐧sima🐧

@yrlf @marcan possible, yes

the same way it's possible to fix up -modesetting and improve the compositor interface to the point you can do the same things wayland can do

it would be an utter waste of scarce engineer time though

the fundamental issue with Xorg is that the compositor is smeared across two (or three) processes, with a terrible protocol in between. that is the reason you can't really use modern gpu hw to its fullest, and the right fix is to merge these things into one process
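
For a sense of scale, the libwayland server skeleton that an Xwayland-only compositor would grow from is genuinely tiny; all of the real work (globals, rendering, modesetting, spawning rootful Xwayland) goes where the big comment sits. A bare-bones sketch, not a working compositor:

```c
#include <stdio.h>
#include <wayland-server-core.h>

int main(void)
{
    struct wl_display *display = wl_display_create();
    if (!display)
        return 1;

    /* Creates e.g. wayland-0 in XDG_RUNTIME_DIR. */
    const char *socket = wl_display_add_socket_auto(display);
    if (!socket) {
        wl_display_destroy(display);
        return 1;
    }

    /* A real Xwayland-only compositor would now register the
     * wl_compositor/wl_output/wl_seat globals, set up DRM/KMS
     * output, and spawn Xwayland pointed at this socket, leaving
     * window management to the X11 WM running inside it. */
    printf("running on WAYLAND_DISPLAY=%s\n", socket);
    wl_display_run(display);

    wl_display_destroy(display);
    return 0;
}
```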

Oro "it's flatpak time" 🏳️‍🌈

@sima @yrlf @marcan I believe the Xwayland-only compositor in mind here is partially implemented in Termux-x11, if what the dev says is true; it's just a Wayland compositor made to only support Xwayland. The README is confusing, though, so it won't help much.

github.com/termux/termux-x11

bjorn3

@yrlf @sima @marcan XWayland has a fullscreen mode which allows you to run an unmodified X11 based desktop on top of a Wayland compositor: gitlab.freedesktop.org/xorg/xs
