Hector Martin

A bit of (simplified) X history and how we got here.

Back in the 90s and 2000s, X was running display drivers directly in userspace. That was a terrible idea, and made interop between X and the TTY layer a nightmare. It also meant you needed to write X drivers for everything. And it meant X had to run as root. And that if X crashed it had a high chance of making your whole machine unusable.

Then along came KMS, which moved modesetting into the kernel behind a common API, obsoleting the need for GPU-specific drivers just to display stuff. But X kept on using GPU-specific drivers. Why? Because X relies on 2D acceleration, a concept that doesn't even exist any more in modern hardware, so it still needed GPU-specific drivers to implement that.

The X developers of course realized that modern hardware couldn't do 2D any more, so along came Glamor, which implements X's three decades of 2D acceleration APIs on top of OpenGL. Now you could run X on any modern GPU with 3D drivers.

And so finally we could run X without any GPU-specific drivers, but since X still wants there to be "a driver", along came xf86-video-modesetting, which was supposed to be the future. It was intended to work on any modern GPU with Mesa/KMS drivers.

That was in 2015. And here's the problem: X was already dying by then. Modesetting sucked. Intel deprecated their GPU-specific DDX driver and it started bitrotting, but modesetting couldn't even handle tear-free output until earlier this year (2023, 8 whole years later). Just ask any Intel user of the Ivy Bridge/Haswell era what a mess it all is. Meanwhile Nvidia and AMD kept maintaining their respective DDX drivers and largely insulating users from the slow death of core X, so people thought this was a platform/vendor thing, even though X had what was supposed to be a platform-neutral solution that just wasn't up to par.

And so when other platforms like ARM systems came around, we got stuck with modesetting. Nobody wants to write an X DDX. Nobody even knows how outside of people who have done it in the past, and those people are burned out. So X will *always* be stuck being an inferior experience if you're not AMD or Nvidia, because the core common code that's supposed to handle it all just doesn't cut it.

On top of that, ARM platforms have to deal with separate display and render devices, which is something modesetting can't handle automatically. So now we need platform-specific X config files to make it work.
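As a sketch of what such a platform-specific config looks like (the identifier and device path are illustrative — the real KMS node varies per board), the modesetting driver can be pinned to the display-only device with its `kmsdev` option:

```
Section "Device"
    Identifier "display-controller"
    Driver     "modesetting"
    # Display-only KMS node; rendering happens on a separate GPU node.
    # The card path is an example and differs between platforms.
    Option     "kmsdev" "/dev/dri/card0"
EndSection
```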

And then there's us. When Apple designed the M1, they decided to put a coprocessor CPU in the display controller. And instead of running the display driver in macOS, they moved most of it to firmware. That means that from Linux's point of view, we're not running on bare metal, we're running on top of an abstraction intended for macOS' compositor. And that abstraction doesn't have stuff like vblank IRQs, or traditional cursor planes, and is quite opinionated about pixel formats and colorspaces. That all works well with modern Wayland compositors, which use KMS abstractions that are a pretty good match for this model (it's the future and every other platform is moving in this direction).

But X and its modesetting driver are stuck in the past. It tries to do ridiculous things like draw directly into the visible framebuffer instead of a back buffer, or expect there to be a "vblank IRQ" even though you don't need one any more. It implements a software fallback for when there is no hardware cursor plane, but the code is broken and it flickers. And so on. These are all problems, legacy nonsense, and bugs that are part of core X. They just happen to hurt smaller platforms more, and they particularly hurt us.

That's not even getting into fundamental issues with the core X protocol, like how it can't see the Fn key on Macs because Macs have software Fn keys and that keycode is too large in the evdev keycode table, or how it only has 8 modifiers that are all in use today, and we need one more for Fn. Those things can't be properly fixed without breaking the X11 protocol and clients.
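To illustrate the modifier limit: the core protocol carries modifier state in a fixed 8-bit mask, so there are exactly eight slots. A small Python sketch (the Mod1–Mod5 bindings noted in comments are common conventions, not protocol-mandated):

```python
# The eight modifier bits the X11 core protocol defines (as in X.h).
X11_MODIFIERS = {
    "Shift":   1 << 0,
    "Lock":    1 << 1,
    "Control": 1 << 2,
    "Mod1":    1 << 3,  # conventionally Alt
    "Mod2":    1 << 4,  # conventionally Num Lock
    "Mod3":    1 << 5,
    "Mod4":    1 << 6,  # conventionally Super
    "Mod5":    1 << 7,  # conventionally AltGr/Level3
}

# The state field in key/button events is this fixed-width bitmask,
# so a ninth modifier (e.g. Fn) has no bit to live in.
assert len(X11_MODIFIERS) == 8
assert max(X11_MODIFIERS.values()) == 0x80
```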

So no, X will never work properly on Asahi. Because it's buggy, it has been buggy for over 8 years, nobody has fixed it in that time, and certainly nobody is going to go fix it now. The attempt at having a vendor-neutral driver was too little too late, and by then momentum was already switching to Wayland. Had X development continued long enough to get modesetting up to par 8 years ago, the story with Asahi today would be different. But it didn't, and now here we are, and there is nothing left to be done.

So please, use Wayland on Asahi. You only get a pass on using X if you need accessibility features that aren't in Wayland yet.

sfan5 :ablobcatwave:

@marcan
> [...], but modesetting couldn't even handle tear-free output until earlier this year (2023, 8 whole years later)

You mean I can have tear-free output on my Tiger Lake laptop? This still doesn't work today and as far as I'm aware the relevant PR on xserver has been in limbo for 4 years.

Hector Martin

@sfan5 It got merged in January (seriously). I'm not holding my breath that it'll be a great solution though. It's kind of inherently a hack given the X design.

You need to enable it manually with the TearFree option.
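For anyone wanting to try it, a minimal config snippet would look something like this (file name and placement follow the usual xorg.conf.d convention; requires an xserver with the 2023 TearFree work):

```
# e.g. /etc/X11/xorg.conf.d/20-modesetting.conf
Section "Device"
    Identifier "gpu"
    Driver     "modesetting"
    Option     "TearFree" "true"
EndSection
```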

Falk Stern

@marcan „The X11 approach to device independence is to treat everything as a VAX framebuffer on acid“ - The Unix Haters Handbook, ~1994 ;)

Polychrome :clockworkheart:
@marcan one of those times where you don't really want to move on but have to face that you have to.
Karl

@marcan I hope you are aware that educating the silent majority on this topic is a good thing, even for non-asahi users.

Thank you.

✧✦✶Catherine✶✦✧

@marcan oh they finally got tear-free modesetting working on X? sick!!!

🐧sima🐧

@marcan the problem is people tried to fix the X11 bare metal situation

and realized that you more or less have to reimplement an entire wayland server, with all its compositing stack and redraw loop and modeset handling and everything else

and then you need to retrofit the X11 window model into that rendering pipeline, all while keeping the glamor acceleration going

people actually built this. it's called $wayland_compositor + Xwayland

that stack _is_ your modern bare metal X11 stack

🐧sima🐧

@marcan Xorg X11 plus X11 compositor to collect all the windows and bake them into a desktop does the same as wayland compositor + Xwayland

except the latter has an actually reasonable architecture with a much cleaner split of responsibilities. and the added possibility that you can actually use modern hw fully if your app can speak wayland natively

Kevin Karhan :verified:

@sima @marcan which is why Xwayland is being used in Proton & DXVK on Wayland-supporting hardware, resulting in "better than bare metal Windows" performance on the same hardware with near-zero effort.

Even if you downgrade from an SSD to an HDD.

tnt

@kkarhan @sima @marcan Does that include Nvidia HW? I never tested it myself; it's just what I've read that my perf would be way worse with Wayland, and what I saw in recently published benchmarks too.
(and AFAIK it's being worked on, and improving, but I'm just looking to know the status perf-wise today running Proton on Xorg vs Wayland)

Hector Martin

@tnt @kkarhan @sima Nvidia Wayland support is known to be horrible and broken, and that's 100% Nvidia's fault and a major cause of reputational damage to Wayland.

_L4NyrlfL1I0

@sima @marcan I wonder, would it technically be possible to create a wayland compositor that does just Xwayland, but doesn't do window management and instead allows you to use the old X11 window manager interface to manage your windows? If that is easier than trying to keep the X11 DDX stuff on life support, it would be possible to keep old window managers alive until the transition to wayland is completed, while still ditching the bitrotting X11 DDX stack.

[Yaseenist] luna luna :verified_trans: :therian:

@marcan@social.treehouse.systems thank you for these posts, I honestly had no idea about all the technical debt X has before those

Luci xor Amber
@marcan
> It tries to do ridiculous things like draw directly into the visible framebuffer instead of a back buffer

excuse me, why and where?
exus1pl

@marcan do multiple screens on an external adapter work on M1? The only reason I'm stuck with X11 on my laptop is because almost all docking stations don't work on it, or there are problems with screen sharing in Teams. Wayland solves many problems but is also still missing functionality.

Hector Martin

@exus1pl That's a driver issue, it has nothing to do with Wayland the protocol.

We don't support external displays at all yet and when we do it'll be one per port (no MST in this hardware, unless you use Thunderbolt) and it's all going to work as intended with Wayland.

NRoach44

@exus1pl @marcan Ah, if that's a displaylink dock you can thank them for that. If your laptop has a proper USB-C DisplayPort alt mode port, that much will work fine though.

Xerz! :blobcathearttrans:

@marcan OH YEAH I was recently troubleshooting a Nehalem setup and it’s been… “fun” to hear how well older iGPUs work like :blobcatgooglyheadache:

Seirdy

@marcan Thanks for this; now I know why Sway is so much faster than i3 on ARM processors.

Marcin Juszkiewicz

X11 has one more problem with keyboards - only 8 bits for keycodes.

My 10+ year old keyboard has 3 keys which are not seen by X11 but work on the text console (no Wayland here yet).

@marcan

Hector Martin

@hrw That's the Fn key issue I mentioned :p

karolherbst 🐧 🦀

@marcan well.. Nvidia still has native 2D acceleration commands 🙃 But I think it's the only vendor having that.

It also didn't change for 10 years. So maybe it gets deleted at some point.

Hector Martin

@karolherbst Ah, but who says I consider Nvidia hardware "modern"? 🙃

karolherbst 🐧 🦀

@marcan yeah.. though I kinda see their point using the 2D stuff to do large buffer copies instead of running a shader just copying bytes :D

I have no idea if they actually have a 2D block or if it's emulated in shaders on the hardware.

Anyway, the advantage is: implementing a driver is simpler (and has less CPU overhead). And I wouldn't be surprised if that's the reason they kept it, as it's still useful when writing a Vulkan (or whatever API) driver.

Hector Martin

@karolherbst Apple has a blitter too, but that isn't really a 2D engine (it's meant more for copies and mipmapping). Though they still use just shaders copying bits in some cases I think.

karolherbst 🐧 🦀

@marcan yeah I know, the 2D stuff in Nvidia still is able to draw primitives and stuff, so a buffer copy is two lines and one rect 🙃.

They even have polylines. Anyway, in case you are interested, it's all defined here: github.com/NVIDIA/open-gpu-doc

But in the end it doesn't matter, because modern GUI won't use the X API to accelerate drawing their primitives anymore anyway. But it's still there if one wants to use it! :D

No idea if they plan to remove it though.

CounterPillow

@karolherbst @marcan On the ARM SoC side of things, Rockchip likes to throw blitters into everything. Multiple. There's not just the full blitter blocks with gradient fill functions, pixel format conversions, scaling, compositing, and rotating, of which there are sometimes several per SoC, but your video output block can scale and composite too, and your video decoder can scale as well. These are all in the same SoC (RK3588), in addition to a full Mali GPU.

Alba 🌸 :v_pat:

@marcan thank you for this post :blobcatheart:

one question: by "that keycode is too large in the evdev keycode table", do you mean it's outside the [8,255] range of keycodes that the X protocol can carry?

Hector Martin

@mildsunrise Yes, it's more than 8 bits because evdev defines keycodes for all possible keys across all hardware, and there are more than 255 of those.
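A sketch of the arithmetic, assuming the standard KEY_FN value from the kernel's input-event-codes.h and the usual evdev-to-X offset of 8:

```python
# X core keycodes are the evdev code + 8, and must fit in [8, 255].
KEY_A = 30       # from linux/input-event-codes.h
KEY_FN = 0x1D0   # 464 -- also from input-event-codes.h

def x_core_keycode(evdev_code):
    """Return the X11 core keycode, or None if it can't be carried."""
    code = evdev_code + 8
    return code if 8 <= code <= 255 else None

assert x_core_keycode(KEY_A) == 38     # ordinary keys map fine
assert x_core_keycode(KEY_FN) is None  # Fn is invisible to core X11
```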

Dickon Hood

@marcan I just wish they'd managed to keep some sort of network transparency in Wayland. I use it far, far more than is possibly sensible, and I'm going to really miss it once X dies hard enough that I have to migrate.

Dickon Hood

@marcan Thanks for that. I had no idea that existed.

Emmanuele Bassi

@dickon @marcan Nothing written after the XRender extension (2000) uses network transparency on X11. It's all client-side buffers sent as images over the wire—something the X11 core protocol is *shockingly* bad at doing, because it interleaves commands and buffers, and that prevents even the simplest form of compression. Even VNC is more efficient.

If you want remoting with Wayland, I strongly encourage you to look at RDP.

(hic/haec/hoc)

@ebassi @dickon @marcan I think that most people don't really care whether there's real network transparency or not, they just want to type "ssh -Y host", launch a GUI application and see it pop up on their screen. It looks like waypipe could provide this kind of experience but I haven't tried it yet so I don't know if it's full of papercuts
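For what it's worth, waypipe's basic usage is a single wrapper command — a sketch, assuming waypipe is installed on both machines and `someapp` is a placeholder for any native Wayland client:

```shell
# Proxies the Wayland protocol over the ssh connection; the remote
# client's buffers are forwarded and shown by your local compositor.
waypipe ssh user@host someapp
```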

Arjan van de Ven

@ebassi @dickon @marcan
RDP has its own huge mess, since it's actually fundamentally rooted in RC4 crypto, which broke in 2003 or so.
("proof": modern openssl has a "no-rc4" configure option. All the security folks will say you should set that since, well, RC4 and 2003. If you use that, the RDP stack no longer works.)

Sebastian Wick

@marcan

> And that abstraction doesn't have stuff like vblank IRQs, or traditional cursor planes, and is quite opinionated about pixel formats and colorspaces

What does it mean not to have a vblank IRQ? I know that on AMD you program some registers and the firmware then does a page flip when it has to but you still know when all of this happens.

How opinionated is it on colorspaces? We want to change how KMS works and implementing policy in firmware sounds horrible

lore.kernel.org/dri-devel/QMer

Hector Martin

@swick We know when page flips happen, we just can't "wait for vblank" *without* an associated page flip which X (only) wants.

In principle we could emulate that, but we tried and it was a mess and made things worse.

Re colorspaces, they are intertwined with pixel formats. Only some combinations work. For example, we can only get the raw native full gamut RGB with 10-bit pixel formats, 8-bit pixel formats are stuck at sRGB. And then some other options for 10-bit modes are using a "wide RGB" convention where the components go from -0.75 to 1.25 with 0 at 384 and 1 at 895, where the primaries are for some narrower colorspace (sRGB or BT.2020 or whatever) but then you can go out of gamut with the extra headroom to encode more colors. The same colorspace options with 8-bit pixel formats don't have the extra headroom.

And then only some of these options work for some hardware planes.
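That "wide RGB" convention can be sketched as a plain linear code-to-value mapping, inferred from the two anchor points given (0 at code 384, 1 at code 895); the exact encoding is an assumption from the description:

```python
def wide_rgb_value(code10):
    """Map a 10-bit code (0..1023) to a component value: 384 -> 0.0, 895 -> 1.0."""
    return (code10 - 384) / (895 - 384)

# The full 10-bit code range spans roughly [-0.75, +1.25], leaving
# headroom to encode out-of-gamut colors relative to the primaries.
assert wide_rgb_value(384) == 0.0
assert wide_rgb_value(895) == 1.0
assert -0.76 < wide_rgb_value(0) < -0.75
assert 1.25 < wide_rgb_value(1023) < 1.26
```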

Sebastian Wick

@marcan Pixel format and color space being intertwined is already a big yikes and can't be expressed anywhere. The other stuff sounds like you attach some colorspace metadata to a plane and the conversions and output are opaque. Won't work with the new KMS API.

That's what you get when you design hardware for a specific compositor, I guess...

Hector Martin

@swick The actual API we get is pixel format, colorspace, and EOTF per plane. It's just that only some combinations work or the interpretation of the range of the primaries changes with the bit width.

We might be able to emulate a bunch of the missing combinations by manually setting color transformation matrices in the driver, which is probably not *too* horrible but not ideal.

Most of this is firmware limitations, but although we can poke the hardware registers directly, that's a giant can of worms and we have no good way to step on top of the firmware like that. And throwing away the firmware is possible but out of the question, because we have no tooling to reverse engineer below it and it does a *huge* amount of work for us. As quirky as the firmware interface is and how painful it's been to deal with (marshalled C++ method calls, seriously), trying to reimplement everything ourselves would be way worse.

Sebastian Wick

@marcan The intertwining thing is really annoying but you might be able to work around it somehow.

The way it does color management, on the other hand, is an issue. Color conversions are not well defined and contain policy. Compositors have a policy for their shader path, and when they offload to KMS, which now has another policy, you can see visual glitches from offloading. So either compositors implement whatever policy the specific KMS driver has, or they just don't offload things.

Sebastian Wick

@marcan Nvidia already has a bit of a silly color pipeline but at least it is well-defined. From your description you're working with a completely opaque system and don't even have enough information to replicate it in shaders, so opportunistic offloading won't work at all without glitches.

Hector Martin

@swick I mean I spent a bunch of time looking at color ramps to work this out so far, and I'm pretty sure we can characterize whatever is going on to the point we can replicate it in shaders if need be.

Like we're doing 2-stage thermal capacity/resistance modeling for speaker voice coil power dissipation, it's not our first rodeo with characterizing hardware :P

Hector Martin

@swick Come to think of it, I should stick HDMI capture cards onto the machines in the CI test rack I plan on building. Ideally ones where I can hack around the EDID. That will let me add the display pipes to the CI matrix, including end-to-end color stuff :)

Sebastian Wick

@marcan Not sure how reliable a capture card will be. We really want chamelium boards in the freedesktop CI tbh.

Hector Martin

@swick Oh, I know it's a crapshoot, I've gone through a lot of them already (and hacked some firmwares). I'm fairly confident I can put something together that works reasonably enough though.

Chamelium sounds nice, is that available for purchase? I have an NeTV2 lying around which should be able to do very controlled tests too.

Sebastian Wick replied to Hector

@marcan AFAIK you can't buy the Chamelium boards. They have an email address on the site so you can try to contact them (but they ignored me when I wrote). Others here might know more.

Hector Martin replied to Sebastian

@swick Ah, I'll probably go the NeTV2 route then. Should be able to effectively do the same thing Chamelium does, and you can actually buy that (and I have one already, plus I'm friends with the guy who designed it).

Sebastian Wick replied to Hector

@marcan Please share if you manage to do something useful with it! It won't be enough to test HDR but might be enough for everything else, so I'm interested.

Hector Martin replied to Sebastian

@swick I don't see why HDR wouldn't work? It's open source, you can make the firmware grab the raw bits if need be. Obviously has bandwidth limitations but as long as you don't need to test the corner of the resolution/color depth matrix...

Sebastian Wick replied to Hector

@marcan Thinking more about the metadata, InfoFrames, VSC SDP, ...

But sure, you can still test if the pixels are as expected, no matter if it's SDR or HDR, and that's already really useful.

Hector Martin replied to Sebastian

@swick We can capture all the raw metadata too, it's an FPGA. I'm actually thinking of just having it dump the entire frame bitstream and doing all the decoding in software, so we can have really fancy diagnostics for CI.

Sebastian Wick

@marcan That sounds much better than I initially thought and yeah, it should be possible to expose in the new API then.

It might not be of much use though. The more specific the conversions are the less likely it will match whatever the compositor chooses to do.

And if you can't give us a no-op/passthrough path then we're back to square one because then the policy implemented with shaders in the compositor will get mangled by the scanout color pipeline.

Hector Martin

@swick The pass-through path is 10bit native gamut RGB, which we do have and is what KWin-Wayland picks by default these days. We just don't have it for 8-bit formats (which only the TTY and X seem to really love to use).

Hector Martin

@swick In principle as long as whatever color management we expose is well specified it should work, no? Either we can implement standard options that are strictly specified and/or we can expose strictly specified CTM and LUTs or if we can't we just don't expose any of it and let the compositor do it in software.

Sebastian Wick

@marcan Yes, whatever is well-specified can be exposed but if the hardware/firmware does anything to the color at any point that is not well-specified the whole system falls apart. From your description it sounds like you specify the input and output and the rest is opaque but I would be very happy if that's not the case.

PathWars

@marcan

"Find X, where X is a summary of some interesting history!"

Resuna

@marcan All that matters to me is the X11 protocol on the wire; I don't care how it's implemented. But I use remote X11 regularly.

Resuna

@marcan Doesn’t that still need 2d acceleration?

Hector Martin

@resuna No, that's what Glamor is for, which is part of XWayland.

Resuna

@marcan So this is basically an implementation detail.

Hector Martin

@resuna Yes, that's the point. The X11 protocol still works just fine with XWayland and will continue to work forever for client apps (which is all you care about for remote X11).

John Reck

@marcan by "That all works well with modern Wayland compositors, which use KMS abstractions that are a pretty good match for this model (it's the future and every other platform is moving in this direction)." Did you really mean "this is what other platforms did over a decade ago"? Who is still "moving" in this direction other than Linux X vs. Wayland?

It's kinda sad how stuck in the long obsolete past Linux is here...

Quentin Minster

@marcan Oh that explains the major headache anytime I've wanted to get display stuff to work cleanly on my Ivy Bridge desktop…
If I may, how does all this interact with VDPAU/VAAPI? As I recall, these also played a role in the headaches.

Hector Martin

@laomaiweng That's a whole other mess TBH, which I'm trying to stay as far away from as possible because our video decoders have nothing to do with the GPU and ideally don't need those kinds of APIs at all once we support them.

rexum

@marcan imagine fitting this on twitter :kappa:

James Harvey

@marcan I've given Wayland a go because of these posts and it's surprising how actually usable it is now. I'm on KDE, so it's been nice seeing their Wayland support get truly good (I tried a year or so ago and it wasn't great iirc). I don't know if it's placebo, but scrolling in Firefox feels noticeably smoother.

The only issue I've run into so far is that Chromium programs run at 60fps instead of 144fps. Having to set environment variables for a load of stuff is also annoying, but I guess that's the joys of Arch 🙃

Kate 🏳️‍⚧️

@jmshrv @marcan Same for me. I tried Wayland on KDE a few times over the last years, but there was always something that bugged me (e.g. all windows were 0x0 in size after waking up from suspend). I just switched again because of these posts and now everything works really well without any problems!

Jeff Fortin T.

@marcan For those who had not seen it, this hilarious & educational talk by Daniel Stone in 2013 is still very insightful today to understand what led @XOrgFoundation developers to create #Wayland, and why nobody wants to touch the #Xorg / #X11 codebase anymore even with a 10-foot pole and hazmat suit ☣️ youtube.com/watch?v=GWQh_DmDLK

TheBuell

@marcan Does wayland now handle apps run remotely on a different machine? Not sure if that now works? I'd like to switch to Wayland but held off because I was told that I could not run apps using ssh -Y on the remote machine under Wayland.

Andrew Theken

@marcan Firstly, thanks for all the work you appear to have done on X over the years. This is an interesting post that I vaguely understand, but not entirely. :-)

My primary use case for X11 is still to pipe a gui to xquartz on macos over ssh.

Do you have any knowledge about how I could move this workload to Wayland? (I'm excited for Wayland, but not sure how it compares to my existing case)

@Migueldeicaza's tweet from two years ago is basically the only thing that turns up on google about it

otheorange_tag

@marcan I work on the lowest possible level, and loved it when the frame buffer was directly usable and not some fakey simulated thing. I now rely on xcb and just the copy/mapping/image stuff. XCB works on wayland (grammatically spoken or not) except for resizing; if that gets fixed I will use wayland. Yeah I know, it's not an API. Comment remains. Love not using fonts, love not using anything but sockets, see pinned stuff

Benjamin

@marcan I see all of those issues, and as a programmer, I agree that sometimes the solution to issues is "scrap everything and do it right from scratch".
However, as a user... I don't care. I just want my programs to work. And that includes legacy stuff that may be binary-only and not able to "just compile against Wayland".

For me as a user, acceptance will stand and fall with backwards-compatibility.

Michael Guntsche

@marcan one of the things I am missing from Wayland is ICC color profile support. While I know that the X implementation is not perfect, it is at least something.

Koopa 🌸

@marcan as someone who is still daily driving an Ivy Bridge system (haven't yet migrated to Wayland), this is very illuminating

i have tried both the Intel and modesetting DDX, and both have issues (Intel has some graphical artifacts, modesetting is too quick to "black-out" a paused video; both have screen tearing iirc)

Val Packett

@marcan@social.treehouse.systems AMD didn't really continue maintaining anything from the old X days btw… IIRC Glamor actually kinda came from the AMD side. Either way the latest xf86 "driver" -amdgpu is basically just -modesetting with extra hacks bolted on

DELETED

@marcan
That is an amazing account, thanks. I still think X11 receives too much shit because of the state of its only functioning open-source reference implementation.

We also still do not have a well composed and integrated video terminal system, one that handles all the device enumeration and multiplexing with the kernel. In the old Kernel Graphics Interface (KGI) project from 1999, KMS was already a thing, but it was integrated with the kernel (in fact, you only needed to implement a micro driver in the kernel for modesetting) and with the vty system. So KMS was not really just handed off to the GPU, or X; the vty system managed the various contexts for mode setting, and then mapped GPU and input contexts together. This multiplexing caused the GPU to mode set, but if the vty already had a context, that was restored first. All this happened from bootloader to window server. X was then basically a display server/window manager writing to a machine-independent surface/buffer, which could be accelerated or not. KGI had a portable abstraction for drivers, though it is probably very dated now. So once a driver was written for KGI, it was portable to any system that provided the host environment for KGI.

:yell: Ibly 🏳️‍⚧️ θΔ

@marcan ah, that actually explains to me why X development has slowed down, now i know!

hoping wayland eventually gets the support for my favorite music players that still don't work... or more accurately have it go vice versa. if that doesn't happen maybe i'll start writing my own damn winamp 2.x-compatible music player lol

Simon Brooke

@marcan Detail disagreement: "if X crashed it had a high chance of making your whole machine unusable".

No, that really isn't true. I was using X11 on BSD from 1988, on System V.4 from 1989, on UnixWare from (I think) 1992, and on Linux from 1993. I don't recall X crashing on BSD or System V.4 at all. On Linux, X was pretty flaky in those days and crashed not infrequently; on UnixWare it did crash, but not often. I don't recall any X crash that locked the machine.

Wohao_Gaster :fatyoshi:

@marcan Don't call asahi broken until you stop using features that are broken because they're outdated

gunstick

@marcan I am all in for Wayland everywhere. I just miss the seamless display forwarding you had with X. Like just adding -X to your ssh and the graphics magically appear from the remote machine.
As far as I know, nothing like that exists yet in Wayland.
