Hector Martin

Found the DMP disable chicken bit. It's HID11_EL1<30> (at least on M2).

So yeah, as I predicted, GoFetch is entirely patchable. I'll write up a patch for Linux to hook it up as a CPU security bug workaround.

(HID4_EL1<4> also works, but we have a name for that and it looks like a big hammer: HID4_FORCE_CPU_OLDEST_IN_ORDER)

Code here: github.com/AsahiLinux/m1n1/blo (Thanks to @dkohlbre for the userspace C version this is based off of!)

One interesting finding is that the DMP is already disabled in EL2 (and presumably EL1), it only works in EL0. So it looks like the CPU designers already had some idea that it is a security liability, and chose to hard-disable it in kernel mode. This means kernel-mode crypto on Linux is already intrinsically safe.
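
For the curious, here's a minimal sketch of what flipping such a bit looks like in C with inline asm. This is illustrative only, not the actual m1n1 code (see the link above): the register encoding is an assumption based on m1n1's IMP-DEF register definitions, and it can only run at EL1 or higher.

#include <stdint.h>

/* Assumed encoding for Apple's IMP-DEF HID11_EL1 register
   (op0=3, op1=0, CRn=15, CRm=11, op2=0); verify against m1n1. */
#define HID11_EL1 "s3_0_c15_c11_0"

static inline void disable_dmp(void)
{
    uint64_t v;
    __asm__ volatile("mrs %0, " HID11_EL1 : "=r"(v));
    v |= 1ULL << 30;  /* DMP disable chicken bit (M2) */
    __asm__ volatile("msr " HID11_EL1 ", %0" :: "r"(v));
    __asm__ volatile("isb");  /* ensure the write takes effect */
}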

Christian Horn

@marcan I hope it gets implemented so mitigations=off can disable it, this will allow performance/energy consumption comparisons.

meta

@marcan @dkohlbre I would've thought that poking all the HID bits is a good way to make a brick!

Glyph

@marcan @dkohlbre @filippo does this reach back to M1 as well? I know M3 already had something even when GoFetch was first announced

Hector Martin

This is disappointing.

theverge.com/2024/3/4/24090357

On one hand, the Yuzu folks had it coming with all the thinly veiled promotion of game piracy (if you can even call it thinly veiled). There's a reason I banned everyone even remotely talking about that back when I was part of the Wii homebrew community.

On the other, the proposed settlement asserts that the *emulator* itself is a DMCA violation (not just the conduct of those developing it), and that is an absolutely ridiculous idea. I *believe* this doesn't actually set any legal precedent (since it wasn't litigated, but IANAL), so other emulators should still be safe... but still, really not a good look.

I'm so glad I'm no longer in the game hacking world and having to deal with this kind of stuff...

Hector Martin

I just reminded myself of the extra fun shim shenanigans going on in Asahi Fedora. I've previously described the Asahi Linux boot chain as:

SecureROM -> iBoot1 -> iBoot2 -> m1n1 stage 1 -> m1n1 stage 2 -> u-boot -> GRUB -> Linux

Which is already amusing enough, but Fedora throws in another twist. To support other platforms with "interesting" secure boot requirements (cough Microsoft-controlled certificates cough), Fedora ships with shim to handle handoff to GRUB and allow users to control their UEFI secure boot keys. But it's even more fun than that, because our installs don't support UEFI variables and instead have a 1:1 mapping between EFI system partitions and OSes, relying on the default removable media boot path.

On an installed Asahi Fedora system, you get this:

EFI
EFI/BOOT
EFI/BOOT/BOOTAA64.EFI
EFI/BOOT/fbaa64.efi
EFI/BOOT/mmaa64.efi
EFI/fedora
EFI/fedora/BOOTAA64.CSV
EFI/fedora/grub.cfg
EFI/fedora/grubaa64.efi
EFI/fedora/mmaa64.efi
EFI/fedora/shim.efi
EFI/fedora/shimaa64.efi
EFI/fedora/gcdaa64.efi

Here, U-Boot runs BOOT/BOOTAA64.EFI, which is a copy of shim. shim is designed to boot grubaa64.efi from the same directory. But since it can't find it there (it's in fedora), it goes ahead and loads fbaa64.efi instead. That's a fallback app that then goes off searching for bootable OSes, looking for CSV files in every subfolder of EFI. It finds fedora/BOOTAA64.CSV, which is a UTF-16 CSV file that points it to shimaa64.efi. That is, itself, another identical copy of shim, but this time it's booted from the right fedora directory. The fallback app tries to configure this as a boot image in the EFI variables, but since we don't support those, that fails and it continues anyway. Then this second fedora/shimaa64.efi runs and finally finds fedora/grubaa64.efi, which is the GRUB core image, and booting continues.
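
(For illustration: a shim boot CSV is a single UTF-16 line of the form loader,label,options,description. The Fedora entry presumably reads something like the line below; this is a sketch from the documented format, not a verbatim dump.)

shimaa64.efi,Fedora,,This is the boot entry for Fedora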

So, on Fedora Asahi, the boot chain is actually:

SecureROM -> iBoot1 -> iBoot2 -> m1n1 stage 1 -> m1n1 stage 2 -> u-boot -> shim (first copy) -> fallback -> shim (second copy) -> GRUB -> Linux

Every. Time. (Thankfully, these extra steps go fast so it doesn't materially affect boot time; the major bootloader time contributors are kernel/initramfs load time and U-Boot USB init time right now).

The shim stuff is, of course, completely useless for Asahi Linux, since there are no built-in platform keys or any of that UEFI secure boot nonsense. Once we do support secure boot, the distro update handoff will be at the m1n1 stage 1/2 boundary, well before any of this stuff; the UEFI layer might as well use a fixed distro key/cert that is known to always work, and the m1n1 stage 2 signing key (which is not going to use UEFI secure boot, it will be its own simpler thing) would be set at initial install time to whatever the distro needs. But since this mechanism exists to support other platforms, we didn't want to diverge and attempt to "clean this up" further, since that just sets us up for more weird breakage in the future. Blame Microsoft for this extra mess...

But it's even sillier: this whole UEFI secure boot mechanism isn't supported on Fedora aarch64 at all (the shim isn't actually signed, and GRUB/fallback are signed with a test certificate), so it is completely useless on every aarch64 platform. It only gets built and signed properly on x86; aarch64 just inherits it 🙃​

In the early Fedora Asahi days I noticed our Kiwi build stuff was dumping the GRUB core image into EFI/BOOT too, which bypassed this fallback mechanism... but also meant that the GRUB core image didn't get updated when GRUB got updated, which is a ticking time bomb. Thankfully we noticed that and got rid of it. So now there's a silly boot chain, but it should be safe for normal distro updates.

(As for the other stuff? mmaa64.efi is the MokManager that is useless to us, and shim.efi is just a copy of shim for some reason, and gcdaa64.efi is just a copy of grub for some reason. No idea why those two exist.)

elly
@marcan I wonder: why so many layers though?
I would expect that you can do BROM -> iB1 -> iB2 -> ST3 from storage (U-Boot).

We managed to do the same thing with ARM64 Chromebooks (although porting drivers for different platforms from Linux is a PITA). It looks something like this:

BootROM -> BL2 (Coreboot) -> BL31 (TF-A) -> Coreboot (drops execution level to EL2) -> DepthCharge -> ELF from storage (U-Boot/LinuxBoot).

I also wonder if you could get NVRAM working. U-Boot supports it via OP-TEE, but no idea if/how Apple implemented it.
Demi Marie Obenour

@marcan Would it be possible to directly boot a kernel from m1n1, or even include the kernel image in m1n1? If Qubes OS ever gets Apple silicon support, I want to keep the secure boot chain as short as possible. Ideally, it would be no longer than Apple’s chain.

Neal Gompa (ニール・ゴンパ) :fedora:

@marcan `gcdaa64.efi` is a copy of GRUB with cdboot drivers built in. I'm not actually sure why they're split, maybe @vathpela knows?

Hector Martin

Today I learned that YouTube is deliberately crippling Firefox on Asahi Linux. It will give you lowered video resolutions. If you just replace "aarch64" with "x86_64" in the UA, suddenly you get 4K and everything.

They literally have a test for "is ARM", and if it matches, they assume your system has garbage performance and cripple the available formats/codecs. I checked the code.

Logic: Quality 1080 by default. If your machine has 2 or fewer cores, quality 480. If anything ARM, quality 240. Yes, Google thinks all ARM machines are 5 times worse than Intel machines, even if you have 20 cores or something.
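
Paraphrased as code (my reconstruction of the logic described above; the names are made up, and the real thing is minified JavaScript):

#include <stdbool.h>

/* Reconstruction of the described heuristic, not YouTube's actual code. */
static int default_quality(int num_cores, bool ua_is_arm)
{
    if (ua_is_arm)
        return 240;   /* any ARM UA: assumed garbage performance */
    if (num_cores <= 2)
        return 480;
    return 1080;      /* everyone else: 1080 by default */
}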

Why does this not affect Chromium? Because Chromium on aarch64 pretends to be x86_64:

Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36

🤦‍♂️​🤦‍♂️​🤦‍♂️​🤦‍♂️​🤦‍♂️​

Welp, guess I'm shipping a user agent override for Firefox on Fedora to pretend to be x86.

Eckes :mastodon:

@marcan I guess that’s not about the CPU but the quality of drivers for hardware decoding

Charles U. Farley

@marcan Google disabled U2F based on user agent for a long time as well.

Hector Martin

KDE and GNOME are both supported DEs for Fedora Asahi Remix, but there's still one issue that makes it impossible for me to honestly recommend GNOME to anyone trying out Linux on these platforms for the first time: GNOME does not support fractional scaling out of the box, and it is actively broken with XWayland if you enable it by editing the configs directly.

I consider proper HiDPI support with fractional scaling a fundamental requirement for Apple machines. It's a basic macOS feature, and not having it on Linux is just silly. It doesn't even need to be perfect fractional scaling support (integer scaling + display output rescaling is fine, it's what macOS does AFAIK)... but it needs to be there.

In GNOME you can enable it via the command line (sigh...), but if you do, XWayland apps just become a blurry mess since they render at 100%. This includes apps like Thunderbird out of the box.
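
(For reference, the command-line toggle in question is mutter's experimental-features key; this is the usual incantation, though details may vary by GNOME version:)

$ gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"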

KDE does this right, within the constraints of the legacy X11 protocol: the X11 scale is set to the largest of your monitor scales, so X11 apps look crisp on at least one monitor (even crisper than on Wayland at non-integer scales, at least until the native Wayland fractional scaling stuff catches up) and only minimally soft on the others (typical downscaling softness, same thing macOS does and same thing you get on Wayland for most apps today).

KDE had that problem way back when we first shipped the Arch alpha, which is why that was using native Xorg. They fixed it soon thereafter, so now KDE Wayland works as intended. But GNOME still hasn't caught up, and AIUI they don't even plan to do what KDE did...

For folks who are happy with GNOME, of course, we do consider it a supported desktop environment and will debug issues that crop up related to our platform drivers/etc. But I just... can't in good conscience tell people to try GNOME first as a first-time experience on Apple Silicon, not when the out-of-the-box experience is just "200% or 100%, nothing in between, unless you hack configs manually and then a bunch of apps become horribly blurry".

* Note: By fractional scaling, I mean effective fractional scaling, not native fractional scaling. Native fractional scaling is brand new in Wayland and stuff is still catching up, but even macOS doesn't do that either. The important part is that things are the right size (and you have more than integer sizes available), and that nothing is ever upscaled from a lower pixel density, which is what you get with KDE today.

Dmitry Borodaenko

@marcan I've been using 150% scaling in GNOME for about 3 years now, what did I miss?

Fabio Valentini

@marcan yeah ... the fractional scaling implementation in GNOME is really sad and broken 😞 we wanted to enable it by default for Fedora 39 but it turned out to break rendering of Xwayland applications, *always* upscaling from 1x, even on "integer" factors ... (side effect: fullscreen applications like games only "see" a scaled framebuffer, so they can't render at full resolution even if you wanted them to)

the "let xwayland windows scale themselves" option in KDE is not perfect, but much better 😐

Sonny

@marcan the reason GNOME doesn't enable fractional scaling by default is that none of the solutions for XWayland were considered satisfying until now. There is more to it, but that's one of the main issues.

We are looking into using rootful Xwayland to solve the problem.

See "Are we done yet?" in ofourdan.blogspot.com/2023/11/ but I recommend reading both parts.

Hector Martin

Idle thought spun off from the prior discussion: If a bug reporter doesn't just report the bug, but debugs it and tells you exactly what went wrong and why the code is broken, then you should credit them with Co-developed-by, not Reported-by.

Debugging is just as much a part of development as actually writing the code is, and deserves equal credit. A strict reading of the kernel contribution guidelines doesn't imply this would be incorrect usage either. The only catch is the docs say C-d-b needs to be followed by a signoff, which would be unnecessary in this case (as that is about *copyright ownership/licensing* which only applies to writing the actual code), but an extra signoff never hurt anybody so shrug.
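
Concretely, with placeholder names, the trailer block would look like this; per the kernel docs, Co-developed-by must be immediately followed by the co-author's Signed-off-by:

Co-developed-by: Jane Reporter <jane@example.org>
Signed-off-by: Jane Reporter <jane@example.org>
Signed-off-by: Patch Author <author@example.org>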

joey

@marcan time for a new tag: Debugged-by:

Ayke van Laethem

@marcan personally if I write a patch like this I will at least credit the bug reporter with finding a fix. As you said, they did most of the work and deserve at least equal credit.

Man2Dev :idle:

@marcan I agree, and giving some credit would create an incentive to write good bug reports

Hector Martin

Calling Sonoma users: We have had a few reports of Sonoma upgrades causing issues with Asahi (installed before or after the upgrade). These issues may be related to reports of Sonoma corrupting System recoveryOS, which is apparently a known issue in general (happens in rare cases). This is all almost certainly caused by one or more Apple bugs.

If you are running Sonoma (with or without Asahi), would you mind helping us by testing your System recoveryOS? To do so, fully power down your machine, wait a few seconds, and then quickly tap and hold the power button (quickly press, release, press and hold; this is a double press, not the usual single press and hold). If you get the boot picker as usual, then your System recoveryOS is OK. If you get a "please recover me" exclamation mark screen, your System recoveryOS is broken.

If you have the bug, please let me know, as I would like to investigate what is going wrong and whether we can detect it somehow (or maybe even write a fixer tool). Ideally I'd want to either get temporary SSH access to macOS or dumps of files in certain partitions.

The reason why this is important is that there is a possibly related issue where Sonoma boot firmware won't boot our 13.5 Asahi macOS stubs, including recovery mode. That means that you can get stuck not being able to use the boot picker, and if your System recoveryOS is also broken, then there is no way to recover locally (you need a DFU revive), which sucks. I want to at the very least detect this bad state and refuse installation if the installer detects your recoveryOS is borked.

Your machine should go back to normal after a forced shutdown and reboot from the exclamation mark screen, as long as your regular boot OS and paired recoveryOS are fine.

Sébastien de Graffenried

@marcan when I do the double press, I get “continue holding for startup options”, then “launching startup options” then a black screen. The computer seems to be on, I have to hold the power button to shut it down before I can start it up again. Could that be it?

Hayato Fujii

@marcan Is System recoveryOS the one which doesn't allow changes to Startup Security?

Hanchin Hsieh

@marcan@social.treehouse.systems I'm a Sonoma user, and my M1 Max recoveryOS is fine after testing.

Hector Martin

For Asahi, I contribute code across the entire stack: from bootloaders to the kernels to libraries to desktop environments to web browsers. I keep saying the Linux kernel development process is horrible, so what about the rest?

Let's talk about a project that does things right. KDE is a project of comparable scale to the Linux kernel. Here is its patch submission process:

1. I write the patch
2. I open the merge request *
3. The maintainer directly suggests changes using the interface, fully integrated with GitLab.
4. I click a button to accept the changes. They get turned into Git commits automatically. I don't even have to manually pull or apply patches.
5. I ask about Git history preferences, get told to squash them.
6. git pull, git rebase -i origin/main, clickety click, git push -f (I wouldn't be surprised if there's a way to do this in the UI too and I just don't know)
7. They merge my MR.

The whole thing took like 5 minutes of mental energy total, once the initial patch was ready.

Seriously, look at some of the timestamps. It wasn't even 15 minutes of wall clock time from the first suggestion to the final commit. Less than an hour from opening the MR. And then it got merged the next day.

And this is why I love contributing to KDE and why they're our flagship desktop environment. :)

* This of course does not consider one-time setup costs. For KDE, that was opening up an Invent account, which takes 5 minutes, and I didn't have to learn anything because I already know GitLab, and anyone familiar with any forge will quickly find their way around it anyway. The kernel, of course, requires you to learn arcane processes, sign up for one or more mailing lists, set up email filters, discover tools like b4 to make your life less miserable, manually configure everything, set up your email client with manual config overrides to make it handle formatting properly, etc., and none of that is useful for any project other than the small handful that insist on following the kernel model.

Pavol Babinčák

@marcan add a comment "/rebase" on your MR and GitLab will take care of the rest.

Just a tip, if you would like to save a couple of commands in the shell. 🙂

Haelwenn /элвэн/ :triskell:
@marcan
Meanwhile FreeDesktop and the like:
- Get stuck on their GitLab because they locked it down so much to counter spam that you'd need to ask each project for access (of course with the "Request Access" button of GitLab not being appropriate)
- Send the patch to their mailing-list
- A few days later, have to explain that their setup is near impossible to use, and also send a URL to the commit in case they don't do email

Bare mailing-list sucks (you can have email with CIs btw) but having to interact with the various Gitlab setups is horribly annoying.
And if there's one thing email has perfected over the ages it's spam mitigation for everyone, while on Gitlab it's a proprietary Enterprise addon.
Drew DeVault

@marcan
>KDE is a project of comparable scale to the Linux kernel.

lol tho

Hector Martin

We've been using Matrix a lot for Fedora Asahi and I really want to like it but just... sigh. It's so clunky and broken in random ways.

Undiagnosable encryption failures/desyncs, notifications not arriving, mismatched feature support between clients, ...

The flagship Element client is a bloatfest, but third party clients always seem to work worse in some way, and even Element iOS is weirdly broken vs. the desktop/web version.

It's really sad that Discord basically does everything better.

Show previous comments
Thib

Hej @marcan, Matrix (and most particularly Element) has accumulated tech debt, but it's well on its way to solving it :)

The Matrix Foundation is going full steam on the matrix-rust-sdk to have one solid implementation that gives a consistent (good) experience across clients.

It's too early to see the results of this work, but we're well aware of the problems and doing our best to address them sustainably.

Centralised systems that sell your data are easier to maintain, but we keep fighting!

n0toose

@marcan tbh, I now seem to understand why Moxie was "uncomfortable with third party clients" on Signal when called on to take an official stand, without acting against e.g. soft forks like molly.im

Filip 🌱 ❄️ 🦀

@marcan @GrapheneOS also echoed those feelings. IIRC @matrix answered with some assurances that they are working on addressing many of their pain points

Hector Martin

So apparently dang and the HN crowd are so upset that I wrote some messages for HN visitors to our website that they have now banned my home IP address 🙃

Yes, seriously. I get 403s from any device on my home connection, and yet it works fine on 4G.

Just when you thought they couldn't get pettier. And no, I haven't been doing any scraping/scripting/anything sus.

[HUGS] getimiskon :OwOid: :blobcatgooglywtf: :verified_neko:
@marcan having to deal with them seems like it's a really huge pain :blobcatsweats:
Drew DeVault

@marcan I also ran into something like this over the weekend, though tbf I *was* doing research to dig up dirt on HN

Hector Martin

OH MY FUCKING GOD.

Pictured: Apple's M2 MacBook Air 13" speaker response (measured with a mic), and the response you get when you zero out every 128th sample of a sine sweep.

They have a stupid off-by-one bug in the middle of their bass enhancer AND NOBODY NOTICED NOR FIXED IT IN OVER A YEAR.

So instead of this (for a 128-sample block size):

for (int sample = 0; sample <= 127; sample++)
// process sample (runs for all 128 samples, 0 through 127)

They did this:

for (int sample = 0; sample < 127; sample++)
// process sample (runs only 127 times; the last sample of every 128-sample block is never processed)

Legendary audio engineering there Apple.

We can now very confidently say that the audio quality of Asahi Linux will be better than Apple's. Because we don't have blatant, in-your-face off-by-one bugs in our DSP, and we actually check the output to make sure it's good 😂​

FFS, and people praise them for audio quality. I get it, the bar is so low it's buried underground for just about every other laptop, but come on...

Edit: replaced gif with video because Mastodon is choking on the animation duration...

Edit 2: Update below; I can repro this across a large range of versions on this machine but none of the other models I've tried so far. It is definitely a bug, very very obvious to the ear, and seems unique to this machine model.

Edit 3: Still there in Sonoma, this is a current bug.

Bill Zaumen

@marcan Only a year before anyone noticed? I once reported a bug in YACC - an off-by-one error resulting in a stack overflow - and nobody had noticed it for a good 10 years! The bug was in the generated code, apparently part of a template, and I only found it because I was testing a parser by throwing unusual cases at it.

argv minus one

@marcan Another fine reason to Rewrite It In #Rust ™: iterators prevent a lot of off-by-one bugs.

Hector Martin

Can anyone identify this chip? It's supposed to be a 155 Mbit/s fiber optic transceiver, but I'm not sure I can read the logo (GZJL?) and "7901" isn't finding anything...

Growlph Ibex

@marcan Wild, I can't find anything for any variation of that manufacturer logo...

Hector Martin

Speaking of Unicode identifiers being a stupid idea: I have not seen a single Unicode/punycode URL in my almost 10 years in Japan, in real life.

Not. Once. Not in the hostname portion, not in the path portion. Never.

Nobody wants that nonsense here. Seriously. It's a silly novelty and only creates practical problems (and security issues).

You know how Japanese ads and billboards direct people to complex pages/URLs? They give you a search term to plug into Google.

(To clarify, you do get Unicode terms in path fields for things like wikis, but never as part of URLs people are expected to type out, and I've seriously never seen punycode domains.)

scarcraft

@marcan
Here in Spain we have free .es domains for city councils. I assume it's nearly mandatory in order to obtain certain funds. As an example, my city has logroño.es

Landon Epps

@marcan I do see Unicode domains in search results. They tend to be single-purpose websites geared toward SEO. I bet it's because Google prioritizes a result if the URL matches the search term exactly (e.g. wimax比較.com).

Interestingly [kanji].com is often used in logos, but the actual URL is ASCII. I see 価格.com the most, but there are plenty of examples. The Unicode domain in their logo doesn’t exist, not even to redirect. The actual URL is kakaku.com.

Григорий Клюшников

Cyrillic IDN domains in the .рф TLD are a thing in Russia. They aren't very popular, but they do exist, including on ads.

Hector Martin

Ah yes, top quality discussion on Hacker News. 🙃​

Hector Martin

I'm just going to screenshot a bunch of choice comments to show off HN's legendary moderation, so I can point at this next time someone asks what's so bad about HN.

13 hours ago, flagged but still visible (not dead, so no difference).

Hector Martin

Lol, it took 2 weeks for geohot to go from grandiosely announcing he's going to make ML good on AMD hardware with his own code to giving up after running into some bugs.

geohot.github.io/blog/jekyll/u

pbs.twimg.com/media/Fx0vKkAX0A

Psst - if you want accessible endpoint ML, we're getting pretty close to releasing compute support on our Asahi GPU drivers thanks to @lina and @alyssa's work, and @eiln's work-in-progress Apple Neural Engine driver is already running popular ML models on Asahi Linux.

Hector Martin

I'm going to be really blunt here: if you don't care about trans people, if you even remotely think there's the slightest hint of merit to the blatant genocidal actions that are going on in the US right now, you can fuck right off from my projects, spaces, and communities.

I don't give a fuck about "tech shouldn't be political" garbage takes. Tech is made by people and right-wing legislators in the US are trying to *kill* my colleagues right now. There is no tech without people.

Christmas Tree

@marcan “Tech is made by people and right-wing legislators in the US are trying to *kill* my colleagues right now. There is no tech without people.”

Just blows my mind how people can’t wrap their heads around this very simple fact

Arena Cops 🇺🇦✌

@marcan They don't deserve to be (called) legislators — they're evil opportunists committed to scoring points by targeting & stigmatizing minorities, & inciting antipathy against them.

Ironically, the so-called "GOP" is itself a minority within the U.S., pretending to be a majority in some places by committing organized fraudulent voter suppression & gerrymandering & other methods of electoral fraud.

#StandWithHumanity #HumanityMatters #HumanRights #LGBTQ #StrongerTogether #BanTheGOP #RespectLife

Neratas Frosthorn Stormwing

@marcan to say nothing of the significant contributions trans folk have made to tech. Without queer folks, normies wouldn't even have computers.

Hector Martin

Yet another person on Reddit surprised that Asahi Linux compiles stuff way faster than macOS.

"But macOS is so optimized for the hardware!" they all say... except Linux is already way more optimized in general than macOS is, for many workloads!

$ time tar xf linux-6.3.3.tar

macOS on APFS: 6.8 seconds
Linux on ext4: 1.0 seconds

Both on an M1 MacBook Air 13". That's how much faster Linux is at dealing with files than macOS.

The hardware drivers don't matter when you're dealing with pure CPU workloads and an NVMe SSD. We already have cpufreq and share the Linux NVMe core, so there's nothing left to optimize there that is specific to this hardware. The only thing missing is deep CPU idle, which will unlock boost clocks, but only for single-core workloads (multicore compiling is already at its max).

rain 🌦️

@marcan Yeah, this is completely unsurprising. In my experience macOS's technical implementation details are leagues behind Linux's.

Kevin Karhan :verified:

@marcan this reminds me how #Linux also was the first #OS to run on #Itanium and how the #GCC is even the best compiler for that architecture - even better than #Intel's own!

youtube.com/watch?v=3oxrybkd7M

But yeah, the only thing that would make @AsahiLinux even faster on #AppleSilicon would be if compiling were done entirely in RAM wherever possible, leveraging 10x-1000x more IOPS and lower latency.

CaroCaronte

@marcan
ok but you're not doing (almost) any computation here; tar does not even compress anything, you're just barely reading and writing a set of files onto the disk. So your point is that ext4 is faster than APFS, which might be true; APFS is not known for speed but for reliability on SSDs, encryption, snapshot support, etc...

I personally like both mac and linux (and use windows at work) so I'm happy either way
#crossPlatformHappiness

Hector Martin

A bit of (simplified) X history and how we got here.

Back in the 90s and 2000s, X was running display drivers directly in userspace. That was a terrible idea, and made interop between X and the TTY layer a nightmare. It also meant you needed to write X drivers for everything. And it meant X had to run as root. And that if X crashed it had a high chance of making your whole machine unusable.

Then along came KMS, which moved modesetting into the kernel, along with a common API that obsoleted the need for GPU-specific drivers to display stuff. But X kept on using GPU-specific drivers. Why? Because X relies on 2D acceleration, a concept that doesn't even exist any more in modern hardware, so it still needed GPU-specific drivers to implement that.

The X developers of course realized that modern hardware couldn't do 2D any more, so along came Glamor, which implements X's three decades of 2D acceleration APIs on top of OpenGL. Now you could run X on any modern GPU with 3D drivers.

And so finally we could run X without any GPU-specific drivers, but since X still wants there to be "a driver", along came xf86-video-modesetting, which was supposed to be the future. It was intended to work on any modern GPU with Mesa/KMS drivers.

That was in 2015. And here's the problem: X was already dying by then. Modesetting sucked. Intel deprecated their GPU-specific DDX driver and it started bitrotting, but modesetting couldn't even handle tear-free output until earlier this year (2023, 8 whole years later). Just ask any Intel user of the Ivy Bridge/Haswell era what a mess it all is. Meanwhile Nvidia and AMD kept maintaining their respective DDX drivers and largely insulating users from the slow death of core X, so people thought this was a platform/vendor thing, even though X had what was supposed to be a platform-neutral solution that just wasn't up to par.

And so when other platforms like ARM systems came around, we got stuck with modesetting. Nobody wants to write an X DDX. Nobody even knows how outside of people who have done it in the past, and those people are burned out. So X will *always* be stuck being an inferior experience if you're not AMD or Nvidia, because the core common code that's supposed to handle it all just doesn't cut it.

On top of that, ARM platforms have to deal with separate display and render devices, which is something modesetting can't handle automatically. So now we need platform-specific X config files to make it work.
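
(For example, something along these lines; a minimal sketch using the modesetting driver's kmsdev option, with device paths that differ per platform:)

Section "Device"
    Identifier "display"
    Driver "modesetting"
    # Display-only KMS device; rendering happens on a separate render node
    Option "kmsdev" "/dev/dri/card0"
EndSection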

And then there's us. When Apple designed the M1, they decided to put a coprocessor CPU in the display controller. And instead of running the display driver in macOS, they moved most of it to firmware. That means that from Linux's point of view, we're not running on bare metal, we're running on top of an abstraction intended for macOS' compositor. And that abstraction doesn't have stuff like vblank IRQs, or traditional cursor planes, and is quite opinionated about pixel formats and colorspaces. That all works well with modern Wayland compositors, which use KMS abstractions that are a pretty good match for this model (it's the future and every other platform is moving in this direction).

But X and its modesetting driver are stuck in the past. It tries to do ridiculous things like draw directly into the visible framebuffer instead of a back buffer, or expect there to be a "vblank IRQ" even though you don't need one any more. It implements a software fallback for when there is no hardware cursor plane, but the code is broken and it flickers. And so on. These are all problems, legacy nonsense, and bugs that are part of core X. They just happen to hurt smaller platforms more, and they particularly hurt us.

That's not even getting into fundamental issues with the core X protocol, like how it can't see the Fn key on Macs because Macs have software Fn keys and that keycode is too large in the evdev keycode table, or how it only has 8 modifiers that are all in use today, and we need one more for Fn. Those things can't be properly fixed without breaking the X11 protocol and clients.

So no, X will never work properly on Asahi. Because it's buggy, it has been buggy for over 8 years, nobody has fixed it in that time, and certainly nobody is going to go fix it now. The attempt at having a vendor-neutral driver was too little too late, and by then momentum was already switching to Wayland. Had continued X development lasted long enough to get modesetting up to par 8 years ago, the story with Asahi today would be different. But it didn't, and now here we are, and there is nothing left to be done.

So please, use Wayland on Asahi. You only get a pass on using X if you need accessibility features that aren't in Wayland yet.

Simon Brooke

@marcan Detail disagreement: "if X crashed it had a high chance of making your whole machine unusable".

No, that really isn't true. I was using X11 on BSD from 1988, on System V.4 from 1989, on UnixWare from (I think) 1992, and on Linux from 1993. I don't recall X crashing on the BSD or System V.4 at all. on Linux, X was pretty flaky in those days and crashed not infrequently; on UnixWare it did crash, but not often. I don't recall any X crash that locked the machine.

Wohao_Gaster :fatyoshi:

@marcan Don't call asahi broken until you stop using features that are broken because they're outdated

gunstick

@marcan I am all in for Wayland everywhere. I just miss the seamless display forwarding you had with X. Like just adding -X to your ssh and the graphics magically appear from the remote machine.
As far as I know, nothing like that exists in Wayland yet.

Hector Martin

Soooo my previous toots ended up on Phoronix and here come the entitled users saying how dare you tell me to switch to Wayland.

Repeat after me: Xorg is dead. It is unmaintained. It is buggy and those bugs are not getting fixed. *THIS IS FROM ITS OWN DEVELOPERS*. The people previously working on Xorg are now working on Wayland. They are literally part of the same organization FFS.

If you want Xorg to keep working, fix it yourself. Oh, not interested? Nobody else is either. Guess what, if nobody works on it, it will bitrot into oblivion. Nobody has signed up to fix it. No amount of wishful thinking is going to change that. You can keep using it all you like, but unless YOU sign up to maintain it, it's going to die.

Want Xorg to survive? Take over maintenance. We're all waiting.

*crickets*

hapbt

@marcan I’ve been using Wayland for a couple of years now with no Xorg, and I can honestly say it’s finally here: it’s solid enough for daily use, never going back

Be

@marcan oh no toots are ending up on Phoronix now 😨

Hector Martin

Please, please stop using Xorg with Asahi Linux.

It's all but unmaintained, broken in fundamental ways that cannot be fixed, unsuited to modern display hardware (like these machines), and we absolutely do not have the bandwidth to spend time on it.

We strive for a quality desktop on Apple Silicon machines, but we have to pick and choose our battles very carefully, because we can't single-handedly fix all the problems in the entire Linux desktop ecosystem. Yes, some Xorg things might work better on other platforms. That doesn't mean Xorg isn't broken, it means those platforms have spent years working around Xorg's failings. We don't have the time for that. Distributions and major desktop environments are already dropping Xorg support. It's pointless to try to support it well today on a new platform.

XWayland will continue to be supported for legacy client apps, and we do plan to spend time on optimizing the XWayland experience. But for anything that goes beyond "displaying windows" (compositors, IMEs, input management, desktop environments, etc.), please use native Wayland applications, since XWayland will never integrate properly for those things (by design).

Yes, not every random app and feature you use on Xorg will have a Wayland equivalent. Deal with it. The major players in desktop Linux have decided it's time to move on from Xorg, and if you want to go against the tide you're on your own.

We do expect Xorg to continue to function for the bare essentials (i.e. showing a working desktop), but that's it. We won't be working on any features or non-desktop-breaking bugs beyond that.

The only reason we shipped Xorg by default is that Wayland compositors were slower with software rendering. The reverse is true now that we have GPU drivers, and we will be switching all default-Xorg-KDE users to default-Wayland in an update (along with promoting the GPU drivers to the default builds) really soon. At that point Xorg will be relegated to SDDM, and once a native Wayland release of that finally happens, we won't be shipping any usage of the X server any more.

Markus Werle

@marcan slightly off-topic: I am trying to run a Wayland desktop (Ubuntu 22.04 LTS) in a Podman or Docker container, but this fails. Any pointers to helpful resources?

Arvid E. Picciani

@marcan that's unfortunate because Wayland will never work for legacy humans like me who prefer keyboard input

Ewen McNeill

@marcan in case it helps: even Red Hat (in the RHEL 9 release notes) explicitly states that Xorg is deprecated and will be removed in a future major RHEL release.

Linux on the desktop has clearly settled on Wayland (and XWayland as a protocol bridge) as the way forward. AFAICT this has basically been the position for ~5 years: all the desktop support work has been going into Wayland.

“Native X11” is pretty much retrocomputing at this point :-)

access.redhat.com/documentatio
