Hector Martin

Say what you want about Telegram, but it has one of the best data export/backup features I've ever seen.

Fully client-driven, with real-time progress display (and even the ability to manually skip large file downloads on-the-fly).

The output is a bunch of plain HTML pages with just 200 lines of pure JS (no frameworks) for some minor interactivity features. It loads instantly, looks roughly like the Telegram client itself, and is easy to browse and search.

The JSON is one big blob with all the same data in a trivial format. The text encoding is interesting: Telegram supports rich text, but instead of in-line HTML-style markup, the JSON encodes it as objects representing the different spans of text and their formatting. Very clean.
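
For illustration, a formatted message in the export looks roughly like this (a hypothetical sketch of the span encoding from memory, not the exact schema):

{
  "text": [
    "Check out ",
    {"type": "bold", "text": "Asahi Linux"},
    " for more."
  ]
}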

Hector Martin

I'm all for Signal and E2EE and distributed systems and all that, but... Telegram is, by far, the least-bullshit most-fun messenger I've ever used. Everything just seems to work, it's lean, has native open source client apps, a big pile of features that are cohesively integrated and work, API/bot support, useful stuff like automatic translation (premium feature, but that's understandable since translation APIs aren't free), etc.

Other platforms would do well to learn from it.

Hector Martin

I've just been told that Apple are transitioning to cleartext iBoot images. We already knew there wasn't anything naughty in iBoot (decryption keys had been published for some systems/versions, plus it's tiny anyway and doesn't have space for networking stacks or anything like that) but this means that, going forward, the entire AP (main CPU) boot chain for Apple Silicon machines is cleartext, as well as SMC and other aux firmware that was inside iBoot for practical reasons.

The only remaining encrypted component is SEPOS, but it's optional and we don't even load it yet for Asahi Linux. All other system firmware other than iBoot and the embedded SMC/PMU blobs was already plaintext.

That means that there is no place left for evil backdoors to hide in the set of mutable Apple Silicon firmware. All updates Apple publishes going forward can be audited for any weirdness. πŸ₯³

(In practice this doesn't really change much for the already-excellent privacy posture of Apple Silicon systems running Asahi, which have always been way ahead of anything x86 since there's no Intel ME or AMD PSP equivalent full-system-access backdoor capable CPU, but it helps dispel some remaining paranoid hypotheticals about what Apple could potentially do, even if already very unlikely.)

Sobex

@marcan I guess we owe some drinks to whoever at Apple can claim to have been involved in that decision \o/
(I'll volunteer to pay for a round of drinks in France)

Hector Martin

Facts about hardware are not copyrightable.

People tend to ascribe magical properties to copyright, as if any kind of information whatsoever is copyrightable. That's not how it works.

Copyright is intended to protect creative works. Hardware devices are not considered creative works, they are functional. They are protected by patent rights, not copyright – and patent rights only protect the ability to reproduce the device, not describe it.

This means that PCB layouts are not copyrightable. By extension, nor are circuit netlists (i.e. the "information" within a circuit schematic). (Yes, this has interesting implications for open source hardware! You can attach licenses all you want to OSHW, but they only protect the actual source design files - anyone can just copy the functional design manually and manufacture copies and ignore the license, as long as they change the name to not run into trademark issues/etc., any firmware notwithstanding)

IC masks are protected in the US under a law written specifically for them (the Semiconductor Chip Protection Act of 1984); they weren't protected before that. By extension, nothing else about the chip design, other than possibly firmware, is copyrightable.

If you go and make an x86 clone or an unlicensed ARM core, Intel and ARM won't go after you for copyright violation. They will go after you for patent infringement, because the ISAs are patented. Talking about the architectures and writing code for them and any other research is perfectly fine. The only thing you can't do is reimplement them.

This is why projects like Asahi Linux can exist. If somehow just knowing how hardware works were a potential copyright violation, none of this would be possible.

What this means is: it is entirely legitimate to inspect things like vendor tools and software to learn things about the hardware, and then transfer that knowledge over to FOSS. You may run into license/EULA issues depending on what you do with the source data specifically (think: "no reverse engineering" type provisions), but as far as the knowledge contained within is concerned, that is not copyrightable, and the manufacturer has no copyright claim over the resulting FOSS.

This includes copying register names. I have an actual lawyer's opinion on that (via @bunnie). I tend to rewrite vendor register names more often than not anyway because often they are terrible, but I'm not legally required to.

The reason why we don't just go and throw vendor drivers into Ghidra and decompile all day, besides the EULA implications for the person doing it, is that the code is copyrightable and it can become a legal liability if you end up writing code that drives the hardware the same way, including in aspects that are deemed creative and copyrightable. This is why we have things like the clean-room approach and why we prefer things like hardware access tracing over decompilation.

But stuff like register names and pure facts about the hardware like that? Totally fair game.

Fun fact: Vendor documentation, like the ARM Architecture Reference Manual, has no copyright release for this stuff in the license. If register names were copyrightable, then anyone who has ever read ARM docs and copied and pasted a reg name into their code would be infringing copyright. They aren't, because this stuff isn't copyrightable.

Exa :calim:

@marcan I wonder how this stands in Europe and other countries. In France we have different rights under authors' rights law than in the US, which I'd say are more protective.

Hector Martin

Ah yes, let's ship a kernel driver that parses update files that are pushed globally simultaneously to millions of users without progressive staging, and let's write it in a memory unsafe language so it crashes if an update is malformed, and let's have no automated boot recovery mechanism to disable things after a few failed boots. What could possibly go wrong?
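
For the record, the missing boot recovery isn't rocket science. A minimal sketch in C of the idea, with made-up paths and threshold (not anyone's actual implementation):

#include <stdio.h>

/* Hypothetical failed-boot counter: bump it early in boot, clear it once
 * the system comes up. After too many consecutive failures, skip loading
 * the risky driver instead of crash-looping. */
#define COUNTER_PATH "/var/lib/boot_fail_count"  /* made-up path */
#define MAX_FAILED_BOOTS 3

static int read_count(void) {
    FILE *f = fopen(COUNTER_PATH, "r");
    int n = 0;
    if (f) { if (fscanf(f, "%d", &n) != 1) n = 0; fclose(f); }
    return n;
}

static void write_count(int n) {
    FILE *f = fopen(COUNTER_PATH, "w");
    if (f) { fprintf(f, "%d\n", n); fclose(f); }
}

/* Call before loading the risky driver. */
int should_load_risky_driver(void) {
    int failed = read_count();
    write_count(failed + 1);  /* assume failure until boot completes */
    return failed < MAX_FAILED_BOOTS;
}

/* Call once boot is known-good. */
void mark_boot_successful(void) { write_count(0); }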

πŸ€¦β€β™‚οΈ

Trillion Byter

@marcan brb. Need to add a few topics in my personal Jira.

Christian Berger DECT 2763

@marcan Well it's a nice stunt. I mean nobody would ever use "endpoint security" software on an important system. That would be ridiculous and a clear breach of, for example, the contract rules of Crowdstrike.

Jan β˜•πŸŽΌπŸŽΉβ˜οΈπŸ‹οΈβ€β™‚οΈ

@marcan rust in kernel wasn't available when this thing was written. And you just don't go rewrite stuff for the fun of it.

Not to defend them for whatever errors happened in the QA process, but hindsight is 20/20 here.

Hector Martin

TIL that some people are playing GitHub race games to make it seem like Linus Torvalds merged their GitHub PR.

Cute trick, but no, Linus doesn't actually ever merge anything via GitHub. Explanation here.

Stefan Sperling

@marcan Wow, this shouldn't be possible. Is it something GitHub is aware of and going to prevent?

Reminds me of another bug in their platform where one can make trees added to forked repositories appear in the origin repository because all forks use shared repository storage underneath: github.com/github/dmca/tree/41

kayenne

@marcan this is an amazing Stupid Programmer Trick

Peter Bindels

@marcan So the bug is Github pretending to be in charge of things it's not in charge of, by making the assumption "if a PR exists for commit hash X and X is in the mainline, then the PR must have been merged"?

Hector Martin

Apparently Apple just announced hardware-accelerated GIC support for Apple Silicon virtualization. Asahi Linux has had that feature for over two years now. Glad they caught up supporting features of their own hardware! 😁

Hector Martin

We really should just start calling non-conformant graphics API implementations (like Apple's OpenGL, or MoltenVK) "buggy".

If code doesn't pass tests we usually call that a bug and don't ship it until it passes them, right? Apple is exceptional in that they ship OpenGL drivers that don't pass the tests and are okay with that. They shouldn't really get credit for supporting "OpenGL 4.1" when they literally fail the OpenGL 4.1 tests (they even fail the OpenGL ES 2.0 tests!).*

* Nobody actually knows how to compile/run the full test suite on macOS OpenGL (because Apple would be the only entity who would care about that, and they don't), but we do know for a fact they have bugs the tests test for, so we can confidently say they'd fail the tests if someone were to actually port/run them.

Kai Ninomiya

@marcan My team (Chrome WebGL/WebGPU) is very conformance-focused by nature and we have always called them bugs. Occasionally the bugs are in the spec or test instead of the driver, but if a test is failing, there's a bug somewhere.

Longhorn

@marcan Vulkan Portability was a deliberate tradeoff there.

Karl

@marcan I would not be surprised if the engineers working at Apple on OpenGL drivers agreed with you.

Claiming that something is supported is often a call made by marketing/salespeople as soon as engineering reports that they cobbled something together that kinda works.

Hector Martin

Just had another argument about curl|sh, so I'm going to say this top level for future reference.

The way we use curl|sh is as secure as, or more secure than, traditional distro distribution mechanisms (e.g. ISO images with hashes or PGP signatures) for 99.9% of users. If you think otherwise, you don't understand the threat models involved, and you're wrong.

If you are in the 0.1% that actually cross-references PGP keys against multiple sources, exchanges keys in person, and that kind of thing, then you could indeed actually benefit from a more secure distribution mechanism. You're also, unfortunately, not a significant enough fraction of our user base for us to spend time catering to your increased security demands, time that we could instead spend improving security for everyone (such as by working on SEP support for hardware-backed crypto operations, or figuring out how to actually offer FDE reasonably in our installer).

And if you're not manually verifying fingerprints with friends, but curl|sh still gives you the ick even though you have no solid arguments against it (you don't, trust me, none of you do, I've had this argument too many times already), that's a you problem.

Samantha
@marcan just wait until they find out about how brew is installed on macOS
mppf

@marcan

The worst-case scenario isn't that your web server is hacked and somebody starts installing malware instead of your tool. The worst-case scenario is that this happens and the source of malware goes undetected for years, because the web server gives different scripts to different people.

Additionally, curl|sh will seriously hamper incident response people figuring out the source of malware, because it doesn't save the script that was run anywhere.

1/2

Hector Martin

Found the DMP disable chicken bit. It's HID11_EL1<30> (at least on M2).

So yeah, as I predicted, GoFetch is entirely patchable. I'll write up a patch for Linux to hook it up as a CPU security bug workaround.

(HID4_EL1<4> also works, but we have a name for that and it looks like a big hammer: HID4_FORCE_CPU_OLDEST_IN_ORDER)

Code here: github.com/AsahiLinux/m1n1/blo (Thanks to @dkohlbre for the userspace C version this is based off of!)

One interesting finding is that the DMP is already disabled in EL2 (and presumably EL1); it only works in EL0. So it looks like the CPU designers already had some idea that it is a security liability, and chose to hard-disable it in kernel mode. This means kernel-mode crypto on Linux is already intrinsically safe.
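
The core of the chicken-bit change is just a read-modify-write of an implementation-defined register. As a sketch (the mrs/msr helpers and the HID11 sysreg encoding macro are placeholders here; see the linked m1n1 code for the real definitions):

/* Sketch: set the DMP-disable chicken bit, HID11_EL1<30> (M2 only).
 * SYS_IMP_APL_HID11 and the mrs/msr helpers are placeholders; see the
 * linked m1n1 commit for the real definitions. */
#define HID11_DISABLE_DMP (1UL << 30)

void disable_dmp(void)
{
    u64 hid11 = mrs(SYS_IMP_APL_HID11);
    msr(SYS_IMP_APL_HID11, hid11 | HID11_DISABLE_DMP);
}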

Christian Horn

@marcan I hope it gets implemented so mitigations=off can disable it, this will allow performance/energy consumption comparisons.

meta

@marcan @dkohlbre i would've thought that poking all the HID bits is a good way to make a brick!

Glyph

@marcan @dkohlbre @filippo does this reach back to M1 as well? I know M3 already had something even when GoFetch was first announced

Hector Martin

This is disappointing.

theverge.com/2024/3/4/24090357

On one hand, the Yuzu folks had it coming with all the thinly veiled promotion of game piracy (if you can even call it thinly veiled). There's a reason I banned everyone even remotely talking about that back when I was part of the Wii homebrew community.

On the other, the proposed settlement asserts that the *emulator* itself is a DMCA violation (not just the conduct of those developing it), and that is an absolutely ridiculous idea. I *believe* this doesn't actually set any legal precedent (since it wasn't litigated, but IANAL), so other emulators should still be safe... but still, really not a good look.

I'm so glad I'm no longer in the game hacking world and having to deal with this kind of stuff...

Hector Martin

I just reminded myself of the extra fun shim shenanigans going on in Asahi Fedora. I've previously described the Asahi Linux boot chain as:

SecureROM -> iBoot1 -> iBoot2 -> m1n1 stage 1 -> m1n1 stage 2 -> u-boot -> GRUB -> Linux

Which is already amusing enough, but Fedora throws in another twist. To support other platforms with "interesting" secure boot requirements (cough Microsoft-controlled certificates cough), Fedora ships with shim to handle handoff to GRUB and allow users to control their UEFI secure boot keys. But it's even more fun than that, because our installs don't support UEFI variables and instead have a 1:1 mapping between EFI system partitions and OSes, relying on the default removable media boot path.

On an installed Asahi Fedora system, you get this:

EFI
EFI/BOOT
EFI/BOOT/BOOTAA64.EFI
EFI/BOOT/fbaa64.efi
EFI/BOOT/mmaa64.efi
EFI/fedora
EFI/fedora/BOOTAA64.CSV
EFI/fedora/grub.cfg
EFI/fedora/grubaa64.efi
EFI/fedora/mmaa64.efi
EFI/fedora/shim.efi
EFI/fedora/shimaa64.efi
EFI/fedora/gcdaa64.efi

Here, U-Boot runs BOOT/BOOTAA64.EFI, which is a copy of shim. shim is designed to boot grubaa64.efi from the same directory. Since it can't find it there (it's in EFI/fedora), it loads fbaa64.efi instead. That's a fallback app that goes off searching for bootable OSes, looking for CSV files in every subfolder of EFI. It finds fedora/BOOTAA64.CSV, which is a UTF-16 CSV file that points it to shimaa64.efi. That is, itself, another identical copy of shim, but this time it's booted from the right fedora directory. The fallback app tries to configure this as a boot entry in the EFI variables, but since we don't support those, that fails and it continues. Then this second fedora/shimaa64.efi runs, finally finds fedora/grubaa64.efi (the GRUB core image), and boot continues.
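
For reference, BOOTAA64.CSV is tiny. Decoded from UTF-16, a typical Fedora one looks something like this (quoted from memory; your install may differ slightly):

shimaa64.efi,Fedora,,This is the boot entry for Fedora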

So, on Fedora Asahi, the boot chain is actually:

SecureROM -> iBoot1 -> iBoot2 -> m1n1 stage 1 -> m1n1 stage 2 -> u-boot -> shim (first copy) -> fallback -> shim (second copy) -> GRUB -> Linux

Every. Time. (Thankfully, these extra steps go fast so it doesn't materially affect boot time; the major bootloader time contributors are kernel/initramfs load time and U-Boot USB init time right now).

The shim stuff is, of course, completely useless for Asahi Linux, since there are no built-in platform keys or any of that UEFI secure boot nonsense. Once we do support secure boot, the distro update handoff will be at the m1n1 stage 1/2 boundary, well before any of this stuff; the UEFI layer might as well use a fixed distro key/cert that is known to always work, and the m1n1 stage 2 signing key (which is not going to use UEFI secure boot, it will be its own simpler thing) would be set at initial install time to whatever the distro needs. But since this mechanism exists to support other platforms, we didn't want to diverge and attempt to "clean this up" further, since that just sets us up for more weird breakage in the future. Blame Microsoft for this extra mess...

But it's even sillier, because this whole UEFI secure boot mechanism isn't supported on Fedora aarch64 at all (the shim isn't actually signed, and GRUB/fallback are signed with a test certificate). So the whole mechanism is actually completely useless on all aarch64 platforms; it only gets built and signed properly on x86, and aarch64 just inherits it πŸ™ƒ

In the early Fedora Asahi days I noticed our Kiwi build stuff was dumping the GRUB core image into EFI/BOOT too, which bypassed this fallback mechanism... but also meant that the GRUB core image didn't get updated when GRUB got updated, which is a ticking time bomb. Thankfully we noticed that and got rid of it. So now there's a silly boot chain, but it should be safe for normal distro updates.

(As for the other stuff? mmaa64.efi is the MokManager, which is useless to us; shim.efi is just a copy of shim for some reason; and gcdaa64.efi is just a copy of GRUB for some reason. No idea why those last two exist.)

elly
@marcan I wonder: why so many layers though?
I would expect that you can do BROM -> iB1 -> iB2 -> ST3 from storage (U-Boot).

We managed to do the same thing with ARM64 Chromebooks (although porting drivers for different platforms from Linux is a PITA). It looks something like that:

BootROM -> BL2 (Coreboot) -> BL31 (TF-A) -> Coreboot (drops execution level to EL2) -> DepthCharge -> ELF from storage (U-Boot/LinuxBoot).

I also wonder if you could get NVRAM working. U-Boot supports it in OP-TEE, but no idea if/how apple implemented it.
Demi Marie Obenour

@marcan Would it be possible to directly boot a kernel from m1n1, or even include the kernel image in m1n1? If Qubes OS ever gets Apple silicon support, I want to keep the secure boot chain as short as possible. Ideally, it would be no longer than Apple’s chain.

Neal Gompa (ニール・ゴンパ) :fedora:

@marcan `gcdaa64.efi` is a copy of GRUB with cdboot drivers built in. I'm not actually sure why they're split, maybe @vathpela knows?

Hector Martin

Today I learned that YouTube is deliberately crippling Firefox on Asahi Linux. It will give you lowered video resolutions. If you just replace "aarch64" with "x86_64" in the UA, suddenly you get 4K and everything.

They literally have a test for "is ARM", and if so, they consider your system has garbage performance and cripple the available formats/codecs. I checked the code.

Logic: Quality 1080 by default. If your machine has 2 or fewer cores, quality 480. If anything ARM, quality 240. Yes, Google thinks all ARM machines are 5 times worse than Intel machines, even if you have 20 cores or something.
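
Paraphrased into C (function and variable names made up by me, but this is the gist of the minified check):

/* YouTube's default-quality pick, as described above. */
int default_quality(int cpu_cores, int is_arm_ua) {
    if (is_arm_ua)        /* any ARM user agent: assumed garbage */
        return 240;
    if (cpu_cores <= 2)
        return 480;
    return 1080;
}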

Why does this not affect Chromium? Because Chromium on aarch64 pretends to be x86_64:

Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36

πŸ€¦β€β™‚οΈβ€‹πŸ€¦β€β™‚οΈβ€‹πŸ€¦β€β™‚οΈβ€‹πŸ€¦β€β™‚οΈβ€‹πŸ€¦β€β™‚οΈβ€‹

Welp, guess I'm shipping a user agent override for Firefox on Fedora to pretend to be x86.

Eckes :mastodon:

@marcan I guess that’s not about the CPU but the quality of drivers for hardware decoding

Charles U. Farley

@marcan Google disabled U2F based on user agent for a long time as well.

Hector Martin

KDE and GNOME are both supported DEs for Fedora Asahi Remix, but there's still one issue that makes it impossible for me to honestly recommend GNOME to anyone trying out Linux on these platforms for the first time: GNOME does not support fractional scaling out of the box, and it is actively broken with XWayland if you enable it by editing the configs directly.

I consider proper HiDPI support with fractional scaling a fundamental requirement for Apple machines. It's a basic macOS feature, and not having it on Linux is just silly. It doesn't even need to be perfect fractional scaling support (integer scaling + display output rescaling is fine, it's what macOS does AFAIK)... but it needs to be there.

In GNOME you can enable it via the command line (sigh...), but if you do, XWayland apps just become a blurry mess since they render at 100%. This includes apps like Thunderbird out of the box.

KDE does this right, within the constraints of the legacy X11 protocol: the X11 scale is set to the largest of your monitor scales, so X11 apps look crisp on at least one monitor (even crisper than on Wayland at non-integer scales, at least until the native Wayland fractional scaling stuff catches up) and only minimally soft on the others (typical downscaling softness, same thing macOS does and same thing you get on Wayland for most apps today).

KDE had that problem way back when we first shipped the Arch alpha, which is why that was using native Xorg. They fixed it soon thereafter, so now KDE Wayland works as intended. But GNOME still hasn't caught up, and AIUI they don't even plan to do what KDE did...

For folks who are happy with GNOME, of course, we do consider it a supported desktop environment and will debug issues that crop up related to our platform drivers/etc. But I just... can't in good conscience tell people to try GNOME first as a first-time experience on Apple Silicon, not when the out-of-the-box experience is just "200% or 100%, nothing in between, unless you hack configs manually and then a bunch of apps become horribly blurry".

* Note: By fractional scaling, I mean effective fractional scaling, not native fractional scaling. Native fractional scaling is brand new in Wayland and stuff is still catching up, but even macOS doesn't do that either. The important part is that things are the right size (and you have more than integer sizes available), and that nothing is ever upscaled from a lower pixel density, which is what you get with KDE today.

Dmitry Borodaenko

@marcan I've been using 150% scaling in GNOME for about 3 years now, what did I miss?

Fabio Valentini

@marcan yeah ... the fractional scaling implementation in GNOME is really sad and broken 😞 we wanted to enable it by default for Fedora 39 but it turned out to break rendering of Xwayland applications, *always* upscaling from 1x, even on "integer" factors ... (side effect: fullscreen applications like games only "see" a scaled framebuffer, so they can't even render at full resolution if they want to)

the "let xwayland windows scale themselves" option in KDE is not perfect, but much better 😐

Sonny

@marcan the reason GNOME doesn't enable fractional scaling by default is that none of the solutions for XWayland were considered satisfying until now. There is more to it, but that's one of the main issues.

We are looking into using rootful Xwayland to solve the problem.

See "Are we done yet?" in ofourdan.blogspot.com/2023/11/ but I recommend reading both parts.

Hector Martin

Idle thought spun off from the prior discussion: If a bug reporter doesn't just report the bug, but debugs it and tells you exactly what went wrong and why the code is broken, then you should credit them with Co-developed-by, not Reported-by.

Debugging is just as much a part of development as actually writing the code is, and deserves equal credit. A strict reading of the kernel contribution guidelines doesn't imply this would be incorrect usage either. The only catch is the docs say C-d-b needs to be followed by a signoff, which would be unnecessary in this case (as that is about *copyright ownership/licensing* which only applies to writing the actual code), but an extra signoff never hurt anybody so shrug.
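
Concretely, the trailer block on such a patch would look something like this (names obviously made up):

Co-developed-by: Jane Debugger <jane@example.com>
Signed-off-by: Jane Debugger <jane@example.com>
Signed-off-by: Patch Author <author@example.com>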

joey

@marcan time for a new tag: Debugged-by:

Ayke van Laethem

@marcan personally if I write a patch like this I will at least credit the bug reporter with finding a fix. As you said, they did most of the work and deserve at least equal credit.

Man2Dev :idle:

@marcan I agree, and giving some credit would create an incentive to write good bug reports

Hector Martin

Calling Sonoma users: We have had a few reports of Sonoma upgrades causing issues with Asahi (installed before or after the upgrade). These issues may be related to reports of Sonoma corrupting System recoveryOS, which is apparently a known issue in general (happens in rare cases). This is all almost certainly caused by one or more Apple bugs.

If you are running Sonoma (with or without Asahi), would you mind helping us by testing your System RecoveryOS? To do so, fully power down your machine and wait a few seconds, and then quickly tap and hold the power button (quickly press, release, press and hold; this is a double press, not the usual single press and hold). If you get the boot picker as usual, then your System recoveryOS is OK. If you get a "please recover me" exclamation mark screen, your System recoveryOS is broken.

If you have the bug, please let me know, as I would like to investigate what is going wrong and whether we can detect it somehow (or maybe even write a fixer tool). Ideally I'd want to either get temporary SSH access to macOS or dumps of files in certain partitions.

The reason why this is important is that there is a possibly related issue where Sonoma boot firmware won't boot our 13.5 Asahi macOS stubs, including recovery mode. That means that you can get stuck not being able to use the boot picker, and if your System recoveryOS is also broken, then there is no way to recover locally (you need a DFU revive), which sucks. I want to at the very least detect this bad state and refuse installation if the installer detects your recoveryOS is borked.

Your machine should go back to normal after a forced shutdown and reboot from the exclamation mark screen, as long as your regular boot OS and paired recoveryOS are fine.

SΓ©bastien de Graffenried

@marcan when I do the double press, I get β€œcontinue holding for startup options”, then β€œlaunching startup options” then a black screen. The computer seems to be on, I have to hold the power button to shut it down before I can start it up again. Could that be it?

Hayato Fujii

@marcan Is System recoveryOS the one which doesn't allow changes to Startup Security?

Hanchin Hsieh

@marcan@social.treehouse.systems I'm a Sonoma user and my M1 Max recovery OS is fine after testing.

Hector Martin

For Asahi, I contribute code across the entire stack: from bootloaders to the kernels to libraries to desktop environments to web browsers. I keep saying the Linux kernel development process is horrible, so what about the rest?

Let's talk about a project that does things right. KDE is a project of comparable scale to the Linux kernel. Here is its patch submission process:

1. I write the patch
2. I open the merge request *
3. The maintainer directly suggests changes using the interface, fully integrated with gitlab.
4. I click a button to accept the changes. They get turned into Git commits automatically. I don't even have to manually pull or apply patches.
5. I ask about Git history preferences, get told to squash them.
6. git pull, git rebase -i origin/main, clickety click, git push -f (I wouldn't be surprised if there's a way to do this in the UI too and I just don't know)
7. They merge my MR.

The whole thing took like 5 minutes of mental energy total, once the initial patch was ready.

Seriously, look at some of the timestamps. It wasn't even 15 minutes of wall clock time from the first suggestion to final commit. Less than an hour from opening the MR. And then it got merged the next day.

And this is why I love contributing to KDE and why they're our flagship desktop environment. :)

* This of course does not consider one-time setup costs. For KDE, that was opening up an Invent account, which takes 5 minutes, and I didn't have to learn anything because I already know GitLab, and anyone familiar with any forge will quickly find their way around it anyway. The kernel, of course, requires you to learn arcane processes, sign up for one or more mailing lists, set up email filters, discover tools like b4 to make your life less miserable, manually configure everything, set up your email client with manual config overrides to make it handle formatting properly, etc., and none of that is useful for any project other than the small handful that insist on following the kernel model.

Pavol BabinčÑk

@marcan add a comment "/rebase" on your MR and GitLab will take care of the rest.

Just a tip, if you would like to save a couple of commands in the shell. πŸ™‚

Haelwenn /элвэн/ :triskell:
@marcan
Meanwhile FreeDesktop and the like:
- Get stuck on their Gitlab because they locked it down so much to counter spam that you'd need to ask each project for access (of course with the "Request Access" button of Gitlab not being appropriate)
- Send the patch to their mailing-list
- A few days later, have to explain that their setup is near impossible to use, and also send a URL to the commit in case they don't do email

Bare mailing-list sucks (you can have email with CIs btw) but having to interact with the various Gitlab setups is horribly annoying.
And if there's one thing email has perfected over the ages it's spam mitigation for everyone, while on Gitlab it's a proprietary Enterprise addon.
Drew DeVault

@marcan
>KDE is a project of comparable scale to the Linux kernel.

lol tho

Hector Martin

We've been using Matrix a lot for Fedora Asahi and I really want to like it but just... sigh. It's so clunky and broken in random ways.

Undiagnosable encryption failures/desyncs, notifications not arriving, mismatched feature support between clients, ...

The flagship Element client is a bloatfest, but third party clients always seem to work worse in some way, and even Element iOS is weirdly broken vs. the desktop/web version.

It's really sad that Discord basically does everything better.

Thib

Hej @marcan, Matrix (and most particularly Element) has accumulated tech debt, but it's well on the way to solving it :)

The Matrix Foundation is going full steam on the matrix-rust-sdk to have one solid implementation that gives a consistent (good) experience across clients.

It's too early to see the results of this work, but we're well aware of the problems and doing our best to address them sustainably.

Centralised systems that sell your data are easier to maintain, but we keep fighting!

n0toose

@marcan tbh, I now seem to understand why Moxie was "uncomfortable with third party clients" on Signal when called to take an official stand, without taking action against e.g. soft forks like molly.im

Filip 🌱 ❄️ πŸ¦€

@marcan @GrapheneOS also echoed those feelings. IIRC @matrix answered with some assurances that they are working on addressing many of their pain points

Hector Martin

So apparently dang and the HN crowd are so upset that I wrote some messages for HN visitors to our website that they've now banned my home IP address πŸ™ƒ

Yes, seriously. I get 403s from any device on my home connection, and yet it works fine on 4G.

Just when you thought they couldn't get pettier. And no, I haven't been doing any scraping/scripting/anything sus.

[HUGS] getimiskon :OwOid: :blobcatgooglywtf: :verified_neko:
@marcan having to deal with them seems like it's a really huge pain :blobcatsweats:
Drew DeVault

@marcan I also ran into something like this over the weekend, though tbf I *was* doing research to dig up dirt on HN

Hector Martin

OH MY FUCKING GOD.

Pictured: Apple's M2 MacBook Air 13" speaker response (measured with a mic), and the response you get when you zero out every 128th sample of a sine sweep.

They have a stupid off-by-one bug in the middle of their bass enhancer AND NOBODY NOTICED NOR FIXED IT IN OVER A YEAR.

So instead of this (for a 128-sample block size):

for (int sample = 0; sample <= 127; sample++)
// process sample
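// (all 128 samples, 0 through 127, get processed)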

They did this:

for (int sample = 0; sample < 127; sample++)
// process sample
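// sample 127 of each 128-sample block is never processed,
// so one sample in every 128 is left stale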

Legendary audio engineering there Apple.

We can now very confidently say the audio quality of Asahi Linux will be better than Apple's. Because we don't have blatant, in-your-face off-by-one bugs in our DSP, and we actually check the output to make sure it's good πŸ˜‚

FFS, and people praise them for audio quality. I get it, the bar is so low it's buried underground for just about every other laptop, but come on...

Edit: replaced gif with video because Mastodon is choking on the animation duration...

Edit 2: Update below; I can repro this across a large range of versions on this machine but none of the other models I've tried so far. It is definitely a bug, very very obvious to the ear, and seems unique to this machine model.

Edit 3: Still there in Sonoma, this is a current bug.

Bill Zaumen

@marcan Only a year before anyone noticed? I once reported a bug in YACC - an off-by-one error resulting in a stack overflow - and nobody had noticed it for a good 10 years! The bug was in the generated code, apparently part of a template, and I only found it because I was testing a parser by throwing unusual cases at it.

argv minus one

@marcan Another fine reason to Rewrite It In #Rust β„’: iterators prevent a lot of off-by-one bugs.

Hector Martin

Can anyone identify this chip? It's supposed to be a 155 Mbit/s fiber optic transceiver, but I'm not sure I can read the logo (GZJL?) and "7901" isn't finding anything...

Myoukochou

@marcan ZJL 珠桷杰理 possibly??

Growlph Ibex

@marcan Wild, I can't find anything for any variation of that manufacturer logo...
