Can anyone identify this chip? It's supposed to be a 155mbit fiber optic transceiver, but I'm not sure I can read the logo (GZJL?) and "7901" isn't finding anything...
Speaking of Unicode identifiers being a stupid idea: I have not seen a single Unicode/punycode URL in my almost 10 years in Japan, in real life.
Not. Once. Not in the hostname portion, not in the path portion. Never.
Nobody wants that nonsense here. Seriously. It's a silly novelty and only creates practical problems (and security issues).
You know how Japanese ads and billboards direct people to complex pages/URLs? They give you a search term to plug into Google.
(To clarify, you do get Unicode terms in path fields for things like wikis, but never as part of URLs people are expected to type out, and I've seriously never seen punycode domains.)
@marcan
Here in Spain we have free .es domains for city councils. I assume it's nearly a mandatory rule for obtaining certain public funds. As an example, my city has logroño.es.
@marcan I do see Unicode domains in search results. They tend to be single purpose websites geared toward SEO. I bet it's because Google prioritizes a result if the URL matches the search term exactly (e.g. https://wimax比較.com).
Interestingly [kanji].com is often used in logos, but the actual URL is ASCII. I see 価格.com the most, but there are plenty of examples. The Unicode domain in their logo doesn’t exist, not even to redirect. The actual URL is https://kakaku.com.
I'm just going to screenshot a bunch of choice comments to show off HN's legendary moderation, so I can point at this next time someone asks what's so bad about HN.
13 hours ago, flagged but still visible (not dead, so no difference).
Lol, it took 2 weeks for geohot to go from grandiosely announcing he's going to make ML good on AMD hardware with his own code to giving up after running into some bugs.
Psst - if you want accessible endpoint ML, we're getting pretty close to releasing compute support on our Asahi GPU drivers thanks to @lina and @alyssa's work, and @eiln's work-in-progress Apple Neural Engine driver is already running popular ML models on Asahi Linux.
I'm going to be really blunt here: if you don't care about trans people, if you even remotely think there's the slightest hint of merit to the blatant genocidal actions that are going on in the US right now, you can fuck right off from my projects, spaces, and communities.
I don't give a fuck about "tech shouldn't be political" garbage takes. Tech is made by people and right-wing legislators in the US are trying to *kill* my colleagues right now. There is no tech without people.
@marcan They don't deserve to be (called) legislators — they're evil opportunists committed to scoring points by targeting & stigmatizing minorities, & inciting antipathy against them.
Ironically, the so-called "GOP" is itself a minority within the U.S., pretending to be a majority in some places by committing organized fraudulent voter suppression & gerrymandering & other methods of electoral fraud.
Yet another person on Reddit surprised that Asahi Linux compiles stuff way faster than macOS.
"But macOS is so optimized for the hardware!" they all say... except Linux is already way more optimized in general than macOS is, for many workloads!
$ time tar xf linux-6.3.3.tar
macOS on APFS: 6.8 seconds
Linux on ext4: 1.0 seconds
Both on an M1 MacBook Air 13". That's how much faster Linux is at dealing with files than macOS.
The hardware drivers don't matter when you're dealing with pure CPU workloads and an NVMe SSD. We already have cpufreq and share the Linux NVMe core, so there's nothing left to optimize there that's specific to this hardware. The only thing missing is deep CPU idle, which will unlock boost clocks, but only for single-core workloads (multi-core compiling is already at its max).
@marcan this reminds me how #Linux was also the first #OS to run on #Itanium and how #GCC is even the best compiler for that architecture - even better than #Intel's own!
But yeah, the only thing that would make @AsahiLinux even faster on #AppleSilicon would be doing the compiling entirely in RAM wherever possible, leveraging 10x-1000x more IOPS and lower latency.
@marcan
ok but you're not doing (almost) any computation here; tar doesn't even compress anything, you're just reading and writing a set of files to the drive. So your point is that ext4 is faster than APFS, which might be true - APFS is not known for speed but for reliability on SSDs, encryption, snapshot support, etc.
I personally like both mac and linux (and use windows at work) so I'm happy either way #crossPlatformHappiness
A bit of (simplified) X history and how we got here.
Back in the 90s and 2000s, X was running display drivers directly in userspace. That was a terrible idea, and made interop between X and the TTY layer a nightmare. It also meant you needed to write X drivers for everything. And it meant X had to run as root. And that if X crashed it had a high chance of making your whole machine unusable.
Then along came KMS, which moved modesetting into the kernel along with a common API that obsoleted the need for GPU-specific drivers just to display stuff. But X kept on using GPU-specific drivers. Why? Because X relies on 2D acceleration, a concept that doesn't even exist any more in modern hardware, so it still needed GPU-specific drivers to implement that.
The X developers of course realized that modern hardware couldn't do 2D any more, so along came Glamor, which implements X's three decades of 2D acceleration APIs on top of OpenGL. Now you could run X on any modern GPU with 3D drivers.
And so finally we could run X without any GPU-specific drivers, but since X still wants there to be "a driver", along came xf86-video-modesetting, which was supposed to be the future. It was intended to work on any modern GPU with Mesa/KMS drivers.
That was in 2015. And here's the problem: X was already dying by then. Modesetting sucked. Intel deprecated their GPU-specific DDX driver and it started bitrotting, but modesetting couldn't even handle tear-free output until earlier this year (2023, 8 whole years later). Just ask any Intel user of the Ivy Bridge/Haswell era what a mess it all is. Meanwhile Nvidia and AMD kept maintaining their respective DDX drivers and largely insulating users from the slow death of core X, so people thought this was a platform/vendor thing, even though X had what was supposed to be a platform-neutral solution that just wasn't up to par.
And so when other platforms like ARM systems came around, we got stuck with modesetting. Nobody wants to write an X DDX. Nobody even knows how outside of people who have done it in the past, and those people are burned out. So X will *always* be stuck being an inferior experience if you're not AMD or Nvidia, because the core common code that's supposed to handle it all just doesn't cut it.
On top of that, ARM platforms have to deal with separate display and render devices, which is something modesetting can't handle automatically. So now we need platform-specific X config files to make it work.
And then there's us. When Apple designed the M1, they decided to put a coprocessor CPU in the display controller. And instead of running the display driver in macOS, they moved most of it to firmware. That means that from Linux's point of view, we're not running on bare metal, we're running on top of an abstraction intended for macOS' compositor. And that abstraction doesn't have stuff like vblank IRQs, or traditional cursor planes, and is quite opinionated about pixel formats and colorspaces. That all works well with modern Wayland compositors, which use KMS abstractions that are a pretty good match for this model (it's the future and every other platform is moving in this direction).
But X and its modesetting driver are stuck in the past. They try to do ridiculous things like drawing directly into the visible framebuffer instead of a back buffer, or expecting there to be a "vblank IRQ" even though you don't need one any more. There's a software fallback for when there is no hardware cursor plane, but the code is broken and it flickers. And so on. These are all problems, legacy nonsense, and bugs that are part of core X. They just happen to hurt smaller platforms more, and they particularly hurt us.
That's not even getting into fundamental issues with the core X protocol, like how it can't see the Fn key on Macs (Macs report Fn as a software key, and its evdev keycode is too large to fit in X's keycode table), or how it only has 8 modifiers, all of which are already in use today, and we need one more for Fn. Those things can't be properly fixed without breaking the X11 protocol and clients.
So no, X will never work properly on Asahi. Because it's buggy, it has been buggy for over 8 years, nobody has fixed it in that time, and certainly nobody is going to go fix it now. The attempt at having a vendor-neutral driver was too little too late, and by then momentum was already switching to Wayland. Had X development continued long enough to get modesetting up to par 8 years ago, the story with Asahi today would be different. But it didn't, and now here we are, and there is nothing left to be done.
So please, use Wayland on Asahi. You only get a pass on using X if you need accessibility features that aren't in Wayland yet.
@marcan Detail disagreement: "if X crashed it had a high chance of making your whole machine unusable".
No, that really isn't true. I was using X11 on BSD from 1988, on System V.4 from 1989, on UnixWare from (I think) 1992, and on Linux from 1993. I don't recall X crashing on the BSD or System V.4 machines at all. On Linux, X was pretty flaky in those days and crashed not infrequently; on UnixWare it did crash, but not often. I don't recall any X crash that locked up the machine.
@marcan I am all in for Wayland everywhere. I just miss the seamless display forwarding you had with X. Like just adding -X to your ssh and the graphics magically appear from the remote machine.
As far as I know, nothing like that exists yet in Wayland.
Soooo my previous toots ended up on Phoronix and here come the entitled users saying how dare you tell me to switch to Wayland.
Repeat after me: Xorg is dead. It is unmaintained. It is buggy and those bugs are not getting fixed. *THIS IS FROM ITS OWN DEVELOPERS*. The people previously working on Xorg are now working on Wayland. They are literally part of the same organization FFS.
If you want Xorg to keep working, fix it yourself. Oh, not interested? Nobody else is either. Guess what, if nobody works on it, it will bitrot into oblivion. Nobody has signed up to fix it. No amount of wishful thinking is going to change that. You can keep using it all you like, but unless YOU sign up to maintain it, it's going to die.
Want Xorg to survive? Take over maintenance. We're all waiting.
*crickets*
@marcan I’ve been using Wayland for a couple of years now with no Xorg, and I can honestly say it’s finally here: it’s solid enough for daily use, never going back.
It's all but unmaintained, broken in fundamental ways that cannot be fixed, unsuited to modern display hardware (like these machines), and we absolutely do not have the bandwidth to spend time on it.
We strive for a quality desktop on Apple Silicon machines, but we have to pick and choose our battles very carefully, because we can't single-handedly fix all the problems in the entire Linux desktop ecosystem. Yes, some Xorg things might work better on other platforms. That doesn't mean Xorg isn't broken, it means those platforms have spent years working around Xorg's failings. We don't have the time for that. Distributions and major desktop environments are already dropping Xorg support. It's pointless to try to support it well today on a new platform.
XWayland will continue to be supported for legacy client apps, and we do plan to spend time on optimizing the XWayland experience. But for anything that goes beyond "displaying windows" (compositors, IMEs, input management, desktop environments, etc.), please use native Wayland applications, since XWayland will never integrate properly for those things (by design).
Yes, not every random app and feature you use on Xorg will have a Wayland equivalent. Deal with it. The major players in desktop Linux have decided it's time to move on from Xorg, and if you want to go against the tide you're on your own.
We do expect Xorg to continue to function for the bare essentials (i.e. showing a working desktop), but that's it. We won't be working on any features or non-desktop-breaking bugs beyond that.
The only reason we shipped Xorg by default is that Wayland compositors were slower with software rendering. The reverse is true now that we have GPU drivers, and we will be switching all default-Xorg-KDE users to default-Wayland in an update (along with promoting the GPU drivers to the default builds) really soon. At that point Xorg will be relegated to SDDM, and once a native Wayland release of that finally happens, we won't be shipping any usage of the X server any more.
Please, please stop using Xorg with Asahi Linux.
@marcan slightly off-topic: I am trying to run a Wayland desktop (Ubuntu-22.04 LTS) in a Podman or Docker container, but this fails. Any pointer to helpful resources?
@marcan in case it helps: even Red Hat (in the RHEL 9 release notes) explicitly states that Xorg is deprecated and will be removed in a future major RHEL release.
Linux on the desktop has clearly settled on Wayland (and XWayland as a protocol bridge) as the way forward. AFAICT this has basically been the position for ~5 years: all the desktop support work has been going into Wayland.
“Native X11” is pretty much retrocomputing at this point :-)
Note to self: speakersafetyd ALSA mixer interlock design:
- Volume unlock is built on the mixer control exclusive lock feature (need to add an ALSA core callback to the kctl to implement this).
- DVC volumes are initially capped at a safe volume known to be safe for the speakers for all possible audio signals (set kernel-side).
- There is a special global watchdog/unlock control (takes an integer timestamp).
- speakersafetyd must first lock the watchdog control and write the current timestamp (CLOCK_MONOTONIC) to it.
- It can then lock the volume controls. This fails if the watchdog control is not locked or not up to date.
- When a volume control is locked by the same PID as the timestamp control and the timestamp is up to date, it removes its volume cap.
- The timestamp must be refreshed at least once per second while the playback PCM stream is active (== the feedback PCM stream is receiving samples). It is allowed to go stale while the PCM is inactive (volumes remain set, but the time since the last update only accumulates while the PCM runs; the PCM becoming active does not reset the timeout).
- If at any point the watchdog expires or the unlock control lock is dropped (indicating the daemon died), all volumes are again reset and capped to a safe level.
- If an individual volume control lock is dropped, that individual control goes back to safe mode (only).
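To make that sequence concrete, here is a rough, hypothetical userspace sketch of the daemon side (lock the watchdog, then keep the timestamp fresh) using plain alsa-lib control calls. The card string and the control name "Speaker Safety Watchdog" are invented for illustration, and the volume-control locking and PCM feedback handling are not shown; this is not the real speakersafetyd code.

/* Hypothetical sketch of the watchdog half of the interlock described above. */
#include <alsa/asoundlib.h>
#include <time.h>
#include <unistd.h>

static long now_monotonic(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec;
}

int main(void)
{
    snd_ctl_t *ctl;
    snd_ctl_elem_id_t *id;
    snd_ctl_elem_value_t *val;

    if (snd_ctl_open(&ctl, "hw:0", 0) < 0)   /* card name is an assumption */
        return 1;

    snd_ctl_elem_id_alloca(&id);
    snd_ctl_elem_id_set_interface(id, SND_CTL_ELEM_IFACE_MIXER);
    snd_ctl_elem_id_set_name(id, "Speaker Safety Watchdog"); /* assumed name */

    /* Step 1: take the exclusive lock on the watchdog control first.
       Locking the individual volume controls (not shown) only succeeds
       while this lock is held and the timestamp is fresh. */
    if (snd_ctl_elem_lock(ctl, id) < 0)
        return 1;

    snd_ctl_elem_value_alloca(&val);
    snd_ctl_elem_value_set_id(val, id);

    /* Steps 2-3: write the current CLOCK_MONOTONIC timestamp and keep
       refreshing it at least once per second while the feedback PCM is
       delivering samples. If this process dies (dropping the lock) or the
       timestamp goes stale, the kernel re-caps all volumes to the safe level. */
    for (;;) {
        snd_ctl_elem_value_set_integer(val, 0, now_monotonic());
        if (snd_ctl_elem_write(ctl, val) < 0)
            break;
        sleep(1);
    }

    snd_ctl_close(ctl);
    return 0;
}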
There's this particular brand of online abuse I see, where *seemingly* unrelated people all have one very specific, very unique, and very obvious tell that they all are either the same person or from the same small group of people.
Honestly, I have to wonder just how much abuse is actually concentrated in a few small groups of terminally online people who make it their pastime to create random sockpuppets and continuously attack the same person. The Fediverse has also made this pretty clear: I see many abusers of a given slant all end up on the same bad instances (or random one-off instances for the techie type who set up their own).
If you are or have been the target of this brand of abuse, with similar takes coming from a bunch of seemingly unrelated accounts, remember that. There's a good chance it's all the same 2 people, or the same small group chat of haters, whose heads you live in rent-free to the point they spend their time creating alts instead of doing something productive with their lives.
@marcan
Huh. Maybe ChatGPT could be useful after all. If we just had a sock-puppet-detecting bot that engaged them with an LLM, it could keep them too busy fighting shadows to hurt real people.
...I'm the eighth person to come up with this idea today, aren't I?
@marcan yeah. I've had a few of those. Different people write with different styles and reiterate the same ideas and, for lack of a better term, aura.
For example, I tend to be very very verbose. I also have a hint of professionalish grammar. I'm trying to get rid of both of these traits for social media, but it's hard since I've been doing it for so long
I think I never told this story here... how I fixed a server with a very precisely placed piece of tape.
So at Euskal Encounter we got shiny new servers a few years back, and they worked great except one of them developed a peculiar problem. It would not shut down.
When told to shut down, it would either hang, or boot back up, or power back up but then fail to boot. This was a problem, because we normally relied on servers shutting down and staying down during our shutdown procedure. Having to have someone babysitting the machine to yank the power is not great. Plus it meant that if we ever got into that state, we couldn't fix it remotely (and some events are run remotely). Once the problem happened, no amount of shutdown/power up/reboot commands to the BMC would fix it (eventually it would start logging power control errors).
So we pulled the server out after an event, and sent it for RMA. It came back saying the techs couldn't reproduce the problem. And indeed, we powered it up on the bench, and it seemed fine.
Stuck it back in the rack at the next event, and it stopped working again.
At this point I was thinking this must be some kind of electrical issue caused by mechanical stress, so we tore it apart and jiggled all the cables and made sure all the connections were tight.
No dice.
This whole thing took several years, since we could only really work on the machine during events (and I kind of live halfway across the world). It just kept on limping on with that bug since we couldn't find time to dive deep into the issue.
At one point I started thinking... What's the difference between the server being in the rack and not? That all the cables are plugged in, particularly USB and Ethernet cables.
Could it be Wake-on-LAN? So I checked the WoL settings, but it was indeed switched off on all the Ethernet interfaces. And besides, we had two identical servers and only one had the issue. I sniffed the network looking for stuff that might pass as a WoL magic packet, but came up empty.
Still, I couldn't find another explanation, so I did the logical test. Unplugged the Ethernet cables, and tested it. It worked fine. Plugged the cables in. The problem reappeared.
Oooookay.
In particular, it was the 4 cables connected to the add-on PCIe network card.
So I swapped the cards on both servers and guess which other server started having buggy shutdowns!
Just in case, I tried upgrading the firmware on the card, but that didn't help.
At this point I'm starting to think about RMAing the card, but that would take time and it'd be hard to explain what the problem is. Buying another card would be an extra expense, and cause us to have different configurations on both servers (which is less desirable).
And then I thought... I'm never going to use this feature, ever. These are servers with BMCs, we can turn them on over IPMI. So this Ethernet card is sending broken/random wake signals to the PCIe slot when it has an Ethernet link? Okay.
I asked for some tape and scissors, pulled the server out again, took the card out, carefully cut out a small sliver of tape, and placed it over the WAKE# pin on the PCIe edge connector. Put it all back together and tested it again.
Problem fixed.
@marcan I had to mask out the SMBus pins on an older 4-port card so that I could use it in a more recent thin-client that I'm using as a firewall. Otherwise the machine gave a memory error during POST.
I just realized that one of the things that bugs me is people equating stereotypical Linux problems on Asahi with other platforms, because it's not the same, because we actually care.
Suspend not working, audio not working, display tearing issues, power management being bad... all those things are Linux on $random_platform support memes. And they're memes because they never get fixed, because nobody cares. Acer isn't fixing their ACPI to make suspend not break for you. Nobody is spending time getting your speakers on your OEM laptop sounding good. Intel sure aren't fixing their Linux drivers to not tear on my Ivy Bridge laptop. And good luck getting anyone to even think about debugging why your NVMe drive stays warm in suspend and kills battery life.
Yes, you're going to run into similar things in Asahi today, but the difference is we care. And we're going to fix them. And since we control all the drivers and the pre-Linux bootloaders and everything is open source and we know things can be made to work at least as well as macOS, we can fix them.
@marcan I have never had any of these issues with Linux, although in saying that I acknowledge that I have exclusively used Linux on my Framework laptop and my Valve Steam Deck, and whenever I use peripherals I make sure to use ones that are made for Mac or ChromeOS or are well known / advertised for their Linux support.
If you have a problem with trans people at all, please unfollow and go live on a deserted island, because modern computing and electronics wouldn't exist without them.
@marcan
I have a problem with the "because" 🥴
It sounds like you shouldn't have a problem with trans people because of their contribution. Whereas I believe (and I guess you do too) that you shouldn't have a problem with trans people regardless. #TransRightsAreHumanRights
@marcan There's a long list of famous names in that history. Off the top of my head: Lynn Conway (extremely influential work on VLSI) and Sophie Wilson (creator of the Arm ISA).
For those wondering why the hell we need all this safety system stuff for the speakers: because the speakers sound nice and loud and crisp, but only if you drive them well past the max "always safe" volume level. With current kernel settings, that level is at -14dBFS on the 14" M1 Pro MBP. That means that while your system will work without speakersafetyd (once this is all tested and enabled), the speakers will be much quieter.
This is especially true for the tweeters. You can hear that in the stream where I played I Won The Loudness War: during the dubstep parts of the song, the snares sound nice and crisp. At those points, the tweeters are probably putting out 2-4x the amount of power they could handle without melting - briefly. But then when the nasty clipped lead comes in, that overloads them a lot more and the safety daemon clamps down on the tweeter volume. After that part, you can hear them recover over a few seconds and the snares gradually come back.
Most music does not have ridiculous clipped leads like that song, but it very often does have loud snares and cymbals, and other high-frequency transients. Additionally, the tweeters are high-passed in hardware at 800 Hz, and most music does not have that much energy in the high end to begin with relative to the bass, but could. So if you want to set the overall max volume to a safe level, you have to assume the input is a 4000 Hz square wave or something ridiculous like that. And that's how you get that -14dBFS "dumb" level limit, which makes the speakers sound a lot quieter and worse, even though the vast majority of music played at 100% would never come close to needing that much reduction to be safe.
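For a rough sense of what that cap means (my own arithmetic, not from the original post): -14 dBFS is an amplitude ratio of 10^(-14/20) ≈ 0.2, i.e. about 10^(-14/10) ≈ 4% of full-scale power, which is why a static always-safe limit makes the speakers sound so much quieter.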
With a dynamic temperature/power limit model for the speakers, you can squeeze out a lot more of that headroom and still remain safe. And that gives you nice and punchy music without requiring harsh limiters or low volume caps to keep the speakers from melting.
And this is one reason why Mac speakers sound better and louder than most. Because most manufacturers don't bother to do this.
This post brought to you by gdb and grep -a, because after typing all that out as a quote toot and deciding that nah, I wanted it standalone, I clicked the "x" next to the quote box (which implies removing the quote association) and that didn't just cancel the quote, it deleted all the text.
So I attached gdb to the Firefox content process hosting this tab, took a core dump, and grepped it for the lost text. I wasn't about to write all that again from scratch.
LRT: I think this really illustrates just how dumb present AI systems are. They aren't reasoning or "thinking"; what they're doing is just learning to imitate the behavior they're trained on. They can produce outputs that look novel, but in the end it all boils down to a combination of the inputs they were trained on.
Effectively, AIs like Stable Diffusion and ChatGPT know how to extrapolate and interpolate from their training data. Sure, it looks cool, and it feels intelligent because they're riffing off of a corpus of material produced by actually intelligent humans. But give them a problem they haven't seen before, or for which the obvious "extrapolated" solution is just hilariously and obviously wrong, and they'll show you just how dumb they are. They also have no concept of logic or facts, so there is no expectation of accuracy - an AI won't tell you it doesn't know how to do something, it'll just make up some BS.
Another way to put it is that AI models are just fancy generalized (very) lossily compressed versions of their training inputs. Think about that next time the copyright implications of AI come up again.
> They aren't reasoning or "thinking"; what they're doing is just learning to imitate the behavior they're trained on. They can produce outputs that look novel, but in the end it all boils down to a combination of the inputs they were trained on
I am feeling extremely seen by this post. Sorry, I will just quietly retreat into my corner.
Hi (some) kernel folks: please stop trying to convince me (and the rest of the world) that email patches are better.
They aren't, and the rest of the world knows that already. We're tired, and countless people have been driven away from contributing thanks to this utterly broken process.
Before you manage to drive *me* and all my contributors away, I *will* start pushing alternate mechanisms for trees I am involved with. If you don't like it, tough luck. Complain to Linus, and make sure he knows that if he kicks me out, his shiny M2 MacBook no longer gets upstream support.
@marcan 'Complain to Linus, and make sure he knows if he kicks me out his shiny M2 MacBook no longer gets upstream support.' is just toxic, dude. I really wish you would stop with that kind of thing; my whole criticism of you before was that you use your social media presence to effectively attack kernel maintainers.
I thought maybe I was wrong. But you know, maybe I was right the first time.
@marcan Seriously, Review Board is as old as the hills (and came out of a solid company email review culture during our early days at VMware! you can probably still interact with it using email!) and was a dramatic improvement over a pure email process.
Did you know Asahi Linux is introducing support for some brand new Apple Silicon features faster than macOS?
- The M1 has a virtual GIC interrupt controller for enhanced virtualization performance. Linux supports it; macOS does not.
- The M2 introduced Nested Virtualization support. The patches for supporting that on Linux are in review; macOS still doesn't support it.
- The M2 introduced BTI, a hardware security mitigation. Fedora Asahi ships with it enabled; macOS does not. I'm also going to be working on adding that to Apple's WebKit/JavaScriptCore soon, because it's still missing (which breaks it on Fedora).
Reminder that if you ever see "media engine" on a die map, that person has no idea what they're doing 😂 ("Media engine" is a marketing term and not an actual single hardware thing)
"We are not political, fuck politics, Nazis are welcome"
Is this really the kind of community you want to foster for DLang, @WalterBright? Because what you're doing is not how you get an apolitical community, it's how you get a cesspool of highly political bigots and Nazis.
Time and time again this whole "no politics, we only care about the code" attitude is just a screaming dog whistle for bigots, Nazis, and abusers. This guy just said it out loud.
If you think it isn't and you haven't figured it out yet, you are not fit to lead a community.
@marcan I think it's simpler if you look at politics as the art of getting people with possibly big differences to work together in a project or a society or a country or the world. More people means more politics, but it's never zero once you're past the tribal cohesion limit.
"No politics" is just "my politics", since that's the only way to avoid talking about the implied politics. Which means "people like me, and not people unlike me", which is simply fascism.
@marcan ZJL 珠海杰理 (Zhuhai JieLi), possibly??
@marcan Which unit/machine is that IC in?
@marcan Wild, I can't find anything for any variation of that manufacturer logo...