Managed to write out an initial draft of a prologue. With any luck, more bits will follow. Then, I'll have a core hypertext story. After that, I'll go back and add audio and video, as well as more side stories and lore.
40 posts total
Had a weird phoneme dream last night. Someone was trying to get me to pronounce their name, and they were sounding it out one piece at a time. One of these sounds was somehow in between an "L" and a "Th" sound, and the harmonics were very pronounced, like a Tuvan throat singer. My dream-brain kept wanting to label it "ï" for some reason, so it might have been a vowel too. Not convinced this is a valid phoneme (at least for human physiology).

@patchlore if th ⟨θ⟩/⟨ð⟩ is a voiceless/voiced dental fricative, and l ⟨l⟩ is a (voiced) lateral approximant, then maybe you've heard a lateral fricative? https://en.wikipedia.org/wiki/Voiceless_alveolar_lateral_fricative https://en.wikipedia.org/wiki/Voiced_alveolar_lateral_fricative (pages have sound samples)

Wrote up some notes on how I made "Twilight in the Mushroom Forest" for EB01, where I synthesized an entire forest from scratch. Someday I might turn this outline into a blog post. https://pbat.ch/brain/dz/gestlings/twilight_mushroom_forest/

Found this in a notebook, dated May 5th, 2022: "Birds with phonemes". I feel like I'm finally close to implementing this concept. I will note that Junior is pretty solidly "gently structured babbling", so at least I can tick that off the TODO list.

Fragment from one of the scores I've been developing for a Gestling. The top part is the text. The bottom part is the notation used to generate the corresponding sound for the text. The sounds are gibberish, but it is *structured* gibberish.

Funny things happen when you anthropomorphize your program. There's a bug in this speech synthesizer I'm working on that causes it to hiss at the end of a phrase. It sounds VERY pissed off and crazy.
It gives me a mild panic, like I've taken in a cute wild animal from the side of the road that's turned out to be quite feral.

@patchlore [non public message, only visible to tagged people] I think you forgot to change the second [visual] to audio after copy-pasting while writing the alt text. Cool audio generation, nice visuals.

You have died. Due to a clerical error, your consciousness wakes up on what it perceives as a moving train, very far from home, and even further from that thing you called "reality". The next stop? Cauldronia, a small celestial body that is the homeworld of the Gestlings. It's probably going to be a while before they correct this mistake, so you might as well take a look around and explore.

Getting back into procedurally generated Kufic-inspired symbols, this time 4-bit-high glyphs. This proof sheet generates a random handful of 6x4 Kufic bit patterns. The goal here is a way to auto-generate names (symbols) for things while rapid prototyping, which can also be displayed on the monome Grid. While these are all "balanced" according to the core Kufic rules, some of them read better than others as a cohesive "glyph" unit. I'll probably need to introduce more heuristics at some point, but I shouldn't get too sucked into it now. Too much to do.

Strange symbols demand strange input methods. I've been working out a chorded input system designed for an ortholinear 3x3 keypad, such as the one found on a standard number pad. Input gestures are broken up into shapes of size 3. I've begun curating some of these shapes and giving them names so they are easier to remember. In total, there are 84 possible combinations.

It's funny reading a PhD dissertation from a reputable place like Stanford and coming across verbiage that basically boils down to "I randomly tried this approach and it seems to work."
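As an aside, the kind of proof sheet described in the Kufic post above (a random handful of 6x4 bit patterns, filtered for "balance") can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the actual Gestlings code: in particular, the `is_balanced` check (equal counts of filled and empty cells) is a stand-in of my own, since the post doesn't spell out the core Kufic rules.

```python
import random

ROWS, COLS = 4, 6  # 4-bit-high glyphs, six columns per pattern

def random_pattern(rng):
    # One candidate glyph: each cell filled or empty at random.
    return [[rng.random() < 0.5 for _ in range(COLS)] for _ in range(ROWS)]

def is_balanced(grid):
    # Stand-in for the "core Kufic rules" (assumption): equal numbers
    # of filled and empty cells across the whole grid.
    filled = sum(cell for row in grid for cell in row)
    return filled == (ROWS * COLS) // 2

def proof_sheet(n, seed=0):
    # Rejection-sample until n balanced patterns are collected.
    rng = random.Random(seed)
    sheet = []
    while len(sheet) < n:
        g = random_pattern(rng)
        if is_balanced(g):
            sheet.append(g)
    return sheet

for glyph in proof_sheet(4):
    for row in glyph:
        print("".join("#" if c else "." for c in row))
    print()
```

Real square-Kufic constraints also govern stroke and gap widths, so a production version would add those as further predicates in the filter.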
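The count of 84 chord shapes mentioned in the keypad post above falls out of basic combinatorics: choosing 3 of the 9 keys on a 3x3 pad gives C(9, 3) = 84. A quick check (key numbering is mine, borrowed from a numpad layout):

```python
from itertools import combinations

# Keys on an ortholinear 3x3 pad, numbered 1..9 like a numpad.
KEYS = list(range(1, 10))

# Every unordered 3-key chord ("shape of size 3") on the pad.
chords = list(combinations(KEYS, 3))

print(len(chords))  # 84 distinct shapes: C(9, 3)
```

Enumerating the list this way is also a convenient starting point for curating and naming the shapes.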
The source code for Gestlings, my ongoing explorations in Gesture Synthesis and creating "Sounds With Faces", can be found in this read-only fossil export: https://git.sr.ht/~pbatch/gestlings mnolth is the underlying engine used to generate the sounds and visuals (also a fossil export):

It's finally happening: I've taken the first few actionable steps toward integrating sndkit with Uxn. The approach I'm taking is to treat sndkit as an external synthesizer chip that you talk to over a serial protocol. An instance of sndkit gets attached to an Uxn IO port, and then you can send it bytes to build up patches and render blocks of audio. I have a proof-of-concept serial protocol that works with the sndkit API. Writing an Uxn program to generate the bytes comes next.

@patchlore omg, it's happening!! Looking forward to seeing your experiments. I can see a few different directions you could take this, and I'm excited for all of the options.
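To make the "synthesizer chip behind a serial port" idea concrete, here is a toy byte-stream encoder. Every opcode and node id below is invented for illustration; the actual sndkit/Uxn protocol isn't described in the post, only the general shape (send bytes to an IO port, build up a patch, render audio).

```python
# Hypothetical opcodes for a byte-oriented patch protocol.
OP_NODE = 0x01   # push a node (e.g. an oscillator) by id
OP_ARG  = 0x02   # send a 16-bit argument, high byte first
OP_OUT  = 0x03   # route the patch to the output / render

def u16(x):
    # Split a 16-bit value into two bytes for an 8-bit port.
    return [(x >> 8) & 0xFF, x & 0xFF]

def sine(freq_hz):
    # Encode "make a sine at freq_hz" as a byte stream for the port.
    msg = [OP_NODE, 0x10]            # 0x10: made-up sine node id
    msg += [OP_ARG] + u16(freq_hz)   # frequency argument
    msg += [OP_OUT]                  # connect to output
    return bytes(msg)

print(sine(440).hex())
```

On the Uxn side, the same bytes would simply be written one at a time to whichever device port the sndkit instance is attached to.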