41 posts total
patchlore

Just finished making a very small gesture synthesis example in Konilo! [0]

The gesture sequence is written in Uxntal, metaprogrammed from Konilo. The synthesis patch makes underlying calls to the sndkit API to build up a sound. A gesture-synthesizer-generator (GSG) reads from the "hello" uxn subroutine created above and synthesizes a gesture controlling frequency.

0: git.sr.ht/~pbatch/orphium/tree
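To make the idea concrete, here is a toy sketch of a "gesture controlling frequency": a control signal made of linearly interpolated segments drives a sine oscillator's pitch. None of these names or structures come from Konilo, sndkit, or the linked repo; this only illustrates the general concept under assumed definitions.

```python
import math

SR = 44100  # assumed sample rate

def gesture(segments, start):
    """Yield one control value per sample, ramping linearly toward each target.
    A 'gesture' here is just a list of (target_value, duration_seconds) pairs;
    this representation is an illustrative assumption."""
    value = start
    for target, dur in segments:
        n = int(dur * SR)
        for i in range(n):
            yield value + (target - value) * (i / n)
        value = target

def render(segments, start=220.0):
    """Drive a sine oscillator's frequency with the gesture; return samples."""
    phase = 0.0
    out = []
    for freq in gesture(segments, start):
        out.append(math.sin(2 * math.pi * phase))
        phase = (phase + freq / SR) % 1.0
    return out

# Glide from 220 Hz up to 440 Hz over half a second, then down to 330 Hz.
samples = render([(440.0, 0.5), (330.0, 0.25)])
```

The point is the separation of concerns: the gesture is pure control data, and the synthesizer just consumes one control value per sample.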

paul

I figured out how to add my own sigils in Konilo, so now something like "%0.5" will push a constant value of 0.5 to the sndkit stack (I send it as a string, then internally it gets converted to a float value). This allows me to pass around floating point params even though konilo doesn't support them.
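The sigil trick above can be modeled in a few lines: a leading "%" marks a constant, the rest of the token is converted from string to float and pushed onto a parameter stack. The "%" sigil and the stack are from the post; the dispatch function itself is an illustrative assumption, not Konilo internals.

```python
# Simulated parameter stack (stands in for the sndkit stack).
stack = []

def handle(token):
    """Dispatch a token: '%'-prefixed tokens become float constants."""
    if token.startswith("%"):
        stack.append(float(token[1:]))  # string -> float at the boundary
    else:
        raise ValueError(f"unknown token: {token}")

handle("%0.5")
handle("%440")
# stack now holds [0.5, 440.0]
```

Keeping floats out of the host language and converting only at the boundary is what lets a float-free language like Konilo still pass floating-point parameters around.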

patchlore

Managed to write out an initial draft of a prologue. With any luck, more bits will follow. Then, I'll have a core hypertext story. After that, I'll go back and add audio and video, as well as more side stories and lore.

#gestlings

patchlore

Had a weird phoneme dream last night. Someone was trying to get me to pronounce their name, sounding it out one piece at a time. One of the sounds was somehow in between an "L" and a "Th", with harmonics as pronounced as a Tuvan throat singer's. My dream-brain kept wanting to randomly label it "ï" for some reason, so it might have been a vowel too. Not convinced this is a valid phoneme (at least for human physiology).

Nikita Karamov

@patchlore if th ⟨θ⟩/⟨ð⟩ is a voiceless/voiced dental fricative, and l ⟨l⟩ is a (voiced) lateral approximant, then maybe you've heard a lateral fricative?

en.wikipedia.org/wiki/Voiceles

en.wikipedia.org/wiki/Voiced_a

(pages have sound samples)

patchlore

Wrote some notes up on how I made "Twilight in the Mushroom Forest" for EB01, where I synthesized an entire forest from scratch. Someday I might turn this outline into a blog post.

pbat.ch/brain/dz/gestlings/twi

patchlore

Found this in a notebook, dated May 5th, 2022.

"Birds with phonemes".

I feel like I'm close to implementing this concept, finally.

I will note that Junior is pretty solidly "gently structured babbling", so at least I can tick that off the TODO list.

patchlore

Fragment from one of the scores I've been developing for a Gestling.

The top part is the text. The bottom part is notation used to generate the corresponding sound for the text. The sounds are gibberish, but it is *structured* gibberish.

#gestlings

patchlore

Funny things happen when you anthropomorphize your program.

There's a bug in the speech synthesizer I'm working on that causes it to hiss at the end of a phrase. It sounds VERY pissed off and crazy. It gives me a mild panic, like I've taken in a cute wild animal from the side of the road that's actually turned out to be quite feral.

OliverUv

@patchlore [non public message, only visible to tagged people]

I think you forgot to change the second [visual] to audio after copy pasting while writing the alt text

Cool audio generation, nice visual

patchlore

You have died. Due to a clerical error, your consciousness wakes up on what it perceives as a moving train, very far from home, and even further from that thing you called "reality". The next stop? Cauldronia, a small celestial body that is the homeworld of the Gestlings. It's probably going to be a while before they correct this mistake, so you might as well take a look around and explore.

pbat.ch/blog/posts/2023-07-30-

patchlore

Getting back into procedurally generated kufic-inspired symbols, this time with glyphs 4 bits high. This proof sheet generates a random handful of 6x4 kufic bit patterns.

The usage here is to develop a way to auto-generate names (symbols) for things during rapid prototyping, which can also be displayed on the monome Grid.
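As a rough sketch of what a proof sheet generator could look like: produce random 4x6 bit patterns and render them as text. The 50/50 filled-to-empty "balance" constraint below is only a guess at what "balanced" means here, not the actual kufic rules used in the post.

```python
import random

ROWS, COLS = 4, 6  # 4 bits high, 6 wide, as in the post

def random_glyph(rng):
    """Random 4x6 bit pattern with equal filled and empty cells
    (the balance heuristic is an assumption, not the real rule)."""
    cells = [1] * (ROWS * COLS // 2) + [0] * (ROWS * COLS // 2)
    rng.shuffle(cells)
    return [cells[r * COLS:(r + 1) * COLS] for r in range(ROWS)]

def show(glyph):
    """Render a glyph as filled/empty blocks for a text proof sheet."""
    return "\n".join("".join("#" if c else "." for c in row) for row in glyph)

rng = random.Random(1)  # seeded for reproducibility
sheet = [random_glyph(rng) for _ in range(8)]
print("\n\n".join(show(g) for g in sheet))
```

A seeded RNG makes the proof sheet reproducible, which helps when curating which patterns read as cohesive glyphs.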

patchlore

While these are all "balanced" according to the core kufic rules, some work better than others when interpreted as a cohesive "glyph" unit. I'll probably need to introduce more heuristics at some point, but I shouldn't get too sucked into it now. Too much to do.

patchlore

Strange symbols demand strange input methods.

I've been working out a chorded input system designed to work on an ortholinear 3x3 keypad, such as the one found on a standard number pad.

Input gestures are broken up into shapes of size 3. I've begun curating some of these shapes and giving them names so they are easier to remember.

In total, there are 84 possible combinations.
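The count checks out: choosing 3 of the 9 keys gives C(9, 3) = 84 shapes. A quick sketch (the example shape names below are invented for illustration, not the curated names mentioned in the post):

```python
from itertools import combinations

# Keys on the 3x3 pad, numbered 0..8 row by row:
#   0 1 2
#   3 4 5
#   6 7 8
KEYS = range(9)

# A "shape" is a chord of three distinct keys.
shapes = list(combinations(KEYS, 3))

# Hypothetical examples of nameable shapes:
row_top = (0, 1, 2)    # a horizontal bar across the top
diagonal = (0, 4, 8)   # the main diagonal
corner = (0, 1, 3)     # an L in the top-left corner
```

Since 9 choose 3 = (9 * 8 * 7) / (3 * 2 * 1) = 84, the shape count in the post matches exactly.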

patchlore

It's funny reading a PhD dissertation from a reputable place like Stanford and coming across verbiage that basically boils down to "I randomly tried this approach and it seems to work."

patchlore

The source code for Gestlings, my ongoing explorations in Gesture Synthesis and creating "Sounds With Faces", can be found in this read-only fossil export here:

git.sr.ht/~pbatch/gestlings

mnolth is the underlying engine used to generate the sounds and visuals (also a fossil export):

git.sr.ht/~pbatch/mnolth

patchlore

I'm beginning to think more about notation, similar to what I was doing back in October/November.

This line noise is actually valid syntax. It has a parser and everything, and is a low-level DSL for producing a Gesture Path[0].

While I can display and parse these sorts of symbols, I don't have a great way of inputting them. That, along with notation systems for higher-level DSLs[1][2], comes next...

0: pbat.ch/gestlings/path/
1: pbat.ch/gestlings/nrt/
2: pbat.ch/gestlings/seq/


patchlore

It's finally happening: I've taken the first few actionable steps toward integrating sndkit with Uxn.

The approach I am taking is to treat sndkit as an external synthesizer chip that you talk to over a serial protocol. An instance of sndkit gets attached to a uxn IO port, and then you can send bytes to it to build up patches and render blocks of audio.

I have a proof of concept serial protocol that works with the sndkit API. Writing an Uxn program to generate the bytes comes next.
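To illustrate the general shape of such a byte protocol: each command is an opcode followed by its arguments, and a client (here, the role the Uxn program would play) just concatenates encoded commands into a stream. The opcodes and framing below are entirely made up; the actual proof-of-concept protocol is not documented here.

```python
import struct

# Hypothetical opcodes for a sndkit-as-serial-chip protocol (invented for
# illustration; not the real protocol from the post).
OP_PUSH_FLOAT = 0x01   # push a float constant onto the patch stack
OP_NODE       = 0x02   # instantiate a node by id
OP_RENDER     = 0x03   # render a block of audio

def push_float(x):
    """Encode: opcode byte + 32-bit little-endian float."""
    return bytes([OP_PUSH_FLOAT]) + struct.pack("<f", x)

def node(node_id):
    """Encode: opcode byte + one-byte node id."""
    return bytes([OP_NODE, node_id])

def render_block():
    return bytes([OP_RENDER])

# "Push 440.0, feed it to node 7, render a block" as a byte stream:
msg = push_float(440.0) + node(7) + render_block()
```

Framing the synth as a chip behind a byte stream keeps the Uxn side dumb: it only has to emit bytes, and all patch-building state lives on the sndkit side.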

Devil Lu Linvega

@patchlore omg, it's happening!! Looking forward to seeing your experiments. I can see a few different directions you can take this, and I'm excited for all of the options.
