Just donated $100 to the amazing Organic Maps project: Trackerless, fully offline, OSM based, open source mapping for Android/iOS. Great UX & routing, super fast & good coverage even of hiking paths, elevation lines & public transport layers...
From their website: "Organic Maps is one of a few applications nowadays that supports 100% of features without an active Internet connection. Install Organic Maps, download maps, throw away your SIM card (by the way, your operator constantly tracks you), and go for a weeklong trip on a single battery charge without any byte sent to the network."
The past few days I've been thinking a lot again about one of the thought/design models most influential on my own #OpenSource practice: Frank Duffy's architectural pace layers (and Stewart Brand's subsequent extension to different contexts), their different timescales and interactions as basis for resilient system design:
1. Each layer exists & operates independently and moves at its own timescale (from seconds to millennia and beyond)
2. Each layer only interacts with, and is only influenced by, its direct neighbors
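The two rules above can be sketched as a toy simulation (purely illustrative — Brand's layer names, made-up update rates, nothing more):

```python
# Toy sketch of pace layering: each layer advances at its own rate
# and exchanges state only with its direct neighbors, per the two
# rules above. Layer names from Brand; rates are invented.

LAYERS = ["fashion", "commerce", "infrastructure", "governance", "culture", "nature"]
RATES = [1, 2, 4, 8, 16, 32]  # update period per layer (fast -> slow)

def step(state, t):
    """Advance each layer that is due at tick t, nudged only by its neighbors."""
    nxt = list(state)
    for i, period in enumerate(RATES):
        if t % period == 0:
            neighbors = state[max(0, i - 1):i + 2]  # self + direct neighbors only
            nxt[i] = sum(neighbors) / len(neighbors)
    return nxt

state = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
for t in range(1, 65):
    state = step(state, t)
# fast layers churn every tick; slow layers integrate change rarely
```

The point of the sketch: perturb the fast end and the slow end only ever sees a heavily averaged, time-delayed version of it — which is exactly why the slow layers provide stability and the fast ones provide innovation.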
I always found that model super helpful for analyzing and deciding how to deal with individual projects and components in terms of focus/effort, and asking myself which layer this thing might/should be part of. Lately though, I keep on trying to figure out how to better utilize that model to orient my own future practice, also with respect to the loose theme of #PermaComputing and how to frame and better organize my own approaches to it, incl. how to reanimate or repurpose some of the related, discontinued, but not invalid research & projects I've been doing along these lines over the past 15 years...
I understand and appreciate most of the focus on #FrugalComputing & #RetroComputing-derived simplicity as starting points and grounding concepts for attempting to build a more sustainable, personal, comprehensible and maintainable tech. But these too can quickly become overly dogmatic and maybe too constraining to ever become "truly" permanent (at least on the horizon of a few decades). I think the biggest hurdles to overcome are social rather than technological (e.g. a need for post-consumerist, post-spectacular behaviors). So I'm even more interested in Illich/Papert/Nelson/Felsenstein-inspired #ConvivialComputing, #SocialComputing, IO/comms/p2p, #Accessibility, UI, protocol and other resiliency design aspects becoming a core part of that research, and think the idea of pace layering can be a very powerful tool to take into consideration here too, at the very least for guiding (and questioning) how to approach and structure any perma-computing related research itself...
Given the current physical and political climate shifts, is it better to continue working "upwards" (aka #BottomUpDesign), i.e. primarily focusing on first defining slow moving, low-level layers as new/alternative foundations (an example here would be the flurry of VM projects, incl. my own)? Or, is it more fruitful and does the situation instead call for a more urgent focus on fast-moving pace layer experiments and continuously accumulating learnings as fallout/sediment to allow the formation of increasingly more stable, but also more holistically informed, slower moving structural layers to build upon further?
It's a bit of chicken vs. egg! In my mind, right now the best approach seems to be a two-pronged one, alternating from both sides, each time informing upcoming work/experiments on the opposite end (fast/slow) and each time involving an as diverse as possible set of non-techbro minds from various fields... 🤔
@toxi I often see people who install solar panels on their apartment, thinking that this emulation of limits can do good, but the underlying structure of their context doesn't need this, and this sense of resilience might inadvertently do more harm than good if the grid actually goes. On your question of whether to go "upward/downward", I think it should be more "toward" meeting the limits of your situation or context. :)
@toxi Learning what the actual needs are before building any "slower moving structural layers" seems like the more practical way to me. As opposed to spending resources on things nobody knows whether they end up being used much.
Just remembered that I've got a linkhut account (best/closest thing since/to del.icio.us!) and have collected a list of further reading materials for the 3-day "Limits-aware creative computing" workshop I started teaching earlier this week. More will be added over coming days. Sharing here for others interested:
It's such a joy to watch! Don't know if it has more to do with moving to Mastodon and following a quite different crowd of people, but even before that, for the past few years I started sensing a (re)growing interest in #Forth-style languages and related philosophies, i.e. low-fat, low-energy, minimalism, self-sufficiency, interactive programming, dynamic systems, DSLs, VMs, emulation etc. Renewed interest in all this also seems to come from people with vastly different backgrounds & ages. Forth being one of the most underappreciated langs ever, 10 years ago it felt (subjective experience, no proof!) most active Forthers were either a) #RetroComputing people who'd been using the language for decades and/or b) people working w/ severely resource-constrained embedded devices (often an overlap). These groups still exist ofc, but I wouldn't be very surprised to learn if the demographics taking an active interest (above and beyond doing some toy examples) have started shifting noticeably.
Back in 2015, the ForthHub org on GitHub had a dozen or so members, today it's ~240. /r/Forth also has 3k members now, which isn't too shabby (even though a mere blip in the greater picture, but still...)
Is some of this due to more people getting fed up with the heaviness of existing mainstream langs & tooling, looking for lighter alternatives? I don't know. Maybe more people are finding their way to functional programming (incl. getting pre-exposed to REPLs and their life-changing impact/discovery of interactive dev processes) and from there taking the smaller step to explore concatenative/stack-based langs...? 🤷‍♂️
Enough hypothesizing — it's just all _very_ exciting to watch how this is developing and hopefully new ideas & learnings from other langs/envs finding their way into Forth-lands!
“…Forth does it differently. There is no syntax, no redundancy, no typing. There are no errors that can be detected. …there are no parentheses. No indentation. No hooks, no compatibility. …No files. No operating system.” — Chuck Moore
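For anyone wondering what "no syntax" means in practice: a concatenative language really is just words consuming & producing values on a shared stack. A minimal sketch in Python (illustrative only — a handful of invented words, nothing to do with any real Forth implementation):

```python
# Minimal concatenative evaluator: whitespace-separated words,
# numbers push themselves, everything else is a stack-effect function.
# No syntax tree, no precedence, no parentheses.

def forth(src, words=None):
    stack = []
    words = words or {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),
        "swap": lambda s: s.extend([s.pop(), s.pop()]),
        "drop": lambda s: s.pop(),
    }
    for token in src.split():
        if token in words:
            words[token](stack)  # apply word to the stack
        else:
            stack.append(float(token))  # literals push themselves
    return stack

# "3 4 + dup *" computes (3 + 4)^2 — no AST was harmed in the process
```

That the whole "parser" is `src.split()` is arguably the entire point of Chuck Moore's quote.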
Ps2. Shameless plug, if you want to try out a Forth in the browser, check out my VM/REPL (incl. screen casts, Forth shaders/webaudio examples): https://thi.ng/charlie
#genuary2023 Art Deco. In 2013/4 I (over)worked on one of the largest and most exhausting #GenerativeArt projects of my life: Co(de)factory was an interactive installation piece for the Digital Revolutions/DevArt exhibition, commissioned by Google & The Barbican Centre London. The centerpiece was a DIY 3D resin printer used to publicly fabricate objects designed by visitors via a custom WebGL-based visual programming environment (which was a pain to get running on Chrome on the Nexus tablets embedded in the plinths). This design tool (written in the back-then-still-premature #ClojureScript) was based around my https://thi.ng/morphogen DSL, which defines 8 basic tree operators (e.g. reflection, subdivision, skewing, tapering etc.) to generate complex geometries via recursive transformations of a single arbitrary seed box. Not going to talk much more about the project here (maybe another time), other than to say the large 3D printed canopy structure/sculpture (3 meters tall, 2.4m diameter) surrounding the printer, as well as the cladding for the plinths, were all created from hundreds of small modules designed with the Morphogen DSL and used the "golden era" of 1920s American Art Deco as main inspiration... You can find the entire source code for all components (incl. the design tool, server backend, fabrication files etc.) on GitHub:
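For a rough idea of how such a seed-box DSL works — a hypothetical 2D mini version for illustration, NOT the actual thi.ng/morphogen API: each operator maps one box to child boxes, and recursing over composed operators grows a single seed into many modules.

```python
# Hypothetical mini Morphogen-style operator tree (2D, invented names):
# a box is (x, y, w, h); operators map one box to a list of child
# boxes; recursion turns a single seed box into many leaf modules.

def subdivide_x(box, n):
    """Split a box into n equal slices along the x axis."""
    x, y, w, h = box
    return [(x + i * w / n, y, w / n, h) for i in range(n)]

def inset(box, d):
    """Shrink a box by margin d on all sides."""
    x, y, w, h = box
    return [(x + d, y + d, w - 2 * d, h - 2 * d)]

def generate(box, depth):
    """Recursively apply subdivide + inset to a single seed box."""
    if depth == 0:
        return [box]
    out = []
    for child in subdivide_x(box, 2):
        for shrunk in inset(child, 1):
            out.extend(generate(shrunk, depth - 1))
    return out

leaves = generate((0, 0, 100, 100), 3)  # 1 seed -> 2^3 = 8 leaf boxes
```

Swap in reflection, skewing, tapering etc. as further operators and the same recursion produces the kind of module families used for the canopy & plinth cladding.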