
👋 Hi, I'm Francisco Bernardo,

I build tools and research software for virtual analog modelling, acoustics, live coding, computational audio, interactive machine learning, and musical HCI.

I'm currently a Research Associate in Digital Musical Instrument Design at the Augmented Instrument Lab, part of the Dyson School of Design Engineering at Imperial College London.

Start here

Here are the best places to explore my work:

  • genam — Generative acoustic metamaterial design and optimisation pipeline and framework
  • sema-engine — Compiler and high-performance audio signal engine for live coding systems and modern web applications
  • sema — Web-based playground for live coding language design, real-time audio, music, and interactive machine learning

Current focus

I’m currently working on virtual analog modelling and simulation via impedance synthesis for reference analog audio circuits, running on ultra-low-latency embedded systems.

  • impedance-synthesis-stm32h7 — Programmable synthetic impedance running on STM32 for hybrid analog-digital guitar effects
  • dafx25 — DAFx25 conference paper and companion dataset for reproducible research
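As a rough illustration of the general idea (this is a hypothetical sketch, not the code in impedance-synthesis-stm32h7 or the method of the DAFx25 paper), a synthetic impedance can be viewed as a digital filter that maps a sampled current to a voltage. Here a series-RC impedance Z(s) = R + 1/(sC) is discretized with the bilinear transform and run sample by sample:

```python
# Hypothetical sketch: discretize a series-RC impedance
#   Z(s) = R + 1/(sC)
# with the bilinear transform s -> (2/T)(1 - z^-1)/(1 + z^-1), giving
#   V(z)/I(z) = (b0 + b1 z^-1) / (1 - z^-1),
# i.e. a first-order difference equation from current to voltage.

class SyntheticImpedance:
    """Maps a sampled current i[n] (amps) to a voltage v[n] (volts)."""

    def __init__(self, R: float, C: float, fs: float):
        T = 1.0 / fs
        k = T / (2.0 * C)      # bilinear-transform gain of the 1/(sC) term
        self.b0 = R + k
        self.b1 = k - R
        self.i_prev = 0.0
        self.v_prev = 0.0

    def tick(self, i: float) -> float:
        # v[n] = v[n-1] + b0*i[n] + b1*i[n-1]
        v = self.v_prev + self.b0 * i + self.b1 * self.i_prev
        self.i_prev, self.v_prev = i, v
        return v


# Drive the impedance with a 1 mA current step: the capacitive term
# integrates the current, so the voltage ramps by (T/C)*I per sample.
z = SyntheticImpedance(R=100.0, C=1e-6, fs=48_000.0)
voltages = [z.tick(1e-3) for _ in range(3)]
```

On a real embedded target this per-sample update would run inside the audio/ADC interrupt, with the filter coefficients chosen to emulate the reference circuit's impedance.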

What I work on

  • virtual analog modelling via impedance synthesis for analog audio effects
  • acoustic metamaterial design with simulation and optimisation
  • live coding systems for music and machine learning
  • interactive machine learning for musicians and audio developers
  • computational audio and browser-based DSP tools
  • human-computer interaction for sound, instruments, and creative technologies

Previous experience

  • 🔭 Previously, I worked at the Multi-Sensory-Devices Lab at University College London, where I developed acoustic metamaterials and ultrasonic phased arrays for futuristic haptic interfaces, and computational design tools to optimise them.

  • At the Experimental Music Technologies Lab at the University of Sussex, I developed web-based live coding environments for music and machine learning 🎶 🤖, such as Sema and the sema-engine.

  • My PhD in Computer Science from Goldsmiths, University of London, focused on interactive machine learning toolkit design for music technologists and audio developers.

  • Earlier, I worked as a software architect and engineer in industry, developing interactive digital signage and business intelligence solutions.

I use this GitHub profile to share tools, research code, paper companions, and experimental prototypes across audio, acoustics, machine learning, and interaction design.

Pinned repositories

  1. sema-engine — A Signal Engine for a Live Code Language Ecosystem (JavaScript)

  2. mimic-sussex/sema — Sema: A Playground for Live Coding Music and Machine Learning (Svelte)

  3. genam — Generative Acoustic Metamaterial: Design and Optimisation Pipeline (Jupyter Notebook)

  4. piston-model — A Python implementation of the piston model for ultrasonic phased arrays, following the "Theory of Focusing Radiators" (Jupyter Notebook)

  5. impedance-synthesis-stm32h7 — Programmable synthetic impedance for hybrid analog-digital effects on STM32 (C)

  6. dafx25 — Companion repository to the DAFx25 paper "Impedance Synthesis for Hybrid Analog-Digital Audio Effects" (Jupyter Notebook)