PixelAudio

PixelAudio is a Processing library that maps arrays of audio samples onto arrays of pixel values using space-filling curves such as a zigzag or a Hilbert curve. You can turn a 2D image into an audio signal, or turn a 1D signal (including live or recorded audio) into a 2D image. PixelAudio began as a color organ, where sine waves mapped to a Hilbert curve determined the pixel values (RGB colors) in a bitmap traversed by the curve. It later added a real-time sampling instrument that can be played by drawing lines. There's a brief video of the drawing/sampling instrument and other features here: https://vimeo.com/1031707765, and a longer example of the color organ here: https://vimeo.com/767814419. These features are part of the examples provided with PixelAudio, which has grown into a framework for blending images and sound through the mapping and transcoding of data and data formats.

Installing and Running PixelAudio

To start with, you'll need Processing installed and configured. If this is all new to you, go to the Download page for Processing and install it. Then check out the Environment documentation, with particular attention to setting the location of your Sketchbook folder. The path to the Sketchbook folder is typically something like "/Users/your_home_directory/Documents/Processing/". Once you have the path configured, navigate to the Sketchbook folder. It contains a number of folders, including one called "libraries."

To install PixelAudio, go to the Releases page and download the latest version of PixelAudio. Extract the files from the downloaded archive. You should end up with one folder, "PixelAudio". Move it into the "libraries" folder in your Sketchbook folder. That's all you need to do to install the PixelAudio library, or any other Processing library.
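Once the folder is in place, restart Processing and try a minimal smoke-test sketch. Only the package name, net.paulhertz.pixelaudio, is taken from the library here; nothing else about the API is assumed. If the import resolves and the sketch runs, the install worked.

```java
// Smoke test: confirms Processing can see the PixelAudio library.
// If the import below fails to compile, the library folder is not
// in the right place. See the bundled examples for real usage.
import net.paulhertz.pixelaudio.*;

void setup() {
  println("PixelAudio is installed and importable.");
  exit();
}
```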

PixelAudio has no dependencies on other libraries, but to run the examples that come with it you will need to install some additional libraries, which you can do from the Processing Sketch->Import Library...->Manage Libraries... menu command. This opens the Contribution Manager dialog. You will need to install the Minim library to use nearly all the sketches in the PixelAudio examples. Other libraries used in the examples are Video Export by Abe Pazos, oscP5 by Andreas Schlegel, and G4P by Peter Lager. I also recommend installing the Sound and Video libraries for Processing 4, both from the Processing Foundation.

Minim is the library I use for working with audio signals and audio files. I rely on Video Export to save animations to a video file. Video Export depends on ffmpeg. If you don't have ffmpeg installed, see the Video Export page or the official ffmpeg site for more information. macOS Silicon binaries can be found here. Instructions for installation on macOS with Homebrew, MacPorts, or manually can be found here. G4P is used only for the WaveSynthAnimation example. I use oscP5 in the AriaDemoApp to communicate over a network with the UDP protocol.

How PixelAudio Works

In PixelAudio's classes, 1D signals and 2D bitmaps are related to each other through lookup tables (LUTs) that map locations in the signal and bitmap arrays onto one another. You can think of the signal as tracing a path (the signal path) over the bitmap, visiting every pixel. The signalToImageLUT in PixelAudioMapper lists the position index in the bitmap of each pixel the signal visits. Similarly, the imageToSignalLUT tells you what position in the signal corresponds to a particular pixel. This makes it easy to click on the bitmap and play an audio sample corresponding exactly to the location you clicked, or to transcode an audio signal into RGB pixel values and display them in a bitmap.
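To illustrate the LUT idea, here is a standalone sketch of the two tables for a simple zigzag path over a small bitmap. This is the concept only, not PixelAudio's actual API: the variable names are mine, and PixelAudioMapper builds these tables for you.

```java
// Standalone illustration of the two LUTs for a zigzag signal path
// over a w x h bitmap. Pixel index = y * w + x, in row-major order.
int w = 4, h = 3;
int[] signalToImageLUT = new int[w * h];  // signal position -> pixel index
int[] imageToSignalLUT = new int[w * h];  // pixel index -> signal position

int pos = 0;
for (int y = 0; y < h; y++) {
  if (y % 2 == 0) {
    for (int x = 0; x < w; x++) signalToImageLUT[pos++] = y * w + x;       // left to right
  } else {
    for (int x = w - 1; x >= 0; x--) signalToImageLUT[pos++] = y * w + x;  // right to left
  }
}
// The second LUT is just the inverse of the first.
for (int i = 0; i < signalToImageLUT.length; i++) {
  imageToSignalLUT[signalToImageLUT[i]] = i;
}
// A mouse click at pixel (x, y) finds its audio sample at:
//   sampleIndex = imageToSignalLUT[y * w + x];
```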

The PixelAudioMapper class and the PixelMapGen class and its subclasses provide the core functionality of the library and are abundantly commented. PixelMapGen provides a lightweight framework for creating mappings between audio sample and pixel data arrays. PixelMapGen subclasses plug in to PixelAudioMapper, which can read and write pixel and audio data while remaining independent of the mappings and of the actual audio and image formats. The one restriction (at the moment) is that color is encoded in RGB or RGBA format and audio is encoded as floating-point values over the interval [-1.0, 1.0]. Audio values can exceed these limits during calculations, but should be normalized to the interval for playing audio or saving to a file. There are several methods for translating between RGB and HSB color spaces, but display and file output are confined to RGB/RGBA.
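To make the format restriction concrete, here is one plausible way to transcode a sample between the two ranges, using only Processing's built-in functions. The function names below are illustrative, not PixelAudioMapper's methods.

```java
// Transcode one audio sample in [-1.0, 1.0] to a grayscale RGB pixel,
// and back, using Processing's map(), constrain(), color(), and
// brightness(). Function names here are mine, for illustration only.
int audioToPixel(float sample) {
  int v = round(map(constrain(sample, -1.0, 1.0), -1.0, 1.0, 0, 255));
  return color(v, v, v);            // opaque gray in RGB format
}

float pixelToAudio(int rgb) {
  float v = brightness(rgb);        // 0..255 in the default RGB color mode
  return map(v, 0, 255, -1.0, 1.0);
}
```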

It should be relatively easy to write your own PixelMapGen child class and have it immediately available to play with through PixelAudioMapper's methods. PixelAudioMapper also provides many static methods for working with audio and pixel arrays. Other notable classes include WaveSynth, which uses WaveData objects for additive audio synthesis to generate a playable audio signal and an animated image in parallel. Some of the coding examples show how you can read and write JSON files of WaveSynth configurations. There is also a small but effective package of classes, net.paulhertz.pixelaudio.curves, for point reduction and curve modeling.
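The heart of any PixelMapGen subclass is the path it generates over the bitmap. As a conceptual sketch only (the class and method names below are hypothetical; consult PixelMapGen's comments for the actual abstract methods), a generator boils down to producing one (x, y) coordinate per signal position:

```java
import java.util.ArrayList;

// Conceptual sketch of what a generator computes: the signal path as an
// ordered list of pixel coordinates. The class and method names are
// hypothetical, not PixelMapGen's real API.
class RowMajorPathSketch {
  ArrayList<int[]> makeCoordinates(int width, int height) {
    ArrayList<int[]> coords = new ArrayList<int[]>();
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++) {
        coords.add(new int[] { x, y });  // visit pixels in reading order
      }
    }
    return coords;  // one (x, y) per position in the audio signal
  }
}
```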

The examples currently provide a survey of features in the PixelAudio library, particularly for mapping between audio signals and bitmaps, using JSON files for WaveSynth and PixelMapGen settings, capturing live audio, playing audio samples interactively, and mixing color channels. See the Examples README for descriptions of each example.

At this point (January 8, 2025) PixelAudio is at the beta-testing stage, functional but incomplete. You can download it as a Processing library, run the examples, and expect them to do interesting things. The base code is not likely to change much, but testing is not complete. I still have to complete work on the "peel" and "stamp" methods in PixelAudioMapper. The ArgosyArray and Lindenmeyer classes and their examples are nearly complete. Composer Christopher Walczak and I used these classes to produce animated color blocks and musical rhythms in Campos | Temporales (2023).

Release Notes

The first beta release of the PixelAudio library took place on November 9, 2024, at Experimental Sound Studio in Chicago, where I was the Spain-Chicago artist in residence. A new workshop and beta release followed in January 2025. I hope to get the publication release, version 1.0, out in March 2025, when Christopher Walczak and I present music and animation that uses PixelAudio in performance at the Outside the Box New Music Festival in Carbondale, Illinois.
