
To see the development progress, check out its GitHub Project.

This project is composed of various pieces:

| Module | Purpose |
| --- | --- |
| Sound Detector | Discriminate and pick up specific sounds using AI. |
| Playback Distributor | Route the detected sound and bounce it back from multiple speakers in multiple locations. |
| Sound Player | Play any given sound when told to by the distributor. |
| Journal Web App | Visualize the sound occurrences. Useful for an objective overview, and potentially as evidence if the case gets to lawyers. |

This is a diagram of how they intercommunicate:

*Architecture Diagram*

If you are only interested in the provisioning and deployment of the architecture:

| Document | Explains How To... |
| --- | --- |
| Provisioning & Deployment | Flash a new card, plug it into a Raspberry Pi, provision it as master or slave, and deploy the project. |
| Raspberry Pi Sound Setup | Connect multiple Bluetooth speakers, create a combined sink out of several speakers, and set up the microphone. |

Further information on how to set up a development environment and how to work around the issues of getting container sound onto the host (be it a MacBook or a Raspberry Pi):

| Document | Explains How To... |
| --- | --- |
| Docker Container Sound | Offer host audio I/O to the container through PulseAudio's TCP interface. |
| Development Workflow | Develop the application container on a non-Pi host while keeping the LSP IDE features and still being able to use the microphone and speakers through PulseAudio. |

Other documentation:

| Document | Explains How To... |
| --- | --- |
| Stack Recipes | Run hello worlds, proofs of concept, and basic examples with the stack tools: InfluxDB, NFS and Ansible. |
| AI: Transfer Learning | Repurpose an already trained classification neural network to classify specific sounds. |
| Modes of Operation | Understand the various ways in which the inference (prediction) can be run. |