Now that smartphones have commoditized precision MEMS sensors, the stage is set to reimagine these sensor clusters as something else entirely. That's exactly what [Chronopoulos] has done with quadrant, which repurposes four proximity sensors into a custom gesture-input device for generating sound. The result is a reusable human-computer interface that reliably detects gestures and turns them into music.
At its core, quadrant is a human interface device built around an STM32F0 and four VL6180X time-of-flight proximity sensors. The idea is to stream distance measurements off the device as fast as possible and turn them into musical interactions on a PC. Since each distance measurement takes some time to complete, [Chronopoulos] pipelines reads across the array and streams the data to the PC over USB at 30 Hz.
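On the host side, consuming such a stream mostly comes down to parsing each frame of four distances. Here is a minimal sketch, assuming the device sends ASCII frames of four millimetre readings per line and uses 255 as an out-of-range marker (the VL6180X's maximum raw range value); the actual quadrant protocol may differ.

```python
def parse_frame(line):
    """Parse one streamed frame like '120 87 255 45' into four distances
    in millimetres, mapping the assumed out-of-range value (255) to None."""
    values = [int(v) for v in line.split()]
    if len(values) != 4:
        raise ValueError("expected four sensor readings per frame")
    return [None if v == 255 else v for v in values]
```

At 30 Hz a frame arrives roughly every 33 ms, so even a simple line-at-a-time reader keeps up comfortably.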
With the data collected on the PC side, a wide range of interactions become possible. Want a laser harp? No problem: [Chronopoulos] shows how to "pluck" virtual strings. How about an angle sensor? Just reach over the array and tilt your hand to change the angle. Finally, the four sensors also let you detect sweeping gestures across the array, like a side-to-side swish of the hand. To see these interactions in action, jump to 2:15 in the video demonstration after the break.
If you want to dig into the inner workings of this project, [Chronopoulos] has posted the firmware, schematics, and layout files on GitHub under an MIT license. He's even published a paper [PDF] detailing the math behind the gesture detection. Finally, if you'd rather skip straight to making your own music, you can also find one on Tindie.
These days, MEMS sensors are enjoying a wonderful second life outside of our phones, and this project is yet another example of the wealth of new project ideas they enable. For more MEMS-sensor-based projects, have a look at this self-balancing robot and this wand.