Found the RedEye Mini dongle for iPhone. It's advertised at US$49 on their website. Tried to buy it, but they don't ship to Oz. Seems they've given distributor rights to some tin-pot company in Oz who want AU$95 for it, with the exchange rate almost at parity!
So I checked Amazon and found one shop selling it for US$38, but they wanted $35 for postage! Eventually found Amazon's own offer, which was the same price but only $9 for postage. So it looks like I will get the dongle for the equivalent of US$49, and the Oz sellers can go complain to the govt. about how they're losing sales to the Internet because no GST makes overseas sellers cheaper (yeah, it's definitely the missing 10% GST that makes them almost 100% cheaper).
Not sure if this dongle will be as open to program as the L5, but they might make an exception for the VTV. The use of the audio socket...
Oh blast, the use of the audio socket means it will probably disable the microphones in the iPhone. Screwed whichever way I turn: use the L5 and lose the charging port, or use the RedEye and lose voice input. Well, at least the RedEye will make a good manual TV controller.
Saturday, January 15, 2011
VTV
I might have the order of the apps around the wrong way here. Maybe I ought to be hacking the OpenEars sample so that, in addition to displaying the words it recognises, it also outputs the corresponding IR codes to the IR dongle. That's actually a lot easier to implement (I think): no need for the IR-code-learning section or the GUI section. (I've sketched the glue after the steps below.)
So devel steps:
- Test VTV app with VTV vocab (clone OE sample app). (Especially test headset input.)
- Use the L5 app to learn the hex codes for the DTV controller and upload them to the L5 hex-codes database.
- Install the hex codes in VTV app.
- Add a "dumb" controller which toggles "on" and "off" when each time voice input is detected. Can test this with my DTV tuner controller (or maybe even Apple FrontRow controller?).
- Add IR output for all the vocab.
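To make the glue concrete, here's a minimal sketch in plain C of the word-to-IR dispatch I have in mind. Everything here is a placeholder: the command table, the Pronto strings and the send_ir() call all stand in for whatever the dongle API actually provides, and the real version would live in the OpenEars recognition callback in Objective-C.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical word -> IR-code table; the Pronto strings are placeholders. */
    struct command { const char *word; const char *pronto_code; };

    static const struct command commands[] = {
        { "TURN ON",     "0000 006D ..." },
        { "TURN OFF",    "0000 006D ..." },
        { "VOLUME UP",   "0000 006D ..." },
        { "VOLUME DOWN", "0000 006D ..." },
        { "MUTE",        "0000 006D ..." },
    };

    /* Called with each phrase the recogniser reports. */
    static void on_phrase_recognised(const char *hyp)
    {
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
            if (strcmp(hyp, commands[i].word) == 0) {
                printf("would send IR for \"%s\"\n", hyp);
                /* send_ir(commands[i].pronto_code);  <- hypothetical dongle call */
                return;
            }
        }
        /* Anything else: ignore rather than guess. */
    }

    int main(void)
    {
        on_phrase_recognised("VOLUME UP");   /* simulate a recognition event */
        return 0;
    }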
VoiceTV
A little bit later...
Have finally got around to working a bit on the project. Have downloaded, compiled and run the L5 sample app on both the emulator and my iPhone. Have downloaded, compiled and run the OpenEars library and sample app on both the emulator and my iPhone. OpenEars is an Objective-C wrapper around CMU Sphinx.
It looks like the restricted vocab for the controller will allow Sphinx to recognise words pretty accurately. However, I still need to generate and test a "TV controller" word list to replace the sample app's "mobile toy controller" list.
The L5 now uses the Philips Pronto hex code format ("We are using a modified subset of the Pronto IR Format (http://www.remotecentral.com/features/irdisp1.htm)."). This would be fine except the RemoteCentral library is seriously out of date with respect to Sharp Aquos TVs. It looks like I will have to capture the codes manually from a controller.
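For my own notes, a raw Pronto code is just a string of 4-digit hex words: word 0 is the format (0000 = learned/raw), word 1 encodes the carrier frequency, words 2 and 3 are the burst-pair counts for the one-shot and repeat sequences, and the rest are on/off durations in carrier cycles. A quick C sanity check of the carrier frequency, using the conversion constant from the RemoteCentral write-up:

    #include <stdio.h>

    /* Word 1 of a raw Pronto code gives the carrier:
       f = 1,000,000 / (N * 0.241246) Hz (constant per the RemoteCentral article). */
    int main(void)
    {
        unsigned freq_word = 0x006D;   /* a typical value for a ~38 kHz carrier */
        double carrier_hz = 1000000.0 / (freq_word * 0.241246);
        printf("carrier = %.0f Hz\n", carrier_hz);   /* prints ~38029 Hz */
        return 0;
    }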
So all I have to do now is find a way to fool the L5 sample app into thinking it's getting its input from its buttons when in fact it's really getting its input from OpenEars-recognised words.
Saturday, August 7, 2010
iPhone app for voice-controlled TV
I was looking to do something useful during my extended sabbatical "between jobs" last year, so I contacted ability.org.au to see if I could assist with any projects. Their director, Graeme, suggested an iPhone app to allow voice control of a TV. He noted the relatively low cost of iPhones (compared to typical assistive devices), their ubiquity, the availability of IR dongles for iPhones (nearly all mobile phones prior to the iPhone included IR capability; how ironic), and the availability of voice-recognition software for the iPhone (specifically Dragon Dictation, Google's voice search and the voice control utility in iOS). So now seemed like a good time to try this. It might even be possible to get some research funding to at least cover costs. (And to be honest, if it works it might be saleable on the App Store for general users, not simply those who can't accurately hit the buttons on a typical TV controller.)
Graeme suggested I look at the L5 Remote dongle for iPhone because they have recently open-sourced the API. I downloaded this package and it does indeed seem usable. The L5 costs US$49.95 (+ S&H), about AU$70. Another dongle I looked at is the My TV Remote (originality in naming doesn't seem to be part of the plan). It sells for US$9.99 and plugs into the iPhone audio socket (the L5 plugs into the docking connector). I emailed the company, and although they don't sell to Australia yet, they are considering it. And they are willing to send me their API if I want it. The MTR's use of the audio socket might make using the headset mike difficult (probably use Bluetooth to bypass it). OTOH the L5 uses the docking connector, which might make charging and/or long-term use difficult. (Couldn't find a charging double adapter for the dock.) I purchased an L5 and it arrived within a week, but it's just sat on my desk while work and life have intervened :-)
My initial hope was to use the recogniser in Google's voice search, but so far I haven't been able to find an open-source API for it. Dragon Dictation is proprietary and requires licensing, which I'd rather avoid if possible.
Last week I discovered the CMU Sphinx project (http://cmusphinx.sourceforge.net/), an open-source voice recognition project. Brian King has made an iPhone Objective-C wrapper available for the pocketsphinx library (http://github.com/KingOfBrian/VocalKit) so I'm currently trying to learn how to use Sphinx.
The project, as I see it, requires the following:
1) A recent Xcode and iOS 4 SDK installed on my MacBook
2) The pocketsphinx library added to Xcode's static lib list
3) The L5 lib added to Xcode's static lib list
4) Some glue code to output IR codes when one of a small list of command words is recognised.
Each of the above steps is a project in itself:
1a) Renew developer subscription with Apple
1b) Download latest Xcode and iOS 4 SDK
1c) Install
1d) Install/update app signature certificate
1e) Write a test app, compile and test on iPhone emulator
1f) Install and run on iPhone
2a) Download Brian King's iPhone wrapper
2b) Install in Xcode as per the README
2c) Write and test a "hello world" app
2ca) Do I need a special dictionary or is the default dictionary adequate? (A first stab at a grammar is sketched after this list.)
2cb) Does Sphinx need training for an Australian accent?
2cc) Should I test Sphinx on the MacBook first to answer these questions? (Probably yes.)
2d) Modify pocketsphinx output if necessary to ease connecting to the L5
3a) Download and install L5 Remote app on iPhone
3b) Load the app with a test controller's IR codes. (Can use the digital tuner remote controller for sampling and testing.)
3c) Verify the app works.
3d) Download and install the L5 API in Xcode
3e) Write and test a "Hello world" app
3ea) Specify what a "Hello world" app should do.
3eb) Write, test, debug on iPhone. (Can't use the emulator for extra hardware like the L5.)
4a) Design control program
4aa) GUI is almost non-existent.
4ab) Functionality to copy existing L5 app controls
4b) Code and test on iPhone
4c) Repeat 4a) and 4b) until working :-)
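Re 2ca): rather than a full dictionary, a tiny fixed grammar should make recognition much more reliable. Pocketsphinx can take a JSGF grammar; something like the sketch below, where the command set is just my first guess (each word will still need a pronunciation entry in the dictionary file):

    #JSGF V1.0;
    grammar tvcontrol;

    public <command> = <power> | <volume> | <channel> | mute;

    <power>   = turn ( on | off );
    <volume>  = volume ( up | down );
    <channel> = channel ( up | down | <digit> );
    <digit>   = one | two | three | four | five | six | seven | eight | nine | zero;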
OK, so today we are up to 1e): write and install a test app on my 3GS with iOS 4.0.1.
More later.
Tuesday, March 3, 2009
My first SMT build
I saw a PIC Logic Tester kit being offered by Jaycar which uses SMDs. I built a TTL/CMOS probe in my previous life as a hardware designer in the 70s but hadn't kept up with the newer, lower-power technologies. I need a logic probe, so this was a good start.
Kit arrived this afternoon and I nearly died when I saw how small the components really are. But under the magnifying lamp I could read their values or markings and the PCB looked a lot clearer. So I fired up my new soldering iron and commenced work.
Six hours later it's finished. Lots of problems encountered and overcome. The worst was when I squeezed too hard on the tweezers and a capacitor shot out; I had no idea where in the room it landed. I spent 15 minutes looking but couldn't find it, so decided to proceed without it, hoping it wasn't too critical. But then I found it under the board itself. 0603 components are the worst for hand assembly.
I also discovered I probably had the iron too cold initially. I started at 350°C but moved it up to 375°C, and the joints formed a lot quicker and 'wetter'.
Now I've realised I don't have a circuit to test it on, so my next project will be to use the Freescale sample chip I got a while ago to build a simple counter or whatever, and I can check my new logic probe on that.
I was delighted to run the 'blinktest' demo program on my S40C18 and see a 291 Hz square wave from one of its pins on my new oscilloscope.
Relearning and retraining
I walked away from a high-paying contract into the middle of an economic downturn/recession/depression/meltdown/end_of_the_world_as_we_know_it. I keep asking myself: is it really better to starve for one's beliefs? I just couldn't take the client's money any more when I knew I wasn't doing anything useful for them, I hated what I was doing, and I realised I was even losing skills in areas I wanted to work in (e.g. Perl and web development).
So here we are three months later and not a nibble for any of the jobs I've applied for and most of those jobs don't look all that interesting.
So in the meantime I've started self-educating in a completely different area of IT from how I've earned my living for the past 20 years: embedded microcontrollers.
I've been quite overwhelmed by the amount of learning I will have to undertake to program the SEAForth chip I bought. Obviously there's Forth, and IntellaSys's version of it, VentureForth. Then there's the SEAForth chip itself, which is a Forth machine (in fact there are 40 cores, i.e. 40 Forth machines). Three of the cores have an ADC and a DAC each, so I'll need to know more about digital filtering.
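To get a feel for what digital filtering involves, here's about the simplest useful filter, a one-pole low-pass, sketched in C (on the S40C18 it would of course be VentureForth, and fixed-point rather than doubles):

    #include <stdio.h>

    /* One-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]),
       with a = dt / (RC + dt) and RC = 1 / (2*pi*fc). */
    int main(void)
    {
        const double fs = 44100.0, fc = 1000.0;      /* sample rate, cutoff */
        const double dt = 1.0 / fs;
        const double rc = 1.0 / (6.28318530718 * fc);
        const double a  = dt / (rc + dt);

        double y = 0.0;
        for (int n = 0; n < 10; n++) {               /* step input: y creeps toward 1 */
            const double x = 1.0;
            y += a * (x - y);
            printf("y[%d] = %.4f\n", n, y);
        }
        return 0;
    }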
The S40C18 doesn't come with much of a library (so far) and there is no USB, HTTP, Bluetooth or WiFi stack in the current version. So presumably I will have to implement my own if I want to use the chip for any sort of comms app.
My initial idea of implementing a wireless microphone adapter has gone through a lot of ups and downs: various chips and magazine articles look to have almost done what I want, but (so far) there is always at least one of my requirements missing.
My initial optimism that the S40C18 would be able to handle the whole task in a single chip was dashed when I learned that although the individual cores run at approx. 900 MHz, the DAC isn't fast enough to run at those speeds. So my design needs some sort of additional chip if I want to use Bluetooth or WiFi. But USB WiFi adapters are less than $10, so that's not a big issue.
It occurred to me that all the ideas I've had so far for the S40C18 are variations on the same set of functions. In addition to a wireless microphone, I've thought of implementing a software-defined radio, a guitar-effects stompbox and a digital oscilloscope/logic analyser. All of these use basically the same parts: a signal digitiser (ADC), some filters, and a USB, HTTP or WiFi output.
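Sketching that common skeleton in C makes the point; the three functions here are trivial stand-ins for the project-specific parts (a real ADC read, a real filter, a real USB/WiFi/HTTP output):

    #include <stdint.h>
    #include <stdio.h>

    /* Trivial stand-ins for the project-specific parts. */
    static int16_t sample_from_adc(void) { static int16_t t = 0; return t += 100; }
    static int16_t filter(int16_t x) { return x; }   /* identity filter for now */
    static void send_out(const int16_t *buf, int n)  /* USB, WiFi or HTTP in real life */
    {
        printf("sent frame of %d samples, last = %d\n", n, buf[n - 1]);
    }

    #define FRAME 8

    int main(void)
    {
        int16_t frame[FRAME];
        for (int f = 0; f < 3; f++) {                /* three frames, then stop */
            for (int i = 0; i < FRAME; i++)
                frame[i] = filter(sample_from_adc());
            send_out(frame, FRAME);
        }
        return 0;
    }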
Another area of study was prompted by holding the S40C18 eval board in my hand. I could barely see most of the components, let alone work out what they were. Surface-mount devices are obviously where it's at these days. So I embarked on a hasty update of my hardware-assembly skills. I emailed a friend who I know has done some SMT work, asking what equipment I'd need to build SMT circuits, and he replied with a multi-page article so useful he ought to publish it.
So I bought a DMM/Oscilloscope, a new soldering station with really fine bits for SMT work, a magnifying lamp, some breadboards, solder, hookup wire and some other bits I've forgotten already.
And so the great hardware adventure begins...
Tuesday, January 6, 2009
A WiFi microphone adapter Part 1
I assisted with recording a Christmas Eve carol performance. All the usual guff: set up microphones, set up the mixer, connect cables, patch into the building's PA system, level check, attach the recorder, check record levels, record the performance. Then take it all apart at the end.
The single most annoying part of this procedure is laying out the cables and duct-taping them to the floor so the audience doesn't trip over them. It got me thinking: why not use radio mikes?
Obviously the quick solution was to replace the six microphones with radio mikes. Did a quick check and realised this would not be cheap. So replacement ruled out.
What about a radio adapter for the mikes? No, still hundreds of dollars per mike.
The other painful part, for me at least, was the duplication of effort in the mixing and recording process. Once the performance was over I had to transfer the recording to my MacBook and process all the audio once again. And then I realised how badly mixed the audio was; it was almost unusable. Despite being complimented by one of the audience on how good the PA sounded, I had really messed up the mix for the recorder. My efforts at re-mixing are here: http://www.stmaryssingers.com/recordings/
If I had been able to plug the mikes into a WiFi adapter and feed the resultant packets straight into a digital audio workstation (DAW), recording each mike individually, I could have mixed the mikes in the DAW and fed the result to the PA, and then had the luxury of a proper (re-)mix for the website recordings.
The idea of building my own WiFi mike adapter was prompted by the SEAforth 40C18 chip sitting next to my laptop. Why not use it for this? The blurb says the chip is ideal for audio and wireless applications.
My first thought was to program a fully self-contained audio-to-WiFi adapter. Surely it would be a simple matter to pipe audio into one of the ADCs and program the rest of the cores to send the digital packets out over a WiFi socket to my awaiting MacBook? A closer reading of the specs and blurb revealed that the chip isn't fast enough to synthesise WiFi frequencies directly; the designers expect one to use the chip to control an RF chip of some sort. I also remembered that radio is a PITA: even a moving foot or hand can have sufficient capacitance to throw off the signal. So I started investigating prices of radio modules.
My initial thought was to use WiFi, but I remembered there are two other protocols in the 2.4 GHz band: Bluetooth and ZigBee. ZigBee was quickly ruled out because it is aimed at quite low data rates, basically for remote device data transfer and control. I also initially ruled out Bluetooth because the 10 metre range was not enough for a remote microphone. I later realised that I was reading the early Bluetooth spec, and the more recent version probably has a similar range to WiFi.
The other requirement for a mike adapter is battery operation, which means low power consumption. Once again I thought I was home and hosed when I found an article in Elektor, December 2008, for a wireless headphone kit. But the power consumption of the transmitter module it uses is over 100 mA. The modules are otherwise perfect for my requirements. Except for price! They are a couple of hundred dollars each.
So once again, I looked for other possibilities. I found a link to a Sony PSP being used to transmit audio over HTTP but it turned out to be a PSP with a WiFi dongle attached to the USB port.
And then it hit me how silly I had been. Bluetooth and WiFi dongles are less than $10 these days. So all I have to do is output data packets to the USB bus and the dongle can take care of the RF problems.
I was still undecided whether to use Bluetooth or WiFi. Revised Bluetooth has almost the same range as WiFi, but BT is intended to run at much lower power levels, so BT should be lighter on the batteries. But then I found an article in Circuit Cellar, January 2009, for an audio-over-Internet adapter. It sends streamed audio packets using the UDP protocol over Ethernet. It's almost what I want. Most relevantly, it gives full details of how to program the PC side of the system. The author, Valens, turns the audio stream into a VSTi virtual instrument, and nearly all DAWs can handle VSTi these days.
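The UDP side of such a scheme is pleasingly small. A minimal POSIX sketch of the sending end (the address, port and frame size are made up, and the real adapter firmware obviously won't be POSIX, but the shape is the same):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define FRAME_SAMPLES 256   /* samples per packet; arbitrary choice */

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in daw = { 0 };
        daw.sin_family = AF_INET;
        daw.sin_port   = htons(9000);                        /* made-up port */
        inet_pton(AF_INET, "192.168.1.10", &daw.sin_addr);   /* the DAW machine */

        int16_t frame[FRAME_SAMPLES] = { 0 };                /* silence for now */
        for (int i = 0; i < 100; i++)                        /* 100 packets, then stop */
            sendto(sock, frame, sizeof frame, 0,
                   (struct sockaddr *)&daw, sizeof daw);

        close(sock);
        return 0;
    }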
So I have the C source for Valens' adapter and the C++ source for a VSTi plugin to handle it. Not sure how much extra circuitry will be needed to handle the audio input. I will use one of my NT5 condenser mikes for audio input, but it needs 48V phantom power. I vaguely remember an article in one of the electronics hobbyist mags for an audio mixer, so presumably I can find phantom-power circuitry somewhere in that. Actually, a phantom-powered XLR audio socket is the last thing I need to worry about: any random audio input would do. And in fact even that isn't needed till the end; I could probably allocate a couple of the cores to generate a 440 Hz sine wave and use that till the WiFi section is sorted out.
Note to self: having an internally generated audio signal could be very useful when the adapter is actually being used on a sound stage or wherever. Setting each adapter to a different note could also be useful. Each adapter will need a unique identifier for its data, so the same ID can set the audio note. (C scale or 12 chromatics, perhaps.)
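A quick C sketch of that test tone, including the ID-to-note idea: adapter ID n plays n semitones above A440 in equal temperament, f = 440 * 2^(n/12):

    #include <math.h>
    #include <stdio.h>

    #define SAMPLE_RATE 44100.0

    int main(void)
    {
        const double two_pi = 6.28318530718;
        int adapter_id = 3;     /* hypothetical ID; 3 semitones above A440 = C */
        double freq = 440.0 * pow(2.0, adapter_id / 12.0);

        /* One second of 16-bit samples; in the adapter these would go
           straight into the outgoing packets instead of being printed. */
        for (int n = 0; n < (int)SAMPLE_RATE; n++) {
            short s = (short)(32767.0 * sin(two_pi * freq * n / SAMPLE_RATE));
            if (n < 5)
                printf("sample %d = %d\n", n, s);    /* peek at the first few */
        }
        return 0;
    }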