Re: [Wlug] Need some help with digital signal processing ...
Jeff, I will look at the reddit forums. However, I am hoping for someone local for a beer-and-pizza-based information interchange. Face-to-face bandwidth is really fast. -David

===================
From: Jeff Moyer <jmoyer@redhat.com>
Date: Wed Dec 12 08:33:35 CST 2012
To: David Glaser <dglaser@glaserresearch.net>, Worcester Linux Users Group <wlug@mail.wlug.org>
Subject: Re: [Wlug] Need some help with digital signal processing ...

David Glaser <dglaser@glaserresearch.net> writes:
Hi Folks,
I need some help with an Android app for the hearing impaired. Essentially, the app is to be used with a wearable Android device (Google Glasses or the Sony Android wristwatch) and will listen for sounds such as the doorbell or the TTY, or any other household sound that should be reported to the wearer.
Capturing the sound using an Android device is not the problem. What I don't know is how to compare the incoming sound stream with sound signatures that should cause the event to occur. Once the event occurs, text can be displayed on the Google Glasses or the haptic transducer on the wristwatch can be activated.
This app is intended to be open source, so I'm looking for volunteer help on the algorithms. I will do the programming.
We've got some smart folks on this list, no doubt, but it might serve you better to ask this question in a more focused forum. Try http://www.reddit.com/r/DSP/ for starters. Good luck!

Jeff
Doing DSP in your phone or other Android device might become really difficult, really fast. In any case, I'd recommend that you get one of those development boards which runs Android and work with that to start. No need to sacrifice your phone ahead of time.

Also, instead of listening for sounds, maybe you can wire up a dedicated device in the home which would talk directly to the haptic device or the glasses?

My brother went to RIT (Rochester Institute of Technology), and they have a large deaf population there, to the extent that some dorms are fitted out with the bright strobe lights used to wake/alert deaf people of fire alarms, etc.

Listening for arbitrary sounds and trying to match them to a library of triggers sounds (sorry for the pun) like a very hard problem. Even Apple is doing Siri by sending your speech to a remote datacenter for processing, since the phones don't have enough juice. Admittedly, that's a more difficult problem space, but it is similar and less open-ended than your problem space.

Just hooking an Arduino into my doorbell/phone/other sensors and having the Arduino talk to the alerting system might be a simpler way to do it.

John
John,

As a fall-back, I am considering going the Arduino/ZigBee route. You can buy ZigBee doorbells, etc. However, this requires a fair amount of installation/wiring that needs to be done to the house.

I lost my hearing during my middle age, and I was given a box full of signalers by the Commonwealth of Mass. Hooking up these signalers is not the easiest of tasks, and I was hoping to bypass this by having a wearable computer do the recognition. (BTW - I have a Cochlear Implant that obviated the further need for the signalers.)

-David
I would have to imagine having your phone listen to ambient sound and then filter looking for a whole range of specific sounds is basically going to use all the phone's processing power along with its RAM...

I have to imagine that not hearing the beep when the oven is done preheating must be annoying. Not hearing the washing machine or the clothes dryer buzz would really piss me off.

Imagine a box. On this box would be an LED screen where you could cycle through a menu, pick an input to listen to, and report when either that input starts or stops. In the case of a clothes dryer, when the vibration sensor stops sensing vibration, chances are good it's done. This box would also have plugs for things like a photo sensor you could stick on the face of your stove, right over the LED for preheating the oven.

All of this stuff could be tied back to your phone, which would vibrate and send an SMS message. Possibly you could tie it in with a home automation system to flash lights, etc.

Just some ideas.

Tim.
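The dryer case Tim describes is really just "alert once the vibration has stopped for a while." A minimal Python sketch of that logic follows; the sensor read, the alert hook, and both threshold values are assumptions for illustration, not part of any real device.

```python
# Hypothetical end-of-cycle detector for the dryer example: once the machine has been
# vibrating, report "done" after the sensor stays quiet for QUIET_SECONDS in a row.
# read_vibration and alert are placeholders for whatever the real box would provide.
import time

QUIET_SECONDS = 60         # assumed: how long the dryer must stay still before we call it done
VIBRATION_THRESHOLD = 0.2  # assumed: sensor reading above this counts as "vibrating"

def watch_dryer(read_vibration, alert):
    """read_vibration() -> float sensor level; alert(msg) -> vibrate the phone / send an SMS."""
    was_running = False
    quiet_since = None
    while True:
        if read_vibration() > VIBRATION_THRESHOLD:
            was_running, quiet_since = True, None   # dryer is (still) running
        elif was_running:
            quiet_since = quiet_since or time.time()
            if time.time() - quiet_since > QUIET_SECONDS:
                alert("dryer done")
                was_running, quiet_since = False, None
        time.sleep(1)
```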
Tim,

After your feedback, and the feedback received during the WLUG meeting, it seems that it is better to use something like an Arduino instead of a phone as the signal processing box.

To that effect, I googled "arduino dsp" and got a whole lot of hits on DSP shields for Arduino and on projects that illustrate how to write Arduino sketches that manipulate the audio stream.

One possibility would be to have an Arduino with a DSP and a Bluetooth receiver. The Bluetooth would communicate with a phone and/or a wearable. The phone would tell the Arduino what sounds to look for - that is, provide signatures that should be matched. The wearable and/or the phone would be used to give the event to the user. I think that this configuration, along with a battery, could be made to fit in a small fanny pack.

The Arduino could also use ZigBee to communicate with a home automation system and pass events to the phone/wearable.

Hmmm, I think we are on to something.

-David
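As a rough sketch of the split David describes, the phone-or-host side might look something like the following Python, assuming the Arduino exposes a Bluetooth serial (RFCOMM) port. The /dev/rfcomm0 device name, the baud rate, and the line-oriented SIG/EVENT protocol are all invented here for illustration; a real DSP shield would define its own interface.

```python
# Hypothetical host-side sketch: push sound signatures to the board over a Bluetooth
# serial link, then forward any "EVENT <name>" lines it reports back to the user.
import serial  # pyserial

def notify(event_name):
    # Placeholder: the real app would trigger the haptic transducer or Glass text display.
    print(f"alert: {event_name}")

def run(port="/dev/rfcomm0", baud=9600):
    signatures = {"doorbell": "10000", "dryer": "5000"}   # name -> assumed peak frequency (Hz)
    with serial.Serial(port, baud, timeout=1) as link:
        for name, freq in signatures.items():
            link.write(f"SIG {name} {freq}\n".encode())   # made-up provisioning command
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if line.startswith("EVENT "):
                notify(line.split(" ", 1)[1])

if __name__ == "__main__":
    run()
```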
The long and short of this is that you need to figure out what the sound looks like. Is it a simple sound that's easy to differentiate by just choosing the highest peak on an FFT? If your doorbell has a peak at 10 kHz and your dryer has a peak at 5 kHz, it should be pretty easy to do: if 4.9 kHz < max(fft(audiodata)) < 5.1 kHz then dryer; if 9.9 kHz < max(fft(audiodata)) < 10.1 kHz then doorbell.

Most likely it's not going to be that easy. There is also the possibility that the dryer manufacturer and the doorbell manufacturer used the same component for the buzzer, so it's going to be mathematically the same sound pattern, differing only in the harmonics blocked by the room. You could need to use the whole FFT of the sound, recorded at many different locations in the house, and then use nearest neighbor on the convolution of the "new" sound with the old sound, compared against a database of previous sounds. You would need a large database of example sounds to compare with - probably at least 100 examples of each sound that would go off - and then you would need to set a threshold of "closeness" so that you don't always detect new sounds as one of the old sounds.

If this didn't work, then you may have to go with a hefty fingerprint like Chromaprint or Echoprint. Those would give you information about not just the frequencies, but also the rhythm of the sounds. Maybe your dryer and your oven have the same frequency pattern because the manufacturers used the same buzzer, but they pulse them differently; that's one advantage of these full fingerprint libraries. And somebody else did the science, so you don't need to write a dissertation in DSP to be sure that you have some rigour to your approach.

Chromaprint <http://acoustid.org/chromaprint> would be a good library to go with in that case. It's part of the AcoustID project and has a great "how it works" blog post <http://oxygene.sk/2011/01/how-does-chromaprint-work/> for the acoustically curious. The problem would be that it's mainly developed for sounds as complex as music, so I don't know how it would work with a "buzz" noise. It's small enough and fast enough that it can run quickly on a phone. That's how programs like Shazam <http://www.shazam.com/> work: they make the fingerprint on the phone from a 10-second recording and push it to a server that processes it. Shazam has riddled the landscape with software patents, so Chromaprint isn't a replacement for that; it's just close enough that it may work.

Other things to look at:

http://code.google.com/p/musicg/ - a Java audio DSP library. Does fingerprinting and stuff.

Echoprint has been ported to Android: https://github.com/gvsumasl/EchoprintForAndroid

If you send me some example sounds, I can look at them in Python or something and let you know what I think about different metrics that you could use to compare the sounds. Specifically, a recording in a lossless codec like FLAC, or without compression like WAV, would be important, because MP3 and such destroy parts of the audio waveform. 'Course I could just record my kitchen timer and let you know what it looks like...

Your offer to do the programming is great, because I'm no app dev, but I do have some education and experience with signal processing.

Randall Mason
clashthebunny@gmail.com
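To make the FFT-peak and nearest-neighbor ideas above concrete, here is a minimal Python sketch using NumPy/SciPy. The 5 kHz/10 kHz peak ranges, the distance threshold, and the reference-database layout are illustrative assumptions, not measurements of any real doorbell or dryer; it also assumes all clips are WAV files recorded at the same sample rate.

```python
# Sketch of two approaches: (1) classify a clip by its single strongest FFT peak,
# (2) fall back to a nearest-neighbor match against stored reference spectra.
import numpy as np
from scipy.io import wavfile

def _mono(path):
    """Read a WAV file and mix it down to a single channel."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)
    return rate, data.astype(float)

def peak_frequency(path):
    """Frequency (Hz) of the strongest spectral peak in the clip."""
    rate, data = _mono(path)
    spectrum = np.abs(np.fft.rfft(data))
    freqs = np.fft.rfftfreq(len(data), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

def classify_by_peak(path):
    """Crude classifier: dryer buzzes near 5 kHz, doorbell near 10 kHz (assumed values)."""
    peak = peak_frequency(path)
    if 4900 < peak < 5100:
        return "dryer"
    if 9900 < peak < 10100:
        return "doorbell"
    return "unknown"

def normalized_spectrum(path, n_bins=2048):
    """Unit-norm magnitude spectrum over a fixed number of samples (cropped or
    zero-padded to 2*n_bins), so different clips compare bin-for-bin."""
    _, data = _mono(path)
    spectrum = np.abs(np.fft.rfft(data, n=2 * n_bins))[:n_bins]
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def nearest_neighbor(path, reference_db, threshold=0.35):
    """reference_db maps label -> list of reference spectra; returns best label,
    or None when nothing is close enough (the 'closeness' threshold above)."""
    query = normalized_spectrum(path)
    best_label, best_dist = None, np.inf
    for label, examples in reference_db.items():
        for ref in examples:
            dist = np.linalg.norm(query - ref)
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label if best_dist < threshold else None
```

With a handful of reference recordings per sound, classify_by_peak() is the quick first test and nearest_neighbor() is the database-of-examples fallback described above; neither is a substitute for a real fingerprint library like Chromaprint.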
David> As a fall-back, I am considering going the Arduino/ZigBee route. You can buy ZigBee doorbells, etc. However, this requires a fair amount of installation/wiring that needs to be done to the house.

That was where I was going with my thoughts too. Not that I'm trying to kill your idea; I'm just trying to think how it would work in a reliable manner, since false positives are quickly going to get people to NOT use your app.

David> I lost my hearing during my middle age and I was given a box full of signalers by the Commonwealth of Mass. Hooking up these signalers is not the easiest of tasks and I was hoping to bypass this via having a wearable computer do the recognition. (BTW - I have a Cochlear Implant that obviated the further need for the signalers).

Ouch, not fun! My eyes are terrible, but luckily correctable back to 20/20, though I'm getting old and going to need bifocals sooner or later. Ugh.

Maybe instead of having an app which does this, would it be easier to have a small dedicated piece of hardware? When you got the signalers, did you have to wire them all up and run the wires everywhere? Moving to ZigBee might be the answer to that side of the problem, though getting the sensors to work reliably is the hardest part, I suspect.

Good luck!
participants (5)

- David Glaser <dglaser@glaserresearch.net>
- John Stoffel
- Randall Mason
- Tim Keller