
So, the idea started when I came across human echolocation. It turns out that everyone with hearing uses passive echolocation intuitively, even without knowing it, but more interesting are the people who can echolocate actively by clicking their tongues or tapping their canes on the ground. Then I saw a project for a device, a heavy, bulky speaker that you're supposed to wear around your neck, which should give blind people spatial perception.

So I had the idea of giving a blind character a wristband or headband (for emitting sound at all angles), because it would be more comfortable than clicking your tongue constantly or having to carry a cane.

But the constant clicking could be annoying for people around him, couldn't it?

So I thought about using infrasound and ultrasound, for better definition up close without losing "sight" of things far away.
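For a bit of intuition behind that trade-off: a surface only reflects sound strongly when it is comparable to or larger than the wavelength, so a quick back-of-the-envelope check (round numbers assumed, nothing here comes from a real device) shows why the high end buys detail and the low end buys reach:

```python
# Back-of-the-envelope check of the resolution argument (assumed values).
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

for f_hz in (10, 1_000, 3_000, 40_000):  # infrasound, speech range, tongue click, ultrasound
    wavelength_m = SPEED_OF_SOUND / f_hz
    print(f"{f_hz:>6} Hz -> wavelength ~ {wavelength_m:.4f} m")

# Features much smaller than the wavelength barely reflect the sound,
# so ~40 kHz ultrasound can resolve millimetre-scale detail up close,
# while low frequencies travel farther but only reveal large obstacles.
```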

The thing is, humans can hear neither infrasound nor ultrasound, so I thought about earpieces in the ear canal that emit sound at these inaudible frequencies and then convert the returning echoes to audible sound, using the structure of the outer ear in reverse to focus the sound better. The earpieces could even act as active hearing protection against sounds that might damage our blind character's precious hearing, AND they could amplify quieter sounds so he could hear things that others couldn't.
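As a rough sketch of that conversion step, assuming the earpiece digitizes a ~40 kHz echo and heterodynes it down into the audible band (infrasound would be shifted up the same way); the sample rate, frequencies, and the crude low-pass filter are all illustrative, not a real design. The echo's timing, which is what echolocation actually needs, survives the shift; only its pitch changes.

```python
import numpy as np

# Hypothetical parameters -- not from any real device.
FS = 192_000          # sample rate (Hz), high enough to capture a 40 kHz echo
F_EMIT = 40_000       # ultrasonic click carrier (Hz)
F_SHIFT = 38_000      # how far down we shift the echo (40 kHz -> 2 kHz, audible)

def downconvert(echo: np.ndarray) -> np.ndarray:
    """Shift an ultrasonic echo into the audible band by heterodyning.

    Multiplying by a cosine at F_SHIFT produces sum and difference
    frequencies; a crude low-pass keeps only the difference (audible) part.
    """
    t = np.arange(len(echo)) / FS
    mixed = echo * np.cos(2 * np.pi * F_SHIFT * t)
    # Very crude low-pass: moving average over ~0.25 ms.
    window = int(FS * 0.00025)
    kernel = np.ones(window) / window
    return np.convolve(mixed, kernel, mode="same")

# Fake 40 kHz echo burst, just for illustration.
t = np.arange(0, 0.01, 1 / FS)
echo = np.sin(2 * np.pi * F_EMIT * t) * np.exp(-t * 500)
audible = downconvert(echo)   # now centred near 2 kHz
```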

That's DAREDEVIL right there folks, with technology!

Now, the PROBLEM is that I don't know whether the speaker in an earpiece can provide the three-dimensional sound needed for spatial orientation.

REFERENCES:

Action Lab sound magnifier earpiece demonstration

The speaker that is supposed to be worn around your neck for active echolocation (In my research I've also seen a wristband made for the same purpose that seems way more convenient to carry around)

  • Remember that questions on this site need to have a single specific ask. It's pretty unclear what you're actually asking of us. You have a device, and you are unsure about a few things, but the only question in your entire post is a rhetorical device about how chirping is annoying. Please edit this post so that it clearly asks a single specific question.
    – sphennings
    Commented Oct 13, 2022 at 3:13
  • Am I correct in assuming your question is whether or not earpieces are able to realistically represent or at least simulate omnidirectional sound?
    – Joachim
    Commented Oct 13, 2022 at 5:48
  • Human anatomy can only support so sophisticated an echolocation. Real echolocating mammals either are content to just know whether there's a thing in front of them at a certain range (bats) or have extensively evolved bodies supporting complex echolocation (dolphins). This is the reason you don't have blind people who can actually echolocate to the level of Daredevil.
    – stix
    Commented Oct 13, 2022 at 18:32
  • @Joachim Yes, precisely.
    Commented Oct 15, 2022 at 0:39
  • Tridimensional sound is not a problem. You can use several smaller speakers to triangulate the locations of objects and find exactly where they are; a rough sketch of that idea follows these comments. You can get tridimensional earpieces by having the two earphones play in stereo.
    – Daron
    Commented Nov 17, 2022 at 20:09
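To make the triangulation idea from the last comment concrete, here is a minimal sketch. It assumes a handful of small sensors at known positions on, say, a headband, each reporting a range to the same reflector from echo timing; the sensor layout and the trilateration math are illustrative, not taken from any real device.

```python
import numpy as np

# Hypothetical sensor layout on a headband (positions in metres).
sensors = np.array([[0.00, 0.00],
                    [0.15, 0.00],
                    [0.00, 0.15]])

def locate(distances: np.ndarray) -> np.ndarray:
    """Trilaterate a reflector's 2D position from its distance to each sensor.

    Subtracting the first range equation from the others turns the
    circle-intersection problem into a small linear system.
    """
    p0, d0 = sensors[0], distances[0]
    A = 2 * (sensors[1:] - p0)
    b = (np.sum(sensors[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         - distances[1:] ** 2 + d0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: an obstacle at (1.2, 0.8) m; the ranges would come from echo timing.
obstacle = np.array([1.2, 0.8])
ranges = np.linalg.norm(sensors - obstacle, axis=1)
print(locate(ranges))   # ~[1.2, 0.8]
```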

3 Answers


Convert the Sensation

There is no reason the device needs to give sonic feedback to the user. The picture can be formed using echolocation and then converted into a more useful form. That way the ears are still free for hearing.

For example, the user wears several pressure pads on their face. Each pad exerts pressure based on how far it is from the scenery in front of it. In this way the user can move their head around and feel as though they are rolling it against a miniature model of what is in front of them.
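A minimal sketch of that mapping, assuming closer scenery means a firmer press and that the device reports one range reading per pad; every name and number below is made up for illustration:

```python
# Map echo ranges to pad pressures: closer obstacle -> firmer press.
MAX_RANGE_M = 5.0       # beyond this the pad stays relaxed (assumed)
MAX_PRESSURE = 1.0      # normalized actuator command

def pad_pressure(range_m: float) -> float:
    """Convert a single pad's measured range into a pressure command."""
    if range_m >= MAX_RANGE_M:
        return 0.0
    # Linear ramp: zero at MAX_RANGE_M, full pressure at contact.
    return MAX_PRESSURE * (1.0 - range_m / MAX_RANGE_M)

# Example: a grid of ranges measured across the wearer's field of "view".
ranges = [[4.8, 2.0, 0.6],
          [4.9, 2.1, 0.7],
          [5.0, 2.3, 0.9]]
pressures = [[pad_pressure(r) for r in row] for row in ranges]
# The wearer feels the nearby object (right column) pressing hardest.
```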


You're on the right track, but you're missing a couple of important pieces.

In order to echolocate effectively, a person needs two pieces of information. The incoming echo is only one of them. The other is the time at which the click is sent out. If the person isn't consciously generating those clicks, there is roughly a 0.17-second delay between when a click is made and when they notice it, which is enough to throw off their perception of how far away the target is.
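To put a number on that: range comes from half the click-to-echo round-trip time, so an unnoticed 0.17-second offset is enormous. Rough arithmetic, assuming sound travels at about 343 m/s in air:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature air

def echo_range(round_trip_s: float) -> float:
    """Range to the target from the click-to-echo round-trip time."""
    return SPEED_OF_SOUND * round_trip_s / 2

true_round_trip = 0.02                     # echo off a wall ~3.4 m away
print(echo_range(true_round_trip))         # ~3.4 m
print(echo_range(true_round_trip + 0.17))  # ~32.6 m if the click time is off by 0.17 s
```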

The second thing you're missing is that our perception of 3D sound is contingent on knowing which direction our head is pointing. Experiments have demonstrated that we use small head movements to adjust where our ears are pointed in order to generate effective spatial audio awareness, much as if we had only one eye and bobbed our heads around to get parallax information.

I'll apologize in advance, but this is the part where I pee in your Wheaties.

It's theorized that the blind people who actively perform echolocation have retrained the spatial awareness centers of their visual cortex to accept information from their auditory cortex. This suggests that active sonar would only be possible for the blind, and that it would take massive amounts of retraining to get used to it.

Additionally, this echolocation can only be used in quiet environments, while the person's heart rate isn't elevated. This means that our hero could be disabled by a kitchen blender, by yelling at them, or by getting them excited. Feel free to write your story around such issues.

  • There is a fair amount of research at a perception lab at Berkeley that indicates that sighted people can learn echolocation and use it (blindfolded) to identify the size of targets. There is also an organization, visioneers.org, that helps train blind people to use echolocation.
    – UVphoton
    Commented Oct 13, 2022 at 17:57
  • @UVphoton, thanks for adding that. Yes, my eyes are fine, and I can use echolocation to spot a non-fuzzy and mostly flat object out to about six meters if its profile is greater than about fifteen degrees. It's not too hard to train this by mounting a frying pan on a tripod. I can use it to avoid running into walls in the dark, but it doesn't prevent me from tripping over chairs. Going from that to building a 3D model of the world is like trying to store 30 digits of pi in your short-term memory. It doesn't work; you have to co-opt wetware from somewhere else.
    Commented Oct 13, 2022 at 18:38
  • Yes, having high spatial resolution probably doesn't work. I was thinking more in the context of hallways, open doors, maybe sensing if the surface was changing from one type to another, like from grass to asphalt. I am guessing the frequency of a tongue click might also be limiting even with processing.
    – UVphoton
    Commented Oct 13, 2022 at 19:59
  • Why couldn't the person using the earpiece hear the sounds being emitted? My idea is that it would emit constant clicking or even frequencies.
    Commented Nov 18, 2022 at 1:25
  • @PauloRaposo, fair question. The reason I don't consider that an option is Sony's ATRAC audio compression technology: sony.net/Products/ATRAC3/tech What it does is remove sounds that the human ear can't perceive due to frequency overlap. Any echolocation technique would need to filter out the outgoing sound from the returning sound in order to hear the returning sound at all. Creating the click with our mouth gives us a tactile signal that we can use for that cancellation.
    Commented Nov 18, 2022 at 16:35

One big problem is the microphone and speaker on the earpiece. Humans get direction from the small differences in sound as it reflects off our outer and middle ear; an earpiece eliminates all that, because sound is read at only one point and emitted at only one point. All those echoes and changes are lost.

Worse, those echoes are unique to each and every ear, and it takes months or years for a person to learn the echoes of their own ears, so you can't even simulate them, even with dozens of directional microphones.
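To illustrate what an earpiece would have to fake, here is a bare-bones sketch that renders direction using only interaural time and level differences; the head radius and the 0.7 level factor are textbook-style approximations, not measurements. Even this much is thrown away by a single-point microphone and speaker, and the listener-specific pinna filtering described above is still missing entirely.

```python
import numpy as np

FS = 44_100              # sample rate (Hz)
HEAD_RADIUS = 0.0875     # m, rough average head radius (assumed)
SPEED_OF_SOUND = 343.0   # m/s in air

def render_direction(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Crude stereo rendering of a mono sound arriving from azimuth_deg.

    Applies Woodworth's interaural-time-difference approximation plus a
    fixed level difference. Real spatial hearing also relies on the
    listener-specific pinna response (HRTF), which this ignores.
    """
    az = np.radians(abs(azimuth_deg))
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))  # seconds
    delay = int(round(itd * FS))                               # samples
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * 0.7
    left, right = (near, far) if azimuth_deg < 0 else (far, near)
    return np.stack([left, right], axis=1)   # shape: (samples, 2)

# Example: a click rendered as if arriving from 60 degrees to the right.
click = np.random.randn(512) * np.hanning(512)
stereo = render_direction(click, azimuth_deg=60)
```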

Also, a clicker device for blind people to use for echolocation already exists: dog clickers are actually used in teaching it.

