19
$\begingroup$

TL;DR

Let's say that computers were invented at a school for the blind in the mid-1800s. How would today's technology, based on these non-screen-based computers, be different?

ETA: To clarify and hopefully narrow this enough -- I'm assuming that newer products will be influenced by functional computers that predate movies and television, so radio and the telegraph/telephone may be the dominant communication models. I'm also assuming that legacy quirks persist, just as numpads on phones and computer keyboards are arranged differently due to vestigial bits from their separate origins, our "Save" icon may confuse those who didn't grow up with 3.5" floppies (I'm from the 5.25" era myself -- Apple //c!), and we still call that thing in a car a "glove compartment" despite not wearing specific driving clothing any more.

So while sighted potential users greatly outnumber the blind ones, they're from a world where computers have always been fully accessible to the blind (so accessibility is not an afterthought), and that has probably driven the development of the CS field for quite a while.

Background elements

Braille had already been invented by the early 19th century, and it was derived from a military application (Night Writing, for Napoleon's army) -- much like our computers (stored programs and some of the more theoretical elements were codified during WWII). https://en.wikipedia.org/wiki/Braille

Punched cards for weaving had been invented in 1803 -- and for a while, schools for the blind were often trade schools. The first one, the Institut National des Jeunes Aveugles (https://en.wikipedia.org/wiki/Institut_National_des_Jeunes_Aveugles), was at one point also named the "National Institute of the Working Blind" and was famous for graduating organists.

So now let's say they got an early Jacquard-loom-head type machine (instead of organs). From https://en.wikipedia.org/wiki/Jacquard_loom#Importance_in_computing:

... The ability to change the pattern of the loom's weave by simply changing cards was an important conceptual precursor to the development of computer programming and data entry. Charles Babbage knew of Jacquard looms and planned to use cards to store programs in his Analytical Engine. In the late 19th century, Herman Hollerith took the idea of using punched cards to store information a step further when he created a punched card tabulating machine which he used to input data for the 1890 U.S. Census.

(Note that this Loom appears to also be a French invention.)

Charles Babbage & the Analytical Engine -- according to Wikipedia (sorry that I keep going back to that source, but I'm assembling fragments of things I thought I knew or picked up (I'm no tech historian), and Wikipedia's the easiest place to assemble the threads) -- he was self-taught from reading many mathematicians, some of whom were French, and was definitely fighting the British Establishment.

from https://en.wikipedia.org/wiki/Charles_Babbage#Computing_pioneer

While Babbage's machines were mechanical and unwieldy, their basic architecture was similar to a modern computer. The data and program memory were separated, operation was instruction-based, the control unit could make conditional jumps, and the machine had a separate I/O unit

So a computer doesn't need to be print-derivative

We have punched cards (tangible, non-alphabetic) manipulating rules and representations of numbers. As Ada Lovelace said:

"We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves."

So what if punched cards went in, braille results came out? Things may have stayed mechanical longer, instead of moving to processors as we know them, but there'd also be almost a century's extra progress. "Screens" may have moved to Refreshable Braille Displays - but would there be "windows" and other simultaneous processing?

The "World Building" or AltHistory part --

Just as the Internet was very US-focused in the beginning, which left legacy effects on domain names and rules, perhaps in this world the computer world (and thus the internet?) was dominated by French research and blind computer scientists. Look at Minitel for an example of France being way ahead of the curve! It started as a phone-book replacement, but provided message boards and finance stuff.

Why I'm asking

I'm documenting navigation of applications designed with minimal concern for accessibility. My particular job seems to be describing how to navigate web applications with screen readers. Screen readers (which read text aloud to blind/low-vision computer users) address everything in a pretty linear way. (Also, we have to keep all navigation keyboard-based -- it's more predictable than a mouse.)

When windows pop up, where does the focus go? Do the users know there's a new dialog on screen? Where does the focus go when the error message goes away? (To the last place it was, to the line with the error, or to the top of the page?) It's easy for the sighted to notice a missing field, the blinking cursor, or that something changed on the screen: but what if the default were audible and tactile? How would the interfaces change?
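For a sense of what I mean, here's a minimal sketch (TypeScript/DOM) of the kind of focus bookkeeping we end up documenting; the selectors and the returnFocusTo variable are hypothetical, not taken from any real application:

```typescript
// Minimal sketch of dialog focus bookkeeping for screen-reader users.
// Names and selectors here are illustrative, not from a specific application.

let returnFocusTo: HTMLElement | null = null;

function openErrorDialog(dialog: HTMLElement): void {
  // Remember where the user was so focus can be restored predictably later.
  returnFocusTo = document.activeElement as HTMLElement | null;

  dialog.removeAttribute("hidden");
  dialog.setAttribute("role", "alertdialog"); // announced as a dialog by screen readers
  dialog.setAttribute("aria-modal", "true");
  dialog.setAttribute("tabindex", "-1");      // make the container itself focusable

  // Move keyboard focus into the dialog so the user knows it exists.
  const firstControl = dialog.querySelector<HTMLElement>("button, [href], input, textarea");
  (firstControl ?? dialog).focus();
}

function closeErrorDialog(dialog: HTMLElement): void {
  dialog.setAttribute("hidden", "");
  // Send focus back to where it was (or perhaps to the field in error),
  // rather than letting it silently fall to the top of the page.
  returnFocusTo?.focus();
}
```

Even with this, all the questions above (does the user hear that the dialog opened? where does focus land when it closes?) still have to be spelled out for every application.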

What different communication elements may be emphasized? Would casual computers (like cell phones) do the same things or different ones?

I know answers could go in a steampunk direction, but they don't have to, and the proposed tech doesn't need to stay that way.

$\endgroup$
12
  • 2
    $\begingroup$ Seems like we would all learn braille in school, and handwashing before using a keyboard/braille-output would be socially enforced. Braille is not a fast way to transmit information, so brevity would be valued over format, and formatting must add value or context to the message. It's a whole different way of thinking about communication. $\endgroup$
    – user535733
    Commented Aug 22, 2019 at 16:38
  • 2
    $\begingroup$ "When windows pop-up, where did the focus go?" The focus went where it went. The screen reader should not try to guess; it should of course ask the underlying windowing system which window has the focus. "Do the users know there's a new dialog on screen?" Usually, but definitely not always. I have typed inappropriate input in the wrong window many many times. $\endgroup$
    – AlexP
    Commented Aug 22, 2019 at 18:03
  • 3
    $\begingroup$ "For the sighted, the focus is normally obvious": that used to be the case, but then Windows 10 came and now it isn't all that obvious... Sometimes the window with focus has a colored titlebar, sometimes it doesn't, sometimes it doesn't have a titlebar at all. I've learned to anticipate where the focus is and always click pre-emptively in the window which I think has the focus to confirm that the text insertion cursor is there. $\endgroup$
    – AlexP
    Commented Aug 22, 2019 at 20:17
  • 3
    $\begingroup$ @AlexP Not when Windows 10 came out. On User Experience, I asked Why is low contrast between active and inactive window title bars considered a good thing? a year before Windows 10, by which time lack of contrast between active and inactive window title bars in Windows had already been a frustration for me for some time. $\endgroup$
    – user
    Commented Aug 23, 2019 at 6:17
  • 1
    $\begingroup$ There is a sci-fi short story titled The Country of the Blind. It only relates tangentially to the world you are building, in that the society has been organized around the needs and capabilities of blind people, and a sighted person does not find that an advantage. $\endgroup$ Commented Aug 23, 2019 at 11:00

9 Answers

9
$\begingroup$

I think the biggest difference would be in the development of user interfaces.

If computers had been designed primarily by and for blind users, I imagine a much more sophisticated version of the Refreshable Braille Display would be in common use by now. I'm imagining a grid of keys instead of a single row forming a kind of tactile screen. This would allow for parallel processes happening in different zones on the grid, like windows. Users could tap in a particular zone to get an audio readout of that process, to advance the readout, or to drop the cursor and start typing; much like modern haptic screens, different touches could indicate different actions. An audio cue could alert users of a pop-up alert, which would always appear in a designated zone. Afterwards, the user could return their hands to whatever process they wanted. Audio cues could also alert users to things like empty fields; if the grid is labeled like a battleship board, then an alert like "Input required in Zone M6" could be used to direct the user.
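To make the zone idea concrete, here's a rough sketch (TypeScript) of how a battleship-style label like "M6" could map onto a block of cells in the tactile grid; the grid dimensions and cell counts are made-up assumptions:

```typescript
// Rough sketch of battleship-style addressing for a tactile grid.
// All dimensions below are hypothetical.

const ZONE_ROWS = 20;      // zones lettered A..T down the grid
const ZONE_COLS = 12;      // zones numbered 1..12 across the grid
const CELLS_PER_ZONE = 8;  // Braille cells available within each zone

interface ZoneSpan {
  row: number;       // which lettered row of zones
  cellStart: number; // first tactile cell column in the zone
  cellEnd: number;   // last tactile cell column in the zone
}

// "M6" -> the block of tactile cells that the audio cue is pointing at.
function zoneToCells(zone: string): ZoneSpan {
  const row = zone.toUpperCase().charCodeAt(0) - "A".charCodeAt(0); // M -> 12
  const col = parseInt(zone.slice(1), 10) - 1;                      // 6 -> 5
  if (row < 0 || row >= ZONE_ROWS || Number.isNaN(col) || col < 0 || col >= ZONE_COLS) {
    throw new Error(`Unknown zone: ${zone}`);
  }
  return { row, cellStart: col * CELLS_PER_ZONE, cellEnd: (col + 1) * CELLS_PER_ZONE - 1 };
}

// An alert like "Input required in Zone M6" could raise an attention pattern
// in exactly zoneToCells("M6") while the user's hands stay wherever they are.
console.log(zoneToCells("M6")); // { row: 12, cellStart: 40, cellEnd: 47 }
```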

If blind people continued to be the primary developers of computers past the initial stages, advances in tactile interfaces would probably have replaced the advances in graphics. A tactile screen, like the one described above, would be a mechanical marvel, but wouldn't require much processing power to run; certainly nothing like playing a video. So the push for more and more powerful processors wouldn't have been as great. The tactile screen might be able to produce static images, by pushing pins up to form the outline of a shape, but probably most entertainment on computers would be in audio form. The podcast boom would have come much sooner, probably replacing the YouTube boom.

I hope that helps!

$\endgroup$
4
  • $\begingroup$ I love it! That's part of what I was curious about. So we may not have had a Moore's-law style of increase, since processing power wouldn't be the biggest limiter? $\endgroup$ Commented Aug 22, 2019 at 19:17
  • $\begingroup$ I believe that would be true, though I'd be open to arguments to the contrary. With a tactile interface, you would be limited by the human ability to process audio and haptic information. So, once you had the ability to run the tactile screen and produce crisp sound, the push to improve processing power would not be commercially urgent. Only in the last few years, where the need to process large data sets has become critical, would there be a real need for powerful, commercially available processors. $\endgroup$ Commented Aug 22, 2019 at 19:42
  • 4
    $\begingroup$ Speech-recognition would probably also be in a much more advanced state than it is now. Alexa and Siri and the like would've been developed decades ago and the modern versions might be on the level of the Star Trek TNG computers. (I personally hate talking to computers, but if I was blind, I'd probably appreciate it more.) $\endgroup$ Commented Aug 23, 2019 at 18:03
  • $\begingroup$ Well, our modern laptop may become just a topological keyboard. And topological designer may become the profession, not graphic designer. $\endgroup$ Commented Aug 27, 2019 at 20:14
70
$\begingroup$

Youngsters. The first computers read and wrote punched cards or punched paper tape; they did not have any kind of user interface where being blind or sighted mattered.

It was perceived as a major revolution when some smart technician adapted a typewriter to be able to print computer output; electric teletypewriters were then adapted so that operators could type commands into the computer. But teletypewriters are still purely linear devices.

Up until the late 1960s or early 1970s most users did not even see the computer or come anywhere near it. One wrote a program on a special form, the nice ladies in the card punch room converted it to punched cards, the cards were given to an operator through a wicket, and a note was made in a register; one day later one queued to receive the cards back, together with whatever output the program had produced, printed on 132-column fan-folded paper.

(Ever wondered why terminal emulators have options for 80 or 132 characters per line? That's why. A punched card could hold 80 characters and was treated as one line of input. One line of computer printout had 132 characters. Those numbers were burned into the collective memory of informaticians.)

To this day, operating systems in the Unix lineage are ready to interact with the user via a dumb terminal, with no graphics and no full-screen character-cell capabilities.

The conclusion is that it doesn't matter where the first computers were made. It doesn't matter whether the inventor and the first users were blind or sighted. The first computer terminals which had the ability to run full-screen cursor-addressable character-cell interfaces (not graphics, just a rectangular array of characters) became available in the mid-1970s; that is, a staggering 25 years after the introduction of the UNIVAC I, the first commercially available programmable computer, and 30 years after the first well-known programmable computer, ENIAC, became operational in production for the U.S. Army. A full human generation separates the first computers from the first user interfaces where being sighted was necessarily an advantage.

$\endgroup$
11
  • 11
    $\begingroup$ @April: Computer screens became a thing quite late. For decades computer user interfaces, if they even had a user interface, were centered around teletypewriters, most usually Teletype Model 33 or (in the glorious socialist world) clones of it. When screen-based terminals first appeared (in the late 1960s, early 1970s) they had no extra functionality, and were even called "glass teletypewriters". Basically, as soon as the general tech level allows for full-screen user interfaces, computers will get full-screen user interfaces. $\endgroup$
    – AlexP
    Commented Aug 22, 2019 at 20:11
  • 14
    $\begingroup$ Get off my lawn, you internet-in-your-pocket-having weebs. $\endgroup$
    – Mazura
    Commented Aug 23, 2019 at 3:11
  • 1
    $\begingroup$ @AlexP Remember we are supposed to be assuming computers designed for the blind. I thought dot matrix did not come into general use until relatively late. I used line printers and typewriters with ball type mechanisms before I saw a dot matrix printer. $\endgroup$ Commented Aug 23, 2019 at 18:42
  • 3
    $\begingroup$ @PatriciaShanahan: We are talking about computers "invented at a school for the blind". Invented. Wherever they were invented does not really matter; the first computers, and the computers which followed for decades, had no user interfaces to speak of. What matters is that when computers actually get user interfaces those interfaces will most certainly be made to be suitable for the vast majority of users. And, really, the teletypewriter consoles used from the late 1950s to the 1970s used dot-matrix printers. (7x5 dots per character; ugly, not suitable for general use, but good enough.) $\endgroup$
    – AlexP
    Commented Aug 23, 2019 at 20:29
  • 2
    $\begingroup$ @PatriciaShanahan: And anyway, daisy-wheel printers or ball printers could have been made to switch between normal printing and Braille with a simple change of the wheel or the ball. Sighted users became advantaged over blind users only when computing became interactive; it started here and there in the very late 1960s (for example, the TENEX operating system for the DEC PDP 10) and only gained momentum in the mid-1970s with operating systems such as Unix, RSX-11M, and VMS. $\endgroup$
    – AlexP
    Commented Aug 23, 2019 at 20:39
14
$\begingroup$

I see no difference in how computers would have developed.

The first computers used punched cards to take input and give output (one of the favorite pranks among nerds in those days was to swap two random cards in the physical folder containing them when the owner was not paying attention), and graphics came much later.

And the reason is that when you move to mass usage, you have to rely on something that fits the masses. Punched cards don't. Braille doesn't, except for those who have to learn it. But we as a species use sight as our main means of communication, so the move to graphics was inevitable.

$\endgroup$
9
  • 1
    $\begingroup$ I'm proposing a bit of an AU -- given that a core of blind pre-computer scientists are focusing on the problem, and we're dealing with legacies of that system (just like area codes have legacies of the dial system, with the most populous-at-the-time regions having the easiest-to-physically-dial codes, or like our computer keypads have 9 in the upper right, but phones have 9 in the lower right). $\endgroup$ Commented Aug 22, 2019 at 15:54
  • $\begingroup$ That's true from the data processing point of view, but not true from operational perspective. Even the earliest computers had a lot of lights on their control consoles. And as with any visual consoles, all those lights had to be perceived by the operator at once. To make it usable for blind operators, consoles and panels would need significant redesign. $\endgroup$
    – Alexander
    Commented Aug 22, 2019 at 16:33
  • 1
    $\begingroup$ @Alexander: Operators were not users. Operators were technicians who ran the computer for the users. I know, I was a user; the operators were god-like. Users never even came anywhere near the computer. $\endgroup$
    – AlexP
    Commented Aug 22, 2019 at 17:55
  • 2
    $\begingroup$ Not necessarily. The blinkenlights could have been replaced with tactile buttons/bumps with no loss of functionality. It's not as if anybody was ever expected to react in real-time to them; their only use was in diagnostic mode, and in that mode they were stable so that there was no need to see them all at the same time -- a blind operator could feel them with the fingers. $\endgroup$
    – AlexP
    Commented Aug 22, 2019 at 18:48
  • 1
    $\begingroup$ @Alexander "of course they can be replaced, but that would change the design." You should go back and read the main post, because they are specifically asking for "how computers would be different" in an "alternate universe" where "accessibility is not an afterthought", i.e. where blind people are the Operators. $\endgroup$
    – Aster
    Commented Aug 25, 2019 at 11:21
6
$\begingroup$

There's a story from the age of the Altair. The Altair was one of the first computers that a hobbyist could afford. You put it together yourself, and then hopefully it worked. It became an odd solution in search of a problem. Nobody quite knew what to do with it. There were "computing clubs" where people met to try to figure out what it could do.

(Image: an Altair 8800)

In one such meeting, there was one student huddled in the corner rapidly flipping switches. You see, you programmed this by flipping a row of 20 or so switches and reading the output on a grid of LEDs. Partway through, somebody accidentally tripped over his power cord, clearing the RAM and forcing him to start over again.

When he finally finished, he produced an AM crystal radio, tuned it, and from the speaker erupted a tinny version of Twinkle Twinkle Little Star. The room went quiet. He had varied the size of loops in the program to cause the computer to emit AM modulated noise from the CPU itself.

I tell this story because it points out that making sound was possible remarkably early on, and with relatively cheap hardware. Sound is also something that humans are remarkably good at processing.
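For the curious, here's a back-of-the-envelope sketch (TypeScript) of the arithmetic behind the trick. The clock speed is the real Altair 8800 figure, but the per-iteration cost is an illustrative assumption, and this is not the original program:

```typescript
// Back-of-the-envelope arithmetic for the Altair radio trick.
// The pitch of the emitted tone is set by how long each timing loop runs.

const CPU_CLOCK_HZ = 2_000_000;   // Intel 8080 in the Altair 8800 ran at ~2 MHz
const CYCLES_PER_ITERATION = 15;  // assumed cost of one empty loop pass (illustrative)

// Iterations needed so one loop lasts half the period of the desired note,
// toggling the pattern of bus activity at an audible rate.
function iterationsForNote(frequencyHz: number): number {
  const halfPeriodSeconds = 1 / (2 * frequencyHz);
  return Math.round((halfPeriodSeconds * CPU_CLOCK_HZ) / CYCLES_PER_ITERATION);
}

console.log(iterationsForNote(440)); // ~152 iterations per half-cycle for A440
console.log(iterationsForNote(880)); // ~76 for the octave above
```

A few hundred bytes of carefully timed loops is all the "sound output" you need, which is the point: audible output was within reach of the very earliest cheap hardware.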

So I would expect sound to replace a substantial portion of the output. In the early days, you knew what output you wanted. It wasn't like you searched through reams and reams of paper. You didn't have the spare CPU cycles. You made it do exactly what you wanted. Thus, the linear output format of an auditory signal would be very effective.

Your blind individuals would certainly be capable of appreciating a tremendous array of musical sounds. That information could be conveyed in bells and whistles much faster than we would generally think. Different chords could be used to pack information with remarkable density. Directional sound could be used to pack it even further.

You would certainly still want paper solutions for permanent records, but those are easier to deal with. It's the transient signals that are hard, and sound would take care of those very well.

$\endgroup$
7
  • $\begingroup$ Are you sure that this guy named Steve Dompier and the song wasn't "Fool on a Hill"?: MITS (August 1975). "Worlds Most Inexpensive BASIC language system". Radio-Electronics. Vol. 46 no. 8. p. 1. MITS advertisement $\endgroup$ Commented Aug 23, 2019 at 11:22
  • $\begingroup$ But, but, but. Altair was founded in 1985. That's not "early", working stored program computers were almost 40 years old by then! (Manchester Baby, 21st June 1948). $\endgroup$ Commented Aug 23, 2019 at 14:49
  • 3
    $\begingroup$ And in a Galaxy, far, far, away, there was a little robot hero named Artoo Deetoo, who normally only communicates to the world through a series of beeps, burps and whistles, and yet everyone around understood him. $\endgroup$
    – CGCampbell
    Commented Aug 23, 2019 at 16:37
  • 2
    $\begingroup$ @MartinBonner I have no idea about the Altair company but the Altair computer came out in 1974 (it's referenced in the article I linked, for instance, and that was 1975) $\endgroup$ Commented Aug 23, 2019 at 19:12
  • $\begingroup$ 1968-1970, my algebra teacher’s husband programmed a PDP-8 to play recognizable music on a pocket radio placed near it. $\endgroup$
    – WGroleau
    Commented Aug 24, 2019 at 4:01
5
$\begingroup$

I think a good technology to consider in comparison is the telegraph. The telegraph also began as a technology for processing bits of information that, while accessible (it used the sound and touch of tapping), was cumbersome: it required the user to learn a specialized code both to input messages and to interpret them. So a specialized profession developed around the telegraph, and it gave way once an interface easier for the layman (the telephone) was developed. So, if you have punch card computers adopted a good century before cathode ray tubes were sophisticated enough to create purely visual displays, you need to think about how you would stop CRTs from overwhelming punch cards and Refreshable Braille Displays.

Keeping telegraphs in mind, one interesting possibility is if your early punch card computers could interface directly with telegraph lines. The French were already pioneers at long distance communication: under Napoleon, signal towers were built connecting Paris to the frontiers of the country. What if a series of punch cards at a central computer in Paris could be sent to a punch card writer in Marseilles almost instantly? You could have a sort of internet under Napoleon III!

Ultimately, though, I think tactile interfaces are going to have a hard time catching on very widely, even with these boosts. The best bet is to try to stimulate a jump to more audio displays. This is where a lot of technology is trying to move now: a natural language interface, like Siri or Alexa. Maybe if a punch card internet develops, you'd still have specialized data entry types for input, but the displays would instead become temporary phonographs?

Honestly, there are a lot of repercussions that could come from this, but good luck exploring! Some other resources to look at are 'Jacquard's Web' by James Essinger, a non-fiction book on the development of the Jacquard loom and some of its significance, and 'The Difference Engine' by William Gibson and Bruce Sterling, which not only helped launch the steampunk genre but also deals with Babbage-style central computers as the major point of departure for the world.

$\endgroup$
2
  • 1
    $\begingroup$ I like this -- I was sort of picturing a Napoleon + Discworld Clacks system. I do think that since we're a while before movies and TV, but close to the telephone/radio age, we could end up with Siri/Alexa before 8-bit graphics! (I read Difference Engine ages ago, and a bunch of Neal Stephenson, but I barely remember it now. I'm not too into steampunk as a genre, due to the way it seems to skip over the colonization issues, but I will look for the nonfiction recommendation!) $\endgroup$ Commented Aug 22, 2019 at 19:20
  • 1
    $\begingroup$ Morse code circa mid-1800's didn't catch on. Why? Morse is extremely computer-friendly. Visual cortex consumes major neurons - audio neurons similar: information rate can be high (Adept users go >60 words/min). It is 2-dimensioned: many signals combine using differing audio frequencies...focus can switch from one context to another instantly. In browser apps it is hands-off, a distinct advantage. Going human->computer requires two switches and can go at least as fast as a keyboard - much simpler. Does Morse learning curve differ that much from Braille? $\endgroup$
    – glen_geek
    Commented Aug 24, 2019 at 15:09
2
$\begingroup$

I think that computers would develop very different mechanisms to display information.

Some form of Braille dot matrix that stimulated 5 or 10 fingers at a time would run out of practicality for many applications. It would work for simple question-and-answer type problems, but data visualization wouldn't work well.

But humans have sensitive skin on their faces and palms that can feel changes in heat very well, especially in cold environments. I can remember sitting in a cold movie theater next to a woman I liked very much; I could feel her sitting next to me from 12" away by her body heat.

And we can hear binaurally and can feel sounds on our skin.

So I can imagine a very complex presentation system using sound, temperature and infra-sound projected at my face and palms encoding a highly sophisticated set of information dense symbols as a display for a blind race of computer users. It would be massively parallel in its capacity to represent data.

And it would probably limit the number of people who could become programmers, or at least interpret the data sets, since only very perceptive individuals could correctly discriminate the data. But I think that would weed out the stupid programmers, and I am, in principle, all for that.

$\endgroup$
2
$\begingroup$

This reminds me of the braille interface that the character "Whistler," played by David Strathairn, used in the movie "Sneakers." Maybe they are real things, but it would take a non-blind person to build them.

As an alternate path, you can focus on the person. Blind people could become very adept at quickly processing streams of data and making intelligent decisions. They could be sought after as a kind of "organic AI" for faster stock-trading reactions, in a time when the computers of the day cannot be programmed to be so intelligent. Organic AI, what a misnomer.

Like the "computers" in Dune but more interesting.

$\endgroup$
1
$\begingroup$

How about we throw Braille out of the room for a bit. Instead, think of something that combines the other senses. The first thing that comes to mind is the musical instrument used to make the music in the movie Forbidden Planet. The idea is to make it more accurate and able to respond to hand and finger movements in 3D. Sound would tell the user where things are and even give a texture to those things. Now, instead of seeing files and directories as flat entities, they can be interpreted as something in space; maybe they would behave like spheres inside spheres. Programming could be grabbing one blob and dropping it into another somewhere in the space. Drag and drop becomes much more realistic. Adding more feedback would improve the system. For instance, you could have the fingers being tracked (talking here about more modern technology) so air or heat could be blown around them to help improve the boundary and texture experience.

I remember watching a show about someone who saw no colour and got a camera implanted on his head on an arm, which would describe colours by sound. He then one-upped us normal beings by expanding the spectrum outside visible light.

$\endgroup$
0
$\begingroup$

So a computer doesn't need to be print-derivative

But it's very inefficient for users if it isn't.

According to this Quora question, Braille reading speeds are around 125 words/minute on average, and may reach as high as 200 words/minute for exceptional Braille readers. For telegraph operators, Morse code was even slower - according to Wikipedia, around 20 words/minute for an average operator, with a world record of 75 words/minute. For reading print or screen though, according to this online test, 200 words/minute would be average or perhaps on the low side of average, and speed readers can manage up to 1000 words/minute.

So taking in information from any other medium will always be slower. There is a strong incentive for the means of communication to be more efficient, and that will always be an obstacle to any other display mechanism.

Then we have the technical problems with a Braille reader. However you construct it, a Braille readout device can only display a very small number of words at a time. According to the Braille standard, each Braille character takes 6.2mm. This makes it equivalent in size to 17.5pt text. This is very much larger than normal printed text, which reduces the information density for any given page or screen. And until extremely recently, Braille readout devices required an array of small solenoids to drive pins forming the letters, which was bulky, unreliable and above all expensive. Even with alternative technologies though, it is still impossible to get around the low information density of a Braille readout device.
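To put rough numbers on that density gap, here's a quick sketch (TypeScript); the 6.2 mm cell size is the figure cited above, while the usable line width and print character width are illustrative assumptions:

```typescript
// Quick comparison of characters per line: Braille cells vs. ordinary print.
// Line width and print character width are illustrative assumptions.

const LINE_WIDTH_MM = 180;     // usable width of a letter/A4 page
const BRAILLE_CELL_MM = 6.2;   // per the Braille standard cited above
const PRINT_CHAR_MM = 2.54;    // a 10-characters-per-inch monospaced font

const brailleCharsPerLine = Math.floor(LINE_WIDTH_MM / BRAILLE_CELL_MM); // 29
const printCharsPerLine = Math.floor(LINE_WIDTH_MM / PRINT_CHAR_MM);     // 70

console.log(`Braille: ${brailleCharsPerLine} chars/line, print: ${printCharsPerLine} chars/line`);
```

Even before vertical spacing is accounted for, that is roughly a 2-3x density penalty per line under these assumptions.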

And then we get onto the sheer practicality of the invention taking place amongst people who are not sighted. Whilst non-sighted people could certainly have been responsible for the basic concepts, the practicalities of engineering construction make it impossible that they could have designed and constructed it. There simply is no equivalent of engineering drawings in Braille. So "Let's say that computers were invented at a school for the blind in mid-1800s"? Let's say that they could no more do that than they could grow wings and fly.

TL;DR: Basic concept is impossible, any alternative display technology would never be a success.

$\endgroup$
7
  • 2
    $\begingroup$ Engineering drawings not existing in Braille? Maybe. 3d projections don't make much sense as 2d tactile illustrations. I mean, I'm blind and I made Braille diagrams and instructions for carpentry projects, but none so detailed as all that (1/32 inches is as accurate as I can get, and that's a good day). $\endgroup$
    – CAE Jones
    Commented Aug 23, 2019 at 16:53
  • $\begingroup$ Addendum: I can cut or drill within 1/32". I've tried doing things with logic circuit chips, and couldn't make heads or tails of that, but the chips were big enough that, had they been designed with blind people in mind, they could have been tactually distinct. I'm not convinced that would have been enough for me to do anything with it, though. $\endgroup$
    – CAE Jones
    Commented Aug 23, 2019 at 17:08
  • $\begingroup$ @CAEJones Good work on that! :) And apologies if I sounded too much of a downer on the abilities of blind people. I know there's plenty of stuff which is perfectly doable - we had a blind piano tuner, for instance - and I don't want to do the equivalent of mansplaining. But realistically some things aren't going to work well without a good way to transmit the information, and I can't figure out a way that would work. $\endgroup$
    – Graham
    Commented Aug 23, 2019 at 21:15
  • $\begingroup$ There's no particular reason why you couldn't put a Braille identifier on top of a traditional DIP-packaged microchip. The industry doesn't do that because electronics assembly is done by either sighted people (who can read laser-etched or painted markings) or robots (who learn the identity of each component by other means, say a tube or reel of identical parts with a barcode on the side). There's also a continual drive to make the components smaller, so for state-of-the-art products it's all robots these days. $\endgroup$
    – Chromatix
    Commented Aug 23, 2019 at 22:16
  • $\begingroup$ @Chromatix Remember a Braille character is 6.2mm. That gets you one letter for an 8-pin DIL at best. $\endgroup$
    – Graham
    Commented Aug 24, 2019 at 8:18
