Tuesday, March 31, 2015

Shields Up! Lay In A Course For Mars



No one can deny that Gene Roddenberry was a futurist, even if that wasn't his profession. Futurists like Michio Kaku still invoke the ideas that Roddenberry put forth in an entertainment venue, ideas that gave people so much to think about and shoot for.
Gene Roddenberry wasn’t a scientist. He took only a few college courses, and most of those were writing classes. He was an accomplished pilot, so he knew about lift and some basic physics, but his only civilian job outside of writing was as a Los Angeles police officer.


His first TV scripts in LA reflected this line of work; he wrote for TV shows called The Lieutenant, Have Gun - Will Travel, and Highway Patrol. So where did all that sciencey technology come from?

Roddenberry was definitely a futurist. This series of posts has shown, if nothing else, just how savvy he was in creating fictional technologies that had an uncanny ability to become science realities. But, for the life of me, where did he come up with gravitons – hypothetical subatomic particles that would carry the force of gravity between bits of matter? He was walking a beat in LA in the 1960s. That sounds like a lot more than just a convenient storytelling convention.

Gravitons played a role in several of the Star Trek technologies, including today’s topic – deflector shields, or just “shields.” There are a couple of different explanations as to how the shields on the USS Enterprise worked, but the earlier and more accepted explanation in the Star Trek canon is that the ship had emitters that sent out graviton fields.


Star Trek proposed two kinds of shields. One was large and ellipsoid; it protected a large area beyond just the ship. The second was contoured and held just meters outside the hull. The shields also had problems – weapons fire couldn’t pass through them unless it matched their frequency, and you couldn’t transport through them.
The gravity field generated around the ship by the emitters protected it by warping space-time and deflecting matter and energy away from the hull. The force field wasn’t based solely on electromagnetic energy, but electromagnetism must have played a role, since Geordi, Mr. Scott, and Spock were constantly suggesting alterations to the shield frequencies.

The idea of an electromagnetic shield is much closer to our reality at present, since we haven’t yet identified a graviton particle. Electromagnetism was a great choice for Roddenberry, since we all have experience with magnetic fields (two similar poles on magnets will repel each other). Electrical fields likewise repel similar charges. This sounds like a force field we could believe in for the defense of a ship.

Humans on Earth in 2015 don’t have a real need for shields geared to interstellar battle – we haven’t blundered into space wars yet. But we do have a very pressing need for deflector shields in space. And we’re coming close to achieving them.

NASA, the ESA, and many other space programs are taking aim at Mars. We have sent probes, rovers, and satellites; now it’s time for humans to make the trip. But this brings big problems along with the big promise. Space is full of cosmic rays, high-energy electrons, high-speed protons and even heavier atoms. They can all kill you over time or fry your equipment.

Radiation in space will make you sick at the least, and don’t underestimate the problem of being sick in space – think about vomiting in a space suit. But it can also damage DNA and most certainly lead to infertility, given enough time and exposure.
All this damage could occur inside the space ship on a long journey to Mars or beyond, not just on space walks. Most high-energy radiation will pass through the hull of a spacecraft and do damage to the occupants. We need protective shields to keep out the bad particles and waves.


Six months on the ISS doesn’t give an astronaut anywhere near the radiation exposure that six months on Mars, or going to and from Mars, would. The reason is that the ISS is still within the Earth’s magnetosphere, so it’s protected from most of the dangerous radiation. To go to Mars, we’ll have to take our own shield along.
Star Trek: Insurrection showed us an example of using a force field to protect the crew. When Picard and company were observing the Ba’ku from a cloaked duck blind, they used a “chromodynamic shield” to deflect or block the metaphasic radiation that inundated the planet. A force field protected the crew, although it was protecting them from rays that would stop their aging and did in fact restore Geordi’s eyesight for a while.

We don’t have a chromodynamic shield, so we've been looking to more conventional mechanisms of shielding. We could always make the walls of a long-distance spacecraft thicker. Concrete would work pretty well, if it were dense and about two feet thick. A foot or so of aluminum might do just as well. But these are very heavy, and heavy things don’t make for good space gear.

Interestingly, water is a great absorber of radiation. We could put it between the walls of a spacecraft, and it could do a pretty good job of protecting the crew and the electronics. Hydrogen gas might work as well; after all, water is just hydrogen and oxygen. The sleeping quarters on the ISS are lined with impregnated polyethylene as an additional radiation shield.
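
For a feel of the mass penalty, here's a minimal sketch using a simple exponential attenuation model, I = I0 * exp(-x/L). The attenuation lengths and the 90% target below are assumed, illustrative numbers for comparison only, not mission-design values:

```python
import math

# Illustrative attenuation lengths in meters for I = I0 * exp(-x / L).
# These are assumed placeholder values, not engineering data.
ATTENUATION_LENGTH_M = {"concrete": 0.25, "aluminum": 0.40,
                        "water": 0.30, "polyethylene": 0.28}

DENSITY_KG_M3 = {"concrete": 2400, "aluminum": 2700,
                 "water": 1000, "polyethylene": 950}

def thickness_for_reduction(material, fraction_blocked=0.9):
    """Wall thickness (m) needed to block the given fraction of incoming flux."""
    return -ATTENUATION_LENGTH_M[material] * math.log(1 - fraction_blocked)

for m in ATTENUATION_LENGTH_M:
    x = thickness_for_reduction(m)
    areal_mass = x * DENSITY_KG_M3[m]  # kg of shield per square meter of wall
    print(f"{m:>12}: {x:.2f} m thick, {areal_mass:,.0f} kg per square meter")
```

Under these made-up numbers the hydrogen-rich options win on mass per square meter of wall, which is the whole argument for water and polyethylene shielding.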

But what might work best? Human waste. A privately funded mission to Mars led by Dennis Tito plans to use the astronauts' own excrement as a radiation shield by packing it between the walls of the spacecraft. Organic molecules and water block radiation very nicely, and the crew will be producing more shielding every day. It’s a strange thought that a Mars mission might be jeopardized by constipation.


Dennis Tito is a billionaire investment manager, but first he was an engineer. He was the first person to purchase a ride into space (on a Russian rocket), and now he wants to fly people around Mars – not to Mars, just a flyby in 2018 or so. The planets will be aligned then to give a 501-day round trip. He wants to use the crew's waste as radiation shielding.
Thank goodness science has kept looking for radiation shields, and it's quite the boon that we have natural examples to learn from. The ionosphere of Earth is a great deflector. It’s the reason shortwave radio operators can send weak signals very, very far: the signals bounce off the bottom layers of the ionosphere and back down to Earth, a trick called skywave propagation, or skipping. The lower the angle on the way up, the farther over the horizon the signals will be when they bounce back down.
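
The geometry behind that last claim is simple enough to sketch. A minimal flat-mirror model, assuming the reflecting layer sits at a fixed 300 km (real hops also depend on Earth's curvature and layer conditions, so the low-angle numbers here are optimistic):

```python
import math

def skip_distance_km(takeoff_angle_deg, layer_height_km=300.0):
    """One-hop ground distance for a signal mirrored by a flat ionospheric
    layer at the given height: d = 2h / tan(takeoff angle)."""
    return 2.0 * layer_height_km / math.tan(math.radians(takeoff_angle_deg))

for angle in (45, 30, 15, 5):
    print(f"takeoff {angle:2d} degrees -> about {skip_distance_km(angle):,.0f} km per hop")
```

Halve the takeoff angle and the hop distance roughly doubles, which is why operators chasing distant contacts keep their antennas radiating low.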

The ionosphere (80–1000 km altitude) is the part of the atmosphere of Earth that protects us from cosmic radiation. It consists of ionized air molecules; the ionization comes from the Sun’s energy. And what’s an ionized gas called? Plasma.

So we have a plasma shield around Earth – remember this, as it will come up again. The magnetosphere (roughly a 40,000 nanoTesla field at the surface, reaching hundreds of thousands of km into space) is produced by the churning of the Earth’s molten metallic outer core. It participates in the protection because the ions of the plasma in the ionosphere are charged, and the interaction between that moving plasma and the magnetic field sets up an electric field.


The magnetosphere, in coordination with the plasmasphere, shunts most of the electrons of the solar wind and the high-energy protons around the Earth. Where the magnetic field lines come out of the Earth at the poles, you have the polar cusps. Some radiation can get in there – we see it as the auroras.
A new study shows that the plasma’s interaction with the magnetic field becomes even more important during solar storms, which greatly increase the energy of the radiation coming at Earth. The plasmasphere, a region outside the ionosphere, reacts to greater energies coming from the Sun by pluming outward to become more protective.

All this protection comes from the fact that the ions in plasma are charged, and charged particles moving through a magnetic field get pushed off course (the Lorentz force). So the high-speed electrons of the solar wind and the protons and heavy ions of cosmic radiation that come close to Earth are deflected by the magnetosphere, the plasmasphere, and most importantly by the electric field produced by the interaction between the plasma and the magnetic field. The vast majority of charged particles are swept around Earth and merge again safely behind us. Now that’s a force field.
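
To put numbers on that deflection: a charged particle in a magnetic field circles a field line with gyroradius r = mv/(qB). A minimal sketch for a solar wind proton, with both input values rough, assumed figures for illustration:

```python
PROTON_MASS = 1.67e-27    # kg
PROTON_CHARGE = 1.60e-19  # coulombs

def gyroradius_km(speed_m_s, field_tesla):
    """Radius of the circle a proton traces around a magnetic field line:
    r = m * v / (q * B), converted to kilometers."""
    return PROTON_MASS * speed_m_s / (PROTON_CHARGE * field_tesla) / 1000.0

# A ~400 km/s solar wind proton in a ~50 nanoTesla boundary-region field.
print(f"gyroradius ~ {gyroradius_km(400e3, 50e-9):.0f} km")  # ~84 km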

Several research groups have begun to think about how this could be mimicked on a small scale to protect astronauts in space. A 2005 project from NASA contemplated using Vectran balloons covered in gold that could be charged to positive or negative values. Placed above a moon base and electrified, the balloons might create an electrostatic bubble that would shunt radiation away and produce a protected cavity underneath.

No one has thought more about producing a plasma shield than Dr. Ruth Bamford of the Rutherford Appleton Laboratory in England. Since 2008 she has been working on producing mini-magnetospheres that would gather the thin plasma of space, using a magnetic field to hold it in place and build up its density. Together, they would produce an electric field just like the Earth does, and this would shunt radiation and particles away from the protected object.


On the left is the Reiner Gamma lunar swirl. On the right is the Reiner crater – no, not for Carl Reiner. We used to think the swirls (three on the Moon) were dead areas: no magnetic field, no water, no nothing. Now we see they are the protected areas and are the most interesting places on the Moon.
NASA has also thought about this, using a plasma cloud (probably made from hydrogen gas) on the Sun side of a spacecraft, held in place by a superconducting wire mesh. Unfortunately, superconductors only produce a magnetic or electric field if kept below their transition temperature, and even for the best of materials (YBCO and BSCCO) this is somewhere in the range of -265˚F. If the mesh were exposed to the Sun in space, it would reach several hundred degrees at least. Better keep thinking.

A discovery in 2013–2014 brought the thinkers back to Dr. Bamford's mini-magnetospheres. It was discovered that small parts of the Moon’s surface are protected from radiation. It turns out that these areas produce weak magnetic fields (a few hundred nanoTesla), and those fields are holding the thin plasma of space in place above them. The field concentrates the plasma, and together they produce a protective electric field that deflects particles and keeps the surface of the Moon at those spots from being irradiated. Irradiation turns the surface dark, while these “lunar swirls” remain light colored.
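
How can a few hundred nanoTesla matter? A magnetic field can stand off a flowing plasma roughly where its magnetic pressure balances the wind's ram pressure, B²/2μ₀ ≈ n·m·v². A minimal sketch with typical quiet solar wind numbers (assumed, order-of-magnitude values):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
PROTON_MASS = 1.67e-27     # kg

def standoff_field_nT(density_per_cm3, speed_km_s):
    """Field strength whose magnetic pressure B^2 / (2*mu0) balances the
    solar wind ram pressure n * m * v^2, returned in nanoTesla."""
    n = density_per_cm3 * 1e6     # protons per m^3
    v = speed_km_s * 1e3          # m/s
    ram_pressure = n * PROTON_MASS * v**2
    return math.sqrt(2 * MU0 * ram_pressure) * 1e9

# Typical quiet solar wind: ~5 protons/cm^3 at ~400 km/s (assumed values).
print(f"standoff field ~ {standoff_field_nT(5, 400):.0f} nT")  # ~58 nT
```

Tens of nanoTesla suffice right at the boundary, which is why swirl-scale fields of a few hundred nanoTesla can carve out protected cavities.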


This is not a cartoon. The pinkish gas is plasma, and on top of the middle cylinder is a magnet. The magnetic field deflects the plasma, and some of it builds up in density on the leading edge. This leading edge and the magnetic field form an electric field that shunts away more particles. The dark area around the magnet is a protected cavity; no cosmic radiation gets to that point. It’s a real-life deflector shield.
Bamford’s discovery of the mechanisms behind the swirls made her idea of a mini-magnetosphere plasma shield more attractive, since the protective magnetic forces on the Moon are much weaker than previous estimates had deemed necessary. Therefore, a smaller (lighter, less energy-consuming) superconducting coil could be used to create a magnetic field and hold a thin layer of plasma in a bubble around a spacecraft. Bamford’s group has built such a force field in their lab and predicts that a 1.5 ton apparatus could do the job in space!

But wait, there’s more. A plasma shield could also protect a ship from high-energy weapons. Plasma has the capability to absorb photons of energy, like those from lasers or phasers! And since plasma has to be at a very high temperature to keep the electrons from re-associating with the nuclei, being in space would help, since there would be no air to carry heat away from the plasma. It would stay hot and maintain itself. In fact, incoming weapons fire would reinforce the plasma state by adding energy.

Next week – we need to talk more about shields. We’re building some pretty cool ones on Earth right now. And some using plasma are already here.



Contributed by Mark E. Lasbury, MS, MSEd, PhD




Bamford, R., Kellett, B., Bradford, J., Todd, T., Benton, M., Stafford-Allen, R., Alves, E., Silva, L., Collingwood, C., Crawford, I., & Bingham, R. (2014). An exploration of the effectiveness of artificial mini-magnetospheres as a potential solar storm shelter for long term human space missions Acta Astronautica, 105 (2), 385-394 DOI: 10.1016/j.actaastro.2014.10.012

Bamford, R., Gibson, K., Thornton, A., Bradford, J., Bingham, R., Gargate, L., Silva, L., Fonseca, R., Hapgood, M., Norberg, C., Todd, T., & Stamper, R. (2008). The interaction of a flowing plasma with a dipole magnetic field: measurements and modelling of a diamagnetic cavity relevant to spacecraft protection Plasma Physics and Controlled Fusion, 50 (12) DOI: 10.1088/0741-3335/50/12/124025

Walsh, B., Foster, J., Erickson, P., & Sibeck, D. (2014). Simultaneous Ground- and Space-Based Observations of the Plasmaspheric Plume and Reconnection Science, 343 (6175), 1122-1125 DOI: 10.1126/science.1247212





Thursday, March 26, 2015

Angelina Jolie’s Preemptive Strike Against Cancer

Angelina Jolie is back in the news, but not to promote a new film. Rather, she is promoting a personal decision to remove parts of her body before they turn cancerous. Two years ago, she underwent a double mastectomy to avoid the potential of developing breast cancer. This week, her sequel to this surgery was to have her ovaries and fallopian tubes removed. She wrote about this experience on March 24 in the New York Times.

On the surface, this may seem like an overly aggressive tactic to skirt cancer. However, Jolie’s family history is replete with tragic cancer deaths and she herself is a carrier of a mutant BRCA1 gene – more on that momentarily. Considered together, these attributes put Jolie in a high-risk category for cancer, so she elected to remove the time bomb from her system. Several doctors have applauded her decision given the circumstances. Jolie’s willingness to share her stories has created such an increase in awareness of genetic testing for disease that people call it the “Angelina Effect”.

Cancer…don’t mess with Angelina.

As discussed recently on THE ‘SCOPE, cancer is like a cellular rebellion and there is evidence that the cause of that rebellion is largely bad luck. In some cases, all it takes is one bad gene to incite the riot and in Jolie’s case it is BRCA1, which stands for BReast CAncer 1. BRCA1 is a key “biomarker” for cancer, meaning that the sequence of this gene can be an indicator for the likelihood that the cell housing it could go rogue and cause cancer one day. According to one study, women possessing a mutation in BRCA1 have a cumulative lifetime risk of 50%–85% of developing breast cancer and up to 60% of developing ovarian cancer. 
Looking at the diagram above, you don’t need to be a scientist to realize that BRCA1 is a cellular multitasker - best known for its tumor suppressive ability. A mutation in this important gene is likely to screw up a lot of things in the cell, potentially giving the green light for cancer to develop.
BRCA1 is a protein linked to many diverse cellular functions, some of which involve cell growth and the repair of damaged DNA. A mutation in the gene encoding BRCA1 can compromise the activities of the corresponding protein, wreaking havoc in the cell and potentially causing it to start replicating uncontrollably. So BRCA1 is a hero of sorts, a police officer that keeps cells in line. But if the officer is wounded, the cell has a ripe opportunity to rebel and take over the body in the form of cancer.

Angelina’s character Lara Croft can’t wait to get into tombs, but Angelina is doing all she can to delay entry into her own.
Removal of otherwise healthy organs that might go cancerous is not a trivial decision. First of all, biomarkers are informative but not a guarantee that disease is inevitable. Second, no surgery is without risk. Third, in Jolie’s case, her latest surgery will prompt early menopause and eliminate her ability to have more children. Fourth, while prophylactic surgery greatly reduces the risk of cancer, a small chance remains it could still develop. Finally, there are other less invasive treatment and monitoring options for individuals carrying a BRCA1 mutation or other risk factor associated with cancer. Consultation with a physician and oncologist is essential in order to weigh these risks against the results of genetic testing.

A preemptive strike against cancer by removing the suspect organ is not always a good strategy – consider brain tumors, for example!
 
Contributed by:  Bill Sullivan
Follow Bill on Twitter.


James, C., Quinn, J., Mullan, P., Johnston, P., & Harkin, D. (2007). BRCA1, a Potential Predictive Biomarker in the Treatment of Breast Cancer The Oncologist, 12 (2), 142-150 DOI: 10.1634/theoncologist.12-2-142

King, M. (2003). Breast and Ovarian Cancer Risks Due to Inherited Mutations in BRCA1 and BRCA2 Science, 302 (5645), 643-646 DOI: 10.1126/science.1088759

Tuesday, March 24, 2015

A Universal Translator By Any Other Name…


Without the Universal Translator (UT) we wouldn’t be celebrating the 50th anniversary of Star Trek next year. Who wants to watch a TV show where people can’t communicate with one another and can’t figure out what they have in common? You might as well watch family Thanksgiving dinner videos.



Kirk and the Gorn captain were made to “fight it out” by the Metrons. Settling your differences man-to-alien was a big deal on the original series. The Gorn were reptile-like, and they had technology similar to the Federation’s. Notice the silver cylinders; each character has a universal translator. The Gorn have also been spotted on The Big Bang Theory, so their territory is growing.
As a storytelling convention, the UT allowed for near-instantaneous communication between species that had never met before. With the communication problem solved, the story could move on to conflict resolution and figuring out which female alien Kirk was going to kiss.

In the Original Series, the UT was a silver cylinder; you can see the Gorn and Kirk with them in the clip. By The Next Generation, they were incorporated into "com badges." In one episode, Riker and Counselor Troi had them as implants. The Ferengi had them in their ears, and apparently Quark’s had to be adjusted with a Phillips screwdriver every once in a while – although that may have been to remove ear wax.

Humans have about 6,000 spoken languages on Earth as of March 2015 – 6,001 if you want to include rap. We're in quite the hurry to build translators that would help us understand one another – anything to avoid years of high school classes that lead to stronger brains but also badly pronounced foreign names and poor attempts at cooking.

In some ways, our translators have already passed those of Star Trek, but in other ways we're far behind. Most of our problems have to do with understanding just what things all languages have in common and what things are purely cultural, contextual, and completely without precedent. Let’s take a look at our efforts so far.

If you watched Star Trek, you may already realize the way we have surpassed some of their technology. The UTs of Kirk and Picard were for spoken language only. They still had to keep a crew member as a translator to figure out what signs meant on another ship or how to interpret alien consoles. We already have that licked.


Romulan text is supposedly related in visual character to Vulcan. One – someone studies this stuff? Two – I like the color scheme. If you came across this screen on a ship’s console, you’d know it was important; the optical character reader/translator in Word Lens would come in handy here. Three – I find it hard to believe there isn’t a Romulan font package you can buy in the App Store.
Optical character readers have come a long way in the past few years. We now have cameras and software that can view written words in one language and automatically project them on the screen as translations in another language.

Google has one (Google Goggles, now Google Translate for Android), and there’s an app for that on the iPhone/iPad (called Word Lens, from Quest Visual, bought by Google Translate in 2014, see video below).

And of course we have translators for written words – you type in what you want to say, and the software gives you a reasonable (meh) translation. Try translating a phrase in and out of a language several times and see what you end up with – it’s like a multicultural game of telephone.

The latest amazements are the vocal translators, but only for languages we have programmed in. Skype Translator was introduced in late 2014. You speak in Spanish or English while having a video chat; on the other end, it comes out in English or Spanish. Why those two? Because that’s the only pairing they offer as of now. How? It's based on speech recognition software. It also gives you a written transcript of the conversation, so you can post all the hilarious errors on Twitter (as people do with autocorrect).

It’s in the vocal translation arena that the Star Trek UT excelled. It was so good that the TV series just accepted that the translator was there, never broke down, and let us hear everything in English. They didn’t even bother making the aliens’ lips (if they had them) move out of sync with the English translation!


The Rosetta Stone was discovered in 1799 by one
of Napoleon’s soldiers. It was a decree from 196
BCE on behalf of King Ptolemy V. The top is the
decree in ancient Egyptian hieroglyphs, the middle
is the same decree in Demotic script, and the
bottom is the decree in ancient Greek. Having the
same text in three languages allowed us to decipher
hieroglyphics for the first time.
Most importantly, the Star Trek UT had one feature that none of ours currently do. It could decipher and translate languages that had never been encountered before – like rap.

In principle, the Federation members would have their new alien acquaintances talk into the translator for a while. The device, using deciphering algorithms and the linguacode matrix (invented by an Enterprise linguist), would learn the language and then translate it. This seems hinky to me.

Every time a new word was encountered, it would seem to me that the translator would have to either wait till it heard it enough times to decipher its meaning or extrapolate its meaning from context. Neither of these things could occur in real time. It seems to me that the “talk into it” phase would be very long.

Basically, the hardware of a translator is easy; it’s the software that we have to work on. A 2012 paper presented to the Association for Computational Linguistics (yep, just call ‘em the UT geeks) used statistical models to try to train language programs better.

Up to this point, vocabulary has been the choke point in trying to speed up deciphering and translation. By using the statistical commonalities of all languages (if they can be found and relied upon), the need for so much vocabulary would be eased.

Any of these real-life software algorithms (or the fictional linguacode matrix) will be based on ideas presented in the 1950s by the American linguist, philosopher, and political activist Noam Chomsky, among others.


Noam Chomsky was born in 1928, and he hasn’t been quiet since. He isn’t boisterous by any means, but he has an opinion he’s willing to debate you on for just about everything. Linguistics is his game, but woe to the person who believes he only knows the structure of language – many a debating opponent has been skewered by his blunt, ungilded prose and speech.
Chomsky put forth the hypothesis that all languages have universal similarities. He claims the existence of a biologic faculty, in all organisms of high brain function, for innate language production and use; basically, he’s saying that language is genetic. With this approach, it should be possible to write software that could break any language into these shared patterns and then decipher it.

Ostensibly, the more languages that were encountered, the better the UT would work. On the other hand, maybe there’s not a biologic universality to language, but word order is mimicked across all languages – how we build a language is universal.

Either one of these scenarios would make it easier for a computer program to take a completely unknown language and put it through algorithms that might discern order and then meaning.

But a recent study is inconsistent with these ideas. According to a 2011 paper in Nature, word order is based more on historical context within a language family than on some universal constant or similarity. They found that many different sentence-part combinations, like verb-object (or object-verb) or preposition-noun (or the reverse), are influenced by other structure pairs within the sentence.

One word preceding the other in some languages caused a reversal in other pairs, while the reverse might be true in other language families.  The way that sentence structure via word ordering evolved does not follow an inevitable course – languages aren’t that predictable. Bad news for computer-based word order help.


In 2600 BCE the Indus valley civilization had a population of over 5 million. Cities have been excavated, and impressive art has been found. The tile above shows a rhino, polka-dotted at that, apparently with polish on its toenails. The symbols above it may be a written language. It’s a big deal which language it might be related to, since Pakistan and India are still fighting over this region.
It’s a roller coaster ride trying to figure out if computer power is going to solve our UT problems. We were at a low point with the paper above, but in 2009 we got some speed over a hill. In the 2009 paper, a computer algorithm that calculates conditional entropy was used to investigate a 5,000-year-old dead language.

The Indus civilization was the largest and most advanced group in the 3000 BCE world. Located in the border region of today’s India and Pakistan, they may have had a written language – we can’t tell. They had pictograph carvings, but what the carvings mean is up in the air. There is no Rosetta Stone like we found for ancient Egyptian, and no one speaks or reads the Indus language now.

Conditional entropy is used to calculate the randomness in a sequence of, well, anything. Here they wanted to see if there was structure in the markings and drawings. The results suggested that the sequences were most like those in natural languages.
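
Conditional entropy asks: given the current symbol, how predictable is the next one? Here's a minimal bigram sketch of the idea (the actual study used more careful estimation on the real Indus corpus):

```python
import math
from collections import Counter

def conditional_entropy(sequence):
    """H(next symbol | current symbol) in bits, estimated from bigram counts.
    Low values mean strong sequential structure; a random shuffle of the same
    symbols scores near log2(alphabet size)."""
    pairs = list(zip(sequence, sequence[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(sequence[:-1])
    total = len(pairs)
    h = 0.0
    for (a, b), count in pair_counts.items():
        p_pair = count / total             # P(current, next)
        p_cond = count / first_counts[a]   # P(next | current)
        h -= p_pair * math.log2(p_cond)
    return h

print(conditional_entropy("the cat sat on the mat the cat ran"))  # ordinary text
print(conditional_entropy("tttttt aaaaaa cccccc hhhhhh eeeeee"))  # blocky, near zero
```

Natural languages land in a middle band: neither rigidly repetitive nor random, and that's the band the Indus sequences fell into.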

But, just to prove it’s never that simple, linguist Richard Sproat (who works for Google now) has contended that the symbols are non-linguistic. In 2014, he did his own larger analysis with several different kinds of non-linguistic symbols and argued that the Indus pictographs fall into the non-linguistic category.

He rightly points out that computational analyses have a downfall in that biases could enter based on what type of text is selected and what that text depicts. I don’t think someone could pick up English if all they had to study were shopping lists.

But in other old languages, more progress has been made. One paper used a computer program to decipher and translate the ancient language of Ugaritic in just a few hours. They made several assumptions, the biggest one being that it belonged to a known language family (Hebrew’s, in this case). This may not be possible when dealing for the first time with some new alien language.


Picard and Captain Dathon of the Tamarians had to come to some meeting of the minds in order to survive the beast on El-Adrel IV. Dathon spoke only in metaphor, a fact that Picard is slow to pick up on. Me – I just wonder how Dathon didn’t drown when it rained. By the way, you can get a T-shirt with just about any of Dathon’s sayings. Image credit: It's All About: Star Trek.
They also assumed that the word order and alphabet usage frequencies would be very similar between the lost language and Hebrew. They then played these assumptions off one another until they came upon a translation. Ugaritic had been deciphered by brute human force a while back, but it took many people many years to do it. This is how we know that the computer algorithm got it right – it just took 1/1000 of the time.
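
As a toy illustration of that frequency assumption, you can pair the symbols of an unknown script with those of a related known script purely by frequency rank. The published model does something far richer, iterating alphabet, word, and cognate alignments together; the glyphs below are hypothetical stand-ins:

```python
from collections import Counter

def frequency_rank_pairing(unknown_text, known_text):
    """Pair symbols of an undeciphered script with symbols of a related known
    script by frequency rank -- a crude first guess that real decipherment
    models then refine iteratively."""
    unknown_ranked = [s for s, _ in Counter(unknown_text).most_common()]
    known_ranked = [s for s, _ in Counter(known_text).most_common()]
    return dict(zip(unknown_ranked, known_ranked))

# Hypothetical example: '#', '%', '@' stand in for unknown glyphs.
print(frequency_rank_pairing("##%#@%##@", "aabacbaab"))
# {'#': 'a', '%': 'b', '@': 'c'}
```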

But, even if we find universalities in language, the computer won’t be enough. An example comes from Star Trek itself, in an episode of ST:TNG called Darmok. The universal translator told Picard exactly what the aliens were saying, but it didn’t make any sense.

Their language was based on their folklore and history. All their phrases were metaphors of events in their past. So unless the UT knew this species’ particular history, it could only translate the words not the meaning. Language is more than words in an order; language is the collective mind of a group connecting them to each other and to their world.

Next week, deflector shields.




Contributed by Mark E. Lasbury, MS, MSEd, PhD






Sproat, R. (2014). A statistical comparison of written language and nonlinguistic symbol systems Language, 90 (2), 457-481 DOI: 10.1353/lan.2014.0031

Dunn, M., Greenhill, S., Levinson, S., & Gray, R. (2011). Evolved structure of language shows lineage-specific trends in word-order universals Nature, 473 (7345), 79-82 DOI: 10.1038/nature09923

Rao, R., Yadav, N., Vahia, M., Joglekar, H., Adhikari, R., & Mahadevan, I. (2009). Entropic Evidence for Linguistic Structure in the Indus Script Science, 324 (5931), 1165-1165 DOI: 10.1126/science.1170391

Snyder, B., Barzilay, R., & Knight, K. (2010). A Statistical Model for Lost Language Decipherment. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL 2010


Tuesday, March 17, 2015

I See, Said The Blind Man


There was a Star Trek fan named George La Forge. A muscular dystrophy patient, George became friends with Gene Roddenberry at Trek conventions. Years later, when proposing a new Star Trek series, Roddenberry named a character in George’s memory (he had died in 1975) - Geordi La Forge.



The irony is so thick – LeVar Burton said that wearing the VISOR that gave his character sight took away 90% of his vision. He tripped over everything on the set for the first couple of years. In later movies of the franchise, he used ocular implants so that he didn’t have to wear the visor, just contacts.
The character played by LeVar Burton was articulate, funny, intelligent, and a more than capable engineer. But mostly what people remember is that he was the blind man who could see better than sighted men. No, he didn’t necessarily have insights into their souls; he could literally see things they couldn’t. The mechanism of his sight, the VISOR (Visual Instrument and Sensory Organ Replacement), consisted of, what else, a visor.

It fit over his eyes and picked up electromagnetic radiation. The signals were processed and transduced (converted) to electricity. These were sent along wires that went into his temples (where the bolts on the side of the visor were) and attached to his optic nerves. All of this to relay electric impulses based on what EM waves the visor detected.

On each optic nerve was an implant that transduced the electrical signals so that they stimulated the individual neurons in the optic nerve responsible for assembling the picture. Think of it as a TV screen in your mind. There are certain neurons responsible for every pixel. Stimulate the right ones and you can build a picture. It isn’t anywhere near that simple but we don’t have time for a discussion of hypercolumns and visual processing.

Geordi’s visor went nature one better. It expanded his visual range beyond that of visible light. Humans detect only about 1% of the EM spectrum. Visible light is just about in the middle of the spectrum, but below it is infrared (heat) and microwaves, and above it are ultraviolet, X-rays, and gamma rays.


Goldfish sense infrared as noise in rods and cones, not with photoreceptors specific for those wavelengths – or with a pit, like snakes do. This allows them to hunt in murky waters. Only juveniles see UV, and this is with specialized cone photoreceptors just for UV wavelengths. Why? I don’t know; UV is higher energy and damages the cones, so they are gone by adulthood.
Butterflies can see in the ultraviolet range; it shows them patterns on flowers. They use these patterns more often than colors or shapes to find nectar. Snakes can sense infrared waves, although they do it with their pit organs not their eyes. On the other hand, piranha and goldfish do sense infrared with their eyes – heck, goldfish are the only animals that can see both infrared AND ultraviolet light!

So the question is – how close are we to producing a Geordi La Forge visor? Closer than you think – close enough so you can buy one now.

Believe it or not, research into artificial sight started way back in 1792. Alessandro Volta (does the name sound like a word you’ve heard?) connected two copper wires to a bimetallic pile he had constructed. By the way, bimetallic piles are basically batteries – he invented the battery! He connected one wire to the corner of a person’s eye, and the other one he touched to the roof of their mouth.

The person saw blobs of light, even in a darkened room; electricity controlled what the eye would “see.” This is exactly the technology we’re using to help the visually impaired regain their sight; we just use slightly more refined systems to stimulate the optic nerve via neural prostheses.

The Argus II (Second Sight, Sylmar, CA) is on the market now in the U.S. and Western Europe. This is a system that uses a camera mounted on a pair of glasses. The recorded images are processed and converted to electrical impulses in a handheld unit, and these signals are broadcast to a radio receiver implanted behind the ear or under the eye. This then relays the signals by micro-wire to an implant in the patient’s retina.

Even if some of a person's retinal photoreceptor cells don’t work anymore, as in age-related macular degeneration or retinitis pigmentosa (two of the most common causes of progressive blindness in the U.S.), the retinal ganglion cells that take the receptor information and transmit it to the visual cortex are still intact.


Visual prosthetic implants can be epi- or subretinal, clamped on the optic nerve, fed into the lateral geniculate nucleus, or attached directly to the visual cortex of the brain. The cortex offers the easiest surgical target and the maximal amplification of the signal.
This is why a retinal implant can work in some blind people. The Argus II has sixty electrodes attached to different retinal ganglion cells with microwires. The small electrical impulse sent from the camera fires the different electrodes in a pattern and this triggers the nerve impulse in the proper retinal ganglion cells.
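
To get a feel for what sixty electrodes can carry, here's a minimal sketch that averages a grayscale camera frame down to a hypothetical 6 x 10 electrode grid and quantizes each cell into a few stimulation levels. The real Argus II processing chain is proprietary, so this is just the flavor of the mapping, not the device's algorithm:

```python
import numpy as np

def frame_to_electrodes(frame, grid_rows=6, grid_cols=10, levels=4):
    """Downsample a grayscale frame (2-D array, values 0-255) to a small
    electrode grid and quantize each cell into a few stimulation levels.
    Each cell stands in for the pulse strength sent to one electrode."""
    h, w = frame.shape
    rows = np.array_split(np.arange(h), grid_rows)
    cols = np.array_split(np.arange(w), grid_cols)
    grid = np.empty((grid_rows, grid_cols))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            grid[i, j] = frame[np.ix_(r, c)].mean()  # average brightness per cell
    return (grid / 256 * levels).astype(int)         # quantize to 0..levels-1

# A fake 60 x 100 frame with a bright vertical bar, standing in for a camera image.
frame = np.zeros((60, 100))
frame[:, 40:60] = 255
print(frame_to_electrodes(frame))
```

Sixty coarse brightness values is all the bandwidth the patient gets, which goes a long way toward explaining why so much training is needed to interpret the result.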

About 100 people have been fitted with the system, at about $145,000 each, but that doesn't mean that 100 people who couldn't see much now see perfectly. Extensive training is needed to help the patients interpret what they can now “see.” This is because, unfortunately, current systems just make some cells fire; they don't perfectly match true sight – vision is too complicated for that.

Yes, I said current systems – plural. There are more ways to do this. The Alpha IMS system (Germany) doesn’t use glasses and a camera. It has a subretinal neural prosthesis that can both detect light (via photodiodes) and directly pass those signals to 1,500 electrodes attached to the retinal ganglion cells. It’s all self-contained and powered by a wireless coil implanted below the skin of the ear.

The testing on the Alpha IMS was reported in 2013. Over a nine-month period, test subjects had a significant improvement in object detection and field of vision. Almost half could spontaneously, or with training, read letters. The safety study was concluded in 2014, and the device is now going on the market in Europe.

Importantly, this R&D group showed that it matters where the implant electrodes are placed. If placed over the fovea (the area of most acute detection on the retina), patients do much better. Even a 15˚ movement eccentric to the fovea severely degrades performance.

Finally, Bionic Vision (Australia) has tested a 24-electrode subretinal implant, but they are planning to put together a fully functional artificial eye by 2020. This bionic eye will contain the diodes, the electrodes, and the power source to replace the eye completely, perhaps even with muscular attachments for movement.


The Monash Vision Group (Australia) is developing a visual cortex implant prosthesis. The camera (a) sends images that are processed in the pocket-held device (b). This sends the impulses via an antenna (d) wirelessly to the implant in the visual cortex (e). You could also send them to a computer, so you see what they see.
So far, we’ve talked only about the retinal implants, but there are other ways to stimulate the visual cortex. You can clamp the implant on the optic nerve – just like what was supposedly done with Geordi. You can also stimulate the lateral geniculate nucleus (further back on the way to the visual cortex) or the visual cortex itself. There are good and bad points to each of these.

On the up side, the further back you do the stimulation, the more types of blindness you could help. If you stimulate the retina – you have to have some working retinal cells. Many blind people have no working retinal cells, so these types of implants won’t work for them. But if you stimulate the visual cortex directly, it wouldn’t matter if the patient had defects in their eyes, optic nerves, or lateral geniculate nuclei. This gives us a clue as to the type of blindness Geordi had – his implants were on his optic nerves, so they and his lateral geniculates must have been functional.

On the down side, there is visual processing that occurs all along the path from the eye to the visual cortex. The current neural prostheses don’t restore perfect, or even good, vision. The retina does some image processing, and so does the lateral geniculate, so when the impulses get to the visual cortex, they have been partially ordered and translated. A visual cortex implant loses all this processing; even a retinal implant misses some of it.


For implants that use image inputs outside the eye, it would be like never blinking or closing your eyes. As long as the camera was working, you would be seeing things, even in your sleep. You would be tough to sneak up on. Even though Raiders of the Lost Ark was a great film, this never happened in any of the classes I taught.
A 2012 study demonstrated this. There are about 20 different types of cells in the retina: some for color, some for movement, some for speed, some for acuity. It takes coordinated firing of all of them to give a clear picture. Using optogenetics (controlling the firing of cells with light), this group is starting to figure out the retinal code. Solving this will make implants much better at showing life-like pictures – like upgrading to HD from a black-and-white Philco (images with the Argus II and Alpha IMS only show dark and light).

A different group, in a 2014 paper, is studying this problem as well. They recorded the patterns of firing for different retinal cell types when a moving object was detected, then reproduced those patterns electrically; the result was more life-like vision. They’ve only done this so far in isolated retinas, not in humans. The point is, lots of work needs to be done to improve the vision that patients are being given; scientists lack the specificity and precision to mimic natural vision as of now.


The Final Cut was a 2004 sci-fi film in which Robin Williams was an editor. He took all the visual memories recorded by an implant during your many or few years and edited the images into a shorthand story of your life. It bombed, even with Mira Sorvino and Jim Caviezel. I’m guessing Williams wished everyone could unsee it.
Ways to improve visual neural prostheses may include more electrodes for greater acuity or field of vision, better training for patients to interpret what they sense, and better electrode placement and attachment. But they may also include more advanced algorithms. Groups are working on incorporating facial recognition software so that more can be recognized with fewer data points. There are also algorithms to detect and highlight the most important objects in a field, or the distances to those objects. Some groups are using images at multiple depths of focus (confocal imaging) to identify the most pertinent objects and suppress the rest of the signal.

Or, we may go completely Geordi-like. The VP of Second Sight says there is no reason that visual prostheses couldn’t also process inputs in the infrared, ultraviolet or even X-ray ranges. It would just be a matter of including those sensors in the input apparatus. Even more bizarre, we could hook the entire system up to WiFi and broadcast/record what the person “sees.”

Robin Williams had a movie where his job was to edit visual memory recordings from deceased people and play them back as a self-eulogy (The Final Cut). Once again, life imitates art, which imitates life, and on and on.

Next week – we’re trying to produce a universal translator, but we don’t know every language in the universe. I see a fundamental flaw here.



Contributed by Mark E. Lasbury, MS, MSEd, PhD



Wang, J., Wu, X., Lu, Y., Wu, H., Kan, H., & Chai, X. (2014). Face recognition in simulated prosthetic vision: face detection-based image processing strategies Journal of Neural Engineering, 11 (4) DOI: 10.1088/1741-2560/11/4/046009

Jung, J., Aloni, D., Yitzhaky, Y., & Peli, E. (2014). Active confocal imaging for visual prostheses Vision Research DOI: 10.1016/j.visres.2014.10.023

Nirenberg, S., & Pandarinath, C. (2012). Retinal prosthetic strategy with the capacity to restore normal vision Proceedings of the National Academy of Sciences, 109 (37), 15012-15017 DOI: 10.1073/pnas.1207035109

Stingl, K., Bartz-Schmidt, K., Gekeler, F., Kusnyerik, A., Sachs, H., & Zrenner, E. (2013). Functional Outcome in Subretinal Electronic Implants Depends on Foveal Eccentricity Investigative Ophthalmology & Visual Science, 54 (12), 7658-7665 DOI: 10.1167/iovs.13-12835