There was a Star Trek fan named George La Forge. A muscular
dystrophy patient, George became friends with Gene Roddenberry at Trek
conventions. Years later, when proposing a new Star Trek series, Roddenberry
named a character in George’s memory (he had died in 1975): Geordi La Forge.
Geordi was blind, but he could “see” thanks to his VISOR, a device that fit over his eyes and picked up electromagnetic radiation. The signals were processed and transduced (converted) to
electricity. These were sent along wires that went into his temples
(where the bolts on the side of the visor were) and attached to his optic
nerves. All of this to relay electric impulses based on what EM waves the visor detected.
On each optic nerve was an implant that transduced the
electrical signals so that they stimulated the individual neurons in the optic
nerve responsible for assembling the picture. Think of it as a TV screen in
your mind. There are certain neurons responsible for every pixel. Stimulate the
right ones and you can build a picture. It isn’t anywhere near that simple, but we don’t have time here for a discussion of hypercolumns and visual processing.
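If you like, here is the pixels-to-neurons idea as a toy Python sketch. The 10x10 grid, the current ceiling, and the linear brightness-to-current rule are all invented for illustration; no real device (or visor) is anywhere near this simple.

```python
import numpy as np

GRID = 10                      # hypothetical 10x10 electrode array
MAX_CURRENT_UA = 100.0         # hypothetical stimulation ceiling (microamps)

def image_to_stimulation(image: np.ndarray) -> np.ndarray:
    """Map a grayscale image (values 0-255) to per-electrode currents."""
    h, w = image.shape
    bh, bw = h // GRID, w // GRID
    # Average each block of pixels down to one "electrode pixel".
    blocks = image[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    brightness = blocks.mean(axis=(1, 3)) / 255.0
    # Brighter region -> stronger stimulation (a deliberately crude rule).
    return brightness * MAX_CURRENT_UA

frame = np.random.randint(0, 256, (120, 120))   # stand-in camera frame
print(image_to_stimulation(frame).shape)        # (10, 10) current map
```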
Geordi’s visor went nature one better. It expanded his
visual range beyond that of visible light. Humans detect only a tiny sliver of the EM
spectrum. Visible light sits roughly in the middle of the spectrum (on a logarithmic frequency scale); below
it are infrared (heat) and microwaves, and above it are ultraviolet, X-rays, and
gamma rays.
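For the curious, a few lines of Python can capture that ordering of the bands. The band edges below are rounded, approximate numbers, not standards-body definitions.

```python
def em_band(wavelength_nm: float) -> str:
    """Classify a wavelength (in nanometres) into a rough EM band."""
    if wavelength_nm > 1e6:     # longer than 1 mm
        return "microwave/radio"
    if wavelength_nm > 700:
        return "infrared"
    if wavelength_nm >= 400:
        return "visible light"  # the sliver humans actually see
    if wavelength_nm >= 10:
        return "ultraviolet"
    if wavelength_nm >= 0.01:
        return "X-ray"
    return "gamma ray"

print(em_band(550))   # visible light (green)
print(em_band(1e5))   # infrared
```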
So the question is – how close are we to producing a Geordi
La Forge visor? Closer than you think – close enough so you can buy one now.
Believe it or not, research into artificial sight started
way back in 1792. Alessandro Volta (does the name sound like a word you’ve heard?)
connected two copper wires to a bimetallic pile he had constructed. By the way,
bimetallic piles are basically batteries – he invented the battery! He
connected one wire to the corner of a person’s eye, and the other one he
touched to the roof of their mouth.
The person saw blobs of light, even in a darkened room; electricity controlled what the eye would “see.” This is exactly the technology
we’re using to help the visually impaired regain their sight; we just use slightly
more refined systems, neural prostheses, to stimulate the visual pathway.
The Argus II
(Second Sight, Sylmar, CA) is on the market now in the U.S. and Western Europe.
This is a system that uses a camera mounted on a pair of glasses. The recorded
images are processed and converted to electrical impulses in a handheld unit,
and these signals are broadcast to a radio receiver implanted behind the ear or
under the eye. The receiver then relays the signals by micro-wire to an implant on the
patient’s retina.
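As a rough sketch, the pipeline that paragraph describes could be outlined like this in Python. The 60-electrode count matches published descriptions of the Argus II array, but every function body here is a placeholder of my own; the real signal processing is proprietary and far more involved.

```python
import numpy as np

def capture_frame() -> np.ndarray:
    """Glasses-mounted camera: return a stand-in grayscale frame."""
    return np.random.randint(0, 256, (64, 64)).astype(float)

def process(frame: np.ndarray) -> np.ndarray:
    """Handheld unit: reduce the image to one level per electrode.
    The 6x10 block-averaging is an illustrative assumption."""
    rows, cols = 6, 10
    bh, bw = frame.shape[0] // rows, frame.shape[1] // cols
    blocks = frame[:bh * rows, :bw * cols].reshape(rows, bh, cols, bw)
    return blocks.mean(axis=(1, 3)).ravel() / 255.0   # 60 values in [0, 1]

def transmit(levels: np.ndarray) -> np.ndarray:
    """Wireless hop to the implanted receiver; here just a pass-through."""
    return levels

def stimulate(levels: np.ndarray) -> None:
    """Implant: drive each electrode in proportion to its level.
    Real hardware would emit charge-balanced current pulses."""
    print(f"driving {levels.size} electrodes, max level {levels.max():.2f}")

stimulate(transmit(process(capture_frame())))
```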
Even if a person’s retinal photoreceptor cells no longer work,
as in age-related macular degeneration or retinitis pigmentosa (two
of the most common causes of progressive blindness in the U.S.), the retinal
ganglion cells that take the receptor information and transmit it to the visual
cortex are often still intact.
About 100 people have been fitted with the system, at
about $145,000 each, but that doesn’t mean that 100 people who couldn’t see much now see perfectly. Extensive training is needed to help patients interpret what
they can now “see.” This is because, unfortunately, current systems just make some cells fire; they don’t recreate true sight. Vision is too complicated for that.
Yes, I said current systems - plural. There are more ways to do
this. The Alpha IMS system (Germany)
doesn’t use glasses and a camera. It is a subretinal neural prosthesis that
both detects light (via photodiodes) and directly passes
those signals to 1,500 electrodes that stimulate the surviving retinal neurons. It’s
all self-contained and powered by a wireless coil implanted under the skin behind
the ear.
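Here is the self-contained idea in miniature, as a hedged Python sketch: each photodiode drives the electrode directly beneath it, with no external camera. The 38x40 grid roughly matches the ~1,500 pixels mentioned above; the logarithmic light-to-current rule is an assumption for illustration only.

```python
import numpy as np

ROWS, COLS = 38, 40            # ~1,500 photodiode/electrode pairs

def chip_response(light: np.ndarray) -> np.ndarray:
    """Convert incident light (arbitrary units, >= 0) on each photodiode
    into a normalized stimulation level for the electrode beneath it."""
    assert light.shape == (ROWS, COLS)
    # Photodiodes respond roughly logarithmically over a wide range.
    return np.log1p(light) / np.log1p(light.max() + 1e-9)

scene = np.random.rand(ROWS, COLS) * 1000.0    # stand-in light levels
currents = chip_response(scene)                # one value per electrode
print(currents.shape, float(currents.max()))   # (38, 40) 1.0
```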
The testing of the Alpha IMS was reported in 2013. Over a
nine-month period, test subjects showed significant improvement in object
detection and field of vision. Almost half could read letters, either spontaneously or with
training. The safety study concluded in 2014, and the device
is now going on the market in Europe.
Importantly, this R&D group showed
that it matters where the implant electrodes are placed. If placed over the
fovea (the area of most acute detection
on the retina), patients do much better. Shifting the array even 15° away from
the fovea severely degrades performance.
Finally, Bionic
Vision (Australia) has tested a 24-electrode subretinal implant, but they
are planning to put together a fully functional artificial eye by 2020.
This bionic eye would contain the diodes, the electrodes, and the power source, replacing
the eye completely, perhaps even with muscular attachments for movement.
On the up side, the farther back along the visual pathway you stimulate,
the more types of blindness you can help. If you stimulate the retina, you
have to have some working retinal cells, and many blind people have none,
so retinal implants won’t work for them. But if you
stimulate the visual cortex directly, it wouldn’t matter whether the patient had
defects in their eyes, optic nerves, or lateral geniculate nuclei. This gives
us a clue as to the type of blindness Geordi had: his implants were on his
optic nerves, so the nerves and his lateral geniculates must have been functional.
On the down side, visual processing occurs all
along the path from the eye to the visual cortex. The retina does some
image processing, and so does the lateral geniculate, so by the time the impulses
reach the visual cortex they have already been partially ordered and translated.
A visual cortex implant loses all of this processing, and even a retinal implant
misses some of it. This is part of why current neural prostheses don’t restore
perfect, or even good, vision.
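To get a feel for the kind of processing a cortical implant would skip, here is a toy difference-of-Gaussians filter, a standard textbook stand-in for the retina’s center-surround processing (it needs NumPy and SciPy; the kernel widths are arbitrary demo values).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_like(image: np.ndarray) -> np.ndarray:
    """Difference-of-Gaussians: emphasizes edges and discards flat regions,
    one piece of the 'partial ordering' the retina performs."""
    center = gaussian_filter(image, sigma=1.0)     # small receptive center
    surround = gaussian_filter(image, sigma=3.0)   # larger inhibitory surround
    return center - surround

img = np.zeros((32, 32)); img[:, 16:] = 1.0    # a simple light/dark edge
out = retina_like(img)
print(out[16, 14:18].round(3))                 # response peaks at the edge
```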
A 2012 study
demonstrated this. There are about 20 different types of cells in the retina:
some for color, some for movement, some for speed, some for acuity. It takes
coordinated firing of all of them to
give a clear picture. Using optogenetics (controlling genetically
modified neurons by switching light on and off), this group is starting to figure
out the retinal code. Solving it will make implants much better at
showing life-like pictures, like upgrading to HD from a black-and-white Philco
(the Argus II and Alpha IMS produce only light-and-dark images).
A different group, in a 2014 paper, is studying
this problem as well. They recorded the firing patterns of different retinal
cell types as a moving object was detected, then reproduced those patterns electrically;
the result was more life-like vision. So far they’ve only done this in isolated
retinas, not in humans. The point is, lots of work remains to improve
the vision patients are being given; for now, scientists lack the specificity and precision to
mimic natural vision.
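Schematically, the replay idea looks something like the sketch below. The cell-type names, firing rates, and Poisson spiking are stand-ins I’ve invented; in the actual work, recorded patterns would supply the real data.

```python
import numpy as np

rng = np.random.default_rng(0)
DT, DURATION = 0.001, 0.5            # 1 ms steps, 0.5 s of "stimulus"

# Hypothetical mean firing rates (Hz) while a moving bar crosses the
# receptive field; real recordings would replace this table.
recorded_rates = {"ON-transient": 80.0, "OFF-sustained": 30.0,
                  "direction-selective": 120.0}

def spike_train(rate_hz: float) -> np.ndarray:
    """Poisson spikes: True wherever a stimulation pulse should be delivered."""
    return rng.random(int(DURATION / DT)) < rate_hz * DT

replay = {cell: spike_train(rate) for cell, rate in recorded_rates.items()}
for cell, spikes in replay.items():
    print(cell, int(spikes.sum()), "pulses in 0.5 s")
```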
Ways to improve visual neural
prostheses may include more electrodes for greater acuity or field of vision,
better training for patients to interpret what they sense, and better
electrode placement and attachment. But they may also include more advanced algorithms.
Groups are working on incorporating facial recognition software so that more
can be recognized from fewer data points. There are also algorithms to detect and highlight
the most important objects in a field, or the distance to those objects. Some groups are
using images at multiple depths of focus (confocal imaging) to identify the most pertinent objects and suppress the rest of the signal.
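As a sketch of that last idea, suppressing everything but the nearest object given a per-pixel depth estimate might look like this; the depth map and the half-metre margin are invented for the demo.

```python
import numpy as np

def keep_nearest(image: np.ndarray, depth_m: np.ndarray,
                 margin_m: float = 0.5) -> np.ndarray:
    """Zero out everything more than `margin_m` behind the closest point,
    so only the nearest object survives to drive the electrodes."""
    cutoff = depth_m.min() + margin_m
    return np.where(depth_m <= cutoff, image, 0.0)

img = np.random.rand(48, 48)                               # stand-in frame
depth = np.full((48, 48), 3.0); depth[10:20, 10:20] = 1.0  # one near object
print(int((keep_nearest(img, depth) > 0).sum()))           # ~100 pixels kept
```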
Or, we may go completely Geordi-like. The VP of Second Sight says there is no reason that visual prostheses couldn’t also process inputs in the infrared, ultraviolet or even X-ray ranges. It would just be a matter of including those sensors in the input apparatus. Even more bizarre, we could hook the entire system up to WiFi and broadcast/record what the person “sees.”
Robin Williams made a movie in which his
job was to edit visual memory recordings from deceased people and play them
back as a self-eulogy (The Final Cut).
Once again, life imitates art, which imitates life, and on and on.
Next week – we’re trying to produce a universal translator,
but we don’t know every language in the universe. I see a fundamental flaw
here.
Contributed by Mark E. Lasbury, MS, MSEd, PhD