Friday, January 30, 2015

The Friday Five

Highlighting some of the coolest science news we’ve seen lately.

1. Which will be the best Super Bowl commercial? Science may have the answer!

2. Well, excuse me! It happens to all of us. The science behind brain farts.

3. Twitches, hiccups, yawns, oh my!  The science behind different involuntary behaviors.

Science quote of the week:

“Two things are infinite:  the universe and human stupidity; and I'm not sure about the universe.” –Albert Einstein

Contributed by:  Bill Sullivan
Follow Bill on Twitter: @wjsullivan

Thursday, January 29, 2015

Heaven or Hallucination?

Benjamin Franklin once proclaimed, "In this world nothing can be said to be certain, except death and taxes." While most of us can begrudgingly deal with the taxman, we have a much harder time facing the Grim Reaper. It is this fear of the finite that has put the notion of an afterlife at the center of many world religions. Like a good book, we simply don’t want our life's story to end, so most people believe that there must be a sequel.

Long ago, people used to think that Heaven was up in the sky. Led Zeppelin even implied that Heaven was accessible via a stairway available for purchase. A more modern idea is that Heaven is transcendental, perhaps in another dimension that is inaccessible to scientific instruments.
What does science have to say on the subject of Heaven and the afterlife? Ancient notions that Heaven resides on mountaintops or in the clouds have been dispelled, and our exploration of the universe so far has not uncovered any evidence of a physical Heaven. The failure to find evidence does not necessarily negate the possibility, but our knowledge about the universe has prompted a change in how most people conceptualize Heaven. Since Heaven is now considered by most to be an ethereal realm unreachable to the living, scientific analyses do not apply and the afterlife must remain a matter of faith.
However, some argue that there is tangible evidence of Heaven based on eyewitness accounts of people who've been there during a “near death experience”, or NDE. When evidence is put forth, science is obligated to scrutinize the claim. People surviving an NDE awaken with an unshakable feeling that they’ve traveled beyond the confines of their body. You may have heard about the recent case of Alex Malarkey, a young boy who was in an automobile accident in 2004 that left him paralyzed. With the help of his father, he penned a bestselling book in 2010 called “The Boy Who Came Back From Heaven”. But a couple of weeks ago, Alex (now 16 years old) admitted that his story was, um, malarkey. Alex now claims that he made up the whole thing as a child because he “thought it would get him attention”. Consequently, the book has been pulled, and the million or so people who purchased it are feeling as deflated as a New England Patriots football.

Rock singer Bryan Adams also once thought he'd died and gone to Heaven.
But turns out he was just love-struck.

We are so eager to feast on these personal accounts of an afterlife that a whole new genre of entertainment has been christened “Heavenly Tourism”. Heavenly Tourism is now a big business, even getting a major motion picture in 2014. Some cases appear to bring real credibility to the phenomenon, such as Eben Alexander, M.D., the neurosurgeon who wrote the bestseller “Proof Of Heaven” after his NDE. While science is not a system designed to test matters of faith, researchers can examine what is going on in the brain during NDEs.
“Flatliners” was a movie from 1990 about a group of medical students who tried to reproduce NDEs in the lab. The real miracle is that most of these actors were able to resuscitate their careers after this movie.
Dr. Steven Laureys heads the Coma Science Group in Belgium, which studies NDEs very seriously. His research is revealing that patients who have an NDE form memories during this period that are unusually vivid, feeling “even more real than real”. Dr. Laureys asserts that the lucid nature of these NDE memories fools many people, including Dr. Eben Alexander, into believing they were real events. But Dr. Laureys attributes these powerful experiences to a dysfunctional brain.

According to Dr. Laureys, there is no evidence that consciousness exists independent of brain activity. In other words, patients forming memories during a NDE were not dead, and the images they retain were the natural result of residual brain activity, which can persist for some time even after the heart stops beating. Further evidence that heavenly visions are not real is that they can be reproduced when certain parts of the brain are artificially stimulated. Oliver Sacks has also written extensively about how the stimulation of certain brain areas can produce an array of transcendental experiences that feel absolutely real. Psychedelic drugs can have a similar impact on the brain.

Supportive findings have emerged from studies that record brain activity in dying rats. In rats that would be considered “clinically dead” by human medical standards, researchers observed a surge in specific brain activities that are signatures of “hyper-consciousness”, the same type of phenomenon that Dr. Laureys observes in patients reporting a vivid NDE. 

Neuroscientist Andrew Newberg studies the effects that certain religious practices have on the brain, pioneering a new discipline he calls "neurotheology" that aims to identify the biological underpinnings of spirituality. His studies have revealed why NDEs often leave the impression that you traveled down a tunnel towards a bright light. According to Newberg, peripheral vision is lost during a NDE, producing the sensation that one is in a tunnel.

The more we study NDEs, the clearer it becomes that there is a neurochemical basis that explains the imagery and sensations. Collectively, these studies raise a red flag about the validity of Heavenly Tourism, so buyer beware. Those offering to be your tour guide may be teaching you more about neurology and psychology than about what may await us when the brain truly shuts down. Heaven is outside the realm of scientific examination, so the afterlife remains a matter of faith.

It has been posited, however, that our growing scientific knowledge gives less credence to the supernatural, making an afterlife seem highly improbable. Stephen Hawking has proclaimed, "I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark." John Lennon once asked us to imagine no Heaven. While the thought of a finite existence is unfathomable to many, the truth is that the only existence we can be certain of is the one we are living here and now. Embracing the possibility that life is a one-take movie can inspire us to do wondrous things with the time we have alive. Knowing that we will not be reunited with friends and family in the Great Beyond should prompt us to cultivate better relationships with them now. The logical course of action is to treat our life as a fragile and precious commodity, taking good care of the body and mind and enabling others to do the same, which interestingly agrees with the prime directive of most religions.

Contributed by:  Bill Sullivan
Follow Bill on Twitter.

For more information:
Thonnard, M., Charland-Verville, V., Brédart, S., Dehon, H., Ledoux, D., Laureys, S., & Vanhaudenhuyse, A. (2013). Characteristics of Near-Death Experiences Memories as Compared to Real and Imagined Events Memories. PLoS ONE, 8(3). DOI: 10.1371/journal.pone.0057620

Borjigin, J., Lee, U., Liu, T., Pal, D., Huff, S., Klarr, D., Sloboda, J., Hernandez, J., Wang, M., & Mashour, G. (2013). Surge of neurophysiological coherence and connectivity in the dying brain. Proceedings of the National Academy of Sciences, 110(35), 14432-14437. DOI: 10.1073/pnas.1308285110

Newberg, A.B. (2014). The neuroscientific study of spiritual practices. Frontiers in Psychology, 5. PMID: 24672504

Blanke, O. (2005). The Out-of-Body Experience: Disturbed Self-Processing at the Temporo-Parietal Junction. The Neuroscientist, 11(1), 16-24. DOI: 10.1177/1073858404270885

Physicist Sean Carroll recently gave a lecture that debunks the notion of an afterlife.

Tuesday, January 27, 2015

Star Date: Pretty Darn Soon

The 50th anniversary of Star Trek is a reason to celebrate.
I guess Kirk is too cool to dance and Spock thinks
dancing is illogical.
2016 will mark the 50th anniversary of the first season of the first series of Star Trek. In that first episode we meet James T. Kirk, Dr. McCoy, Spock, Uhura, and some guy in a red shirt who meets a horrible fate almost immediately.

In the fifty-one years since Gene Roddenberry pitched the series as, “Wagon Train in space meets Gulliver’s Travels,” many of its technological gadgets have come closer to being real. The original series was set in the 2260s, so we’re way ahead of schedule on producing workable versions of some of those props. For instance, the prop for the tricorder's sensor was a repurposed salt shaker.

I figure the only decent way to prepare for next year’s 365-day celebration is to describe where we stand in making all those toys a reality. The purpose of this Star Trek refresher is to rekindle, or just plain kindle, a fire in you to finish the research. That, and about three billion dollars of funding should do the trick.

Let’s start with the replicator. Introduced in the original series, the replicator started out as a way to make food and recycle just about anything. In later series, spare parts and just about everything else were made by replicator, including air. The only rules: no weapons and nothing living. Well… we may be able to go Star Trek one better.

The replicator produced the food and the dishware.
Then you could recycle the dirty dishes into your
next martini.
The theory behind the replicator was that it rearranged subatomic particles to produce atoms of different elements. Then these atoms were assembled into whatever material and form were requested. To recycle dirty dishes or that dead Romulan, the replicator would reduce the object to its subatomic particles. Your late night cheeseburger might have been part of an old sock just minutes before.

While we can’t yet manipulate subatomic particles, we have developed ways to make things on demand. It’s called additive manufacturing; you know it better as 3-D printing.

In basic terms, 3-D printing produces a solid object from liquid or solid material in a layer-by-layer buildup process, as opposed to cutting extraneous material away from a block. In more technical terms, there are several ways to do additive manufacturing.

Stereolithography is the oldest technique for 3-D printing.
Liquid build material is cured using a UV or laser light.
In stereolithography, a vat of liquid plastic is the build material. A thin layer is spread across the build tray and a laser is used to cure the precise areas that correspond to the first layer of the object. The tray is lowered and another thin layer is spread and cured. This is repeated until the object is completed. This is the oldest of the 3-D printing technologies, first described in 1986, and is still the fastest way to print an object.
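That layer-by-layer logic is easy to sketch in code. The toy Python function below (purely illustrative, not any real printer's slicer software) computes how many cure passes a part needs and where the cured surface sits after each pass:

```python
import math

def slice_heights(object_height_mm, layer_thickness_mm):
    """Z-position of the cured surface after each pass of the laser.

    The build tray drops one layer thickness between passes; the last
    layer is clipped so the part ends at exactly the requested height.
    """
    if layer_thickness_mm <= 0:
        raise ValueError("layer thickness must be positive")
    n_layers = math.ceil(object_height_mm / layer_thickness_mm)
    return [min((i + 1) * layer_thickness_mm, object_height_mm)
            for i in range(n_layers)]

# A 1 mm tall part cured in 0.1 mm layers takes 10 passes.
print(len(slice_heights(1.0, 0.1)))
```

A real slicer also computes the 2-D cross-section to cure at each height; the point here is just the repeat-until-done structure that all of these additive methods share.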

On the other hand, in inkjet-based printing, or powder bed printing, a movable head dispenses a bit of liquid binder onto a bed of powder build material. The binder glues the build powder at that point to the layer below it. The table is then lowered, a new layer of powder material is laid down, and the computer design guides the head to dispense binder at the correct points.

Inkjet 3-D printing is similar to stereolithography, but the
build material is not liquid and the binding comes from the
print head, not from a laser or UV light.
In fused deposition modeling, build material is liquefied in the print head, laid down in thin beads, and fuses to the layer below as it cools and hardens. What is interesting about this method (and some others) is that you can use several different materials (metal, plastic, different colors) in one build.

With fused deposition, you can easily include support material to build up columns for parts of the object that would otherwise be unsupported in the manufacturing process. Now the cool part – the build material can be metal or plastic or glass, while the support material can be something water soluble.

When your build is finished, you can throw it in some water and the supports will disappear, leaving only your desired product. In stereolithography, the support columns are made of the same material as the product, so they have to be cut away.

Fused deposition printing can use different materials for
supports and products. The material is liquefied in the
head before it is deposited.

Finally, there is selective laser sintering. This technique uses powdered metal or plastic. As in stereolithography, a thin layer is spread over the build surface and a laser is used. However, in this case the laser sinters the particles together, fusing them with heat into a solid without quite melting them.

NASA did its first additive manufacturing in space in November of 2014, after the International Space Station got its first 3-D printer. In a small bit of irony, the part they manufactured was a replacement part for the printer itself. The ISS has a fused deposition modeling printer, so our replicator in space may descend from this technology.

Also ironic, the first printed part couldn’t be separated from the build tray. The extruded plastic apparently bonds more strongly in microgravity, so it fused too well with the platform on which the part was built. There’s always a learning curve.

Sintering is just another way to bind the material
particles together.
The original replicator was for making food, and NASA is working on this as well. There are 3-D printers on the market today that will print food for you. NASA has funded a small business grant to look into the possibility of printing food for long space trips.

Printing food is in some ways very similar – chocolate bunnies or pasta shapes are easy, but it can get more elaborate. Natural Machines has a product called the Foodini that can print burgers, pizza, etc. The technology is similar to other printers, except that the temperatures and textures are different for each ingredient, and many foods have trouble holding a 3-D shape against gravity.

The food binder technology is a bit behind – it must be strong enough to hold a shape yet still edible, and it has to match the flavor, texture, and consistency one would expect from a certain food. We are actually doing better with medical uses than we are with food.

The software used to design printed objects can be fused with MRI, CT scan, or X-ray information to help design very accurate stents, casts, valves, and other plastic or biocompatible parts to be used in or on the human body. Printed heart valves are especially promising. A 2015 paper explains printing of metal/glass scaffolds to repair skull defects. Another use described in a 2015 study is on-demand printing of surgical gear needed in war zones.

One possible method to bioprint a vessel. Lay down cells specifically
within an agarose mold. Let them solidify for a time, then put
them in a bioreactor containing growth factors and mild
electrical stimulation so the muscle cells in the walls of
the vessel can mature.
Here's where we can go Star Trek one better: 3-D printers are also being used to print living tissues and, pretty soon, organs. 3-D bioprinting uses biochemicals and different cell types to build 3-D tissues of various types. A 2014 review explains in common terms the promise and problems of 3-D printing tissues and organs.

One of the problems that must be overcome before organ bioprinting can be realized is the vasculature. For a tissue or organ to survive, it must have a blood supply. This is harder to print because it means having a tubular structure within a solid organ. See the TED video below about printing kidneys.

A new study might have the answer. Using a two-step print process, the tubular structure is printed first, using endothelial and muscle cells in hydrogel tube supports, and then the tissue is printed around it. This must be accomplished before we can take the next step: in vivo bioprinting. In this technique, bioprinting will occur right in or on the human body. That smells a lot like the dermal regenerator in The Next Generation. Yes, NASA is funding studies to produce a “bioreplicator” as well.

Next week, let’s tackle a primarily medical device, the tricorder. Think hard about it this week; a workable version might be worth 10 million dollars to you.

Contributed by Mark E. Lasbury, MS, MSEd, PhD


Yu, A., & Khan, M. (2015). On-demand three-dimensional printing of surgical supplies in conflict zones. Journal of Trauma and Acute Care Surgery, 78(1), 201-203. DOI: 10.1097/TA.0000000000000481

Murphy, S., & Atala, A. (2014). 3D bioprinting of tissues and organs. Nature Biotechnology, 32(8), 773-785. DOI: 10.1038/nbt.2958

Kolesky, D., Truby, R., Gladman, A., Busbee, T., Homan, K., & Lewis, J. (2014). 3D Bioprinting of Vascularized, Heterogeneous Cell-Laden Tissue Constructs. Advanced Materials, 26(19). DOI: 10.1002/adma.201470124

Tuesday, January 20, 2015

The Electrical Grid Needs Fattening Up

Heimlich was the caterpillar from one of my kids’
favorite movies, A Bug’s Life. He ate and ate because
he needed to store energy. When the time came to
metamorphose into a butterfly, he would need to
convert that stored energy to forms his body could
use. We need the same thing for our national
energy grid.
When presented with food, you can convert it all to energy (stuff your face), or you can save some for later (leftovers). We can make use of food over time without losing any of its energy; most food won't spoil before tomorrow or the next day. Likewise, your body can store the energy it takes in, some as glycogen and some as fat. We can go back later to burn the fat when we need extra energy – although we usually don’t.

This isn’t the case with the national energy grid. Whatever energy is generated goes directly into the copper wires of the transmission network. This is fine most of the time, because you can run more or fewer generators, and they can be made to work at higher or lower output to meet immediate needs.

The real problem comes when we try to be green. Some fuel for generators can be used when needed, but other sources of energy have to be used when they are available. For example, it doesn’t matter whether you burn coal today or ten years from now; you still get the energy from it.

But think about wind power. If you don’t generate electricity from the wind as it blows, then you can’t go back later and use it – it’s gone with the wind. Same with solar energy: if you don’t harvest today’s sunshine, you can’t come back tomorrow and find it. Sure, you can use tomorrow’s sunshine, as long as it’s sunny – but not every day is sunny.

As more and more electricity is generated from green sources, we need to harvest as much of it as we can when we can. This means that we need to be able to store energy in some form. This large-scale energy storage is the focus of much current research and even more construction.

If we can’t store electricity as electricity, we have to convert (transduce) it to some form of potential energy. Research and engineering are showing that we can do this in several ways. Let’s look at a few:

Compressed air storage – One source of potential energy is air under pressure. Of course, it would have to be a whole bunch of air, like an abandoned mine volume of air - or one speech from a politician. Several of these large-scale energy storage mechanisms have been set up in Europe, using mines or caverns.

Here is a schematic for compressed air storage of
energy. The compressor motor uses energy from the
grid to pump air into a huge space – maybe somewhere
that has given up its natural gas. Then when it escapes,
it runs turbines that return electricity to the grid. I bet
fracking opponents might have a problem with this as
well – geologically speaking.
Some caverns are better than others. Salt caverns (McIntosh, Alabama) are good because the crystal is flexible without being porous, and no reaction occurs between the walls and the oxygen. When there's less demand on the grid, air compressors use the extra electricity to pump air into the sealed mine. Newer proposals seek to use pipelines to store the compressed air.

Under pressure, the air can remain a source of potential energy for an indefinite time. When the grid needs more electricity, the pressurized air is allowed to escape, passing across turbine blades and turning a generator. Basically, it’s electricity to wind to electricity.

Compressed air storage is about 45% efficient. If you use the heat created by compressing the air (pushing the molecules together creates friction and heat) to heat the air when it expands (usually it cools greatly, like when you spray off your computer keyboard with that can of air and the long red straw), you can increase the efficiency to about 70%. That ain’t bad.
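To put those percentages in concrete terms, here is the arithmetic as a tiny Python sketch. The 100 MWh figure is hypothetical; the efficiencies are the ones quoted above:

```python
def recovered_mwh(stored_mwh, round_trip_efficiency):
    """Energy you get back from a compressed air store, in MWh."""
    return stored_mwh * round_trip_efficiency

stored = 100.0  # MWh of surplus electricity pumped into the cavern
print(recovered_mwh(stored, 0.45))  # simple storage: 45 MWh back
print(recovered_mwh(stored, 0.70))  # with heat recovery: 70 MWh back
```

Recycling the heat of compression, in other words, claws back a quarter of every megawatt-hour you store.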

Hydrogen – Hydrogen gas is a very dense fuel source, meaning that you get a lot of energy for the amount of fuel you use. However, you first have to produce the hydrogen. One way is to split water, just like plants do during photosynthesis. While plants use the power of the sun to split water, we can use electricity – this is called power-to-gas generation. The gas generated is usually hydrogen from water, but methane can also be produced from carbon dioxide plus water.

The hydrogen gas produced is then stored, similar to the compressed air storage described above. For more efficiency in storage, the hydrogen can be cooled and pressurized to be stored as a liquid. When electricity needs to be generated, the gas can be burned to heat water for conventional turbine generators, or it could be put through large fuel cells, as we discussed two weeks ago.
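A rough power-to-gas ledger, sketched in Python, shows why round-trip efficiency matters here too. The numbers are my assumptions for illustration (electrolysis roughly 70% efficient, hydrogen's lower heating value about 33.3 kWh/kg, fuel cell reconversion around 50%), not figures from the studies above:

```python
H2_LHV_KWH_PER_KG = 33.3   # assumed lower heating value of hydrogen

def surplus_to_hydrogen_kg(surplus_kwh, electrolyzer_eff=0.70):
    """kg of hydrogen produced from surplus grid electricity."""
    return surplus_kwh * electrolyzer_eff / H2_LHV_KWH_PER_KG

def hydrogen_to_grid_kwh(h2_kg, fuel_cell_eff=0.50):
    """kWh returned to the grid when the stored gas is reconverted."""
    return h2_kg * H2_LHV_KWH_PER_KG * fuel_cell_eff

h2_kg = surplus_to_hydrogen_kg(1000.0)   # store 1 MWh of surplus wind power
back = hydrogen_to_grid_kwh(h2_kg)       # ~350 kWh: roughly a 35% round trip
```

Under these assumptions only about a third of the surplus comes back as electricity, which is one reason burning the gas directly for heat can be more attractive than reconverting it.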

Caverns and mines can be utilized for storage, but Germany uses mostly hydrogen pipelines for storage, and has done so for many years. In fact, German hydrogen storage capacity is some 5,000 times greater than its pumped water storage capability. I’d worry about explosions. Remember the Hindenburg – that was hydrogen gas.

This is how pumped water storage works. The
picture shows the direction for daytime and nighttime,
but it could be anytime energy is in excess or needed. The
reason for the day and night labels is that you can generate
in the day, when more is needed, and store at night,
when energy usage is lower. In addition, using electricity
at night is cheaper (less demand = less cost).
Pumped Water - A new pumped water storage facility is near the approval stage in Montana. The Gordon Butte project will build a pair of 1.3 billion gallon (6.4 billion liter) reservoirs, one atop the butte and one 1025 ft (312 m) below, at the bottom of the butte. The U.S. and China have many of these facilities, the largest of which is located on the Virginia–West Virginia border (Bath County, 3 GW).

When there is excess energy in the grid, it will automatically be used to power pumps that will move water from the lower reservoir to the upper. This mass of water, positioned in the top reservoir, is a powerful source of potential energy.

During peak usage hours, the water is allowed to fall to the lower reservoir through a turbine that then powers a generator. At this point, the system acts exactly like a typical hydroelectric plant. These mechanisms are very efficient, returning 75-85% of the energy invested in them. The problems: one, you need sufficient space at two nearby locations at very different elevations; and two, reservoirs are very expensive to dig. I wonder if drought would be a problem.
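The Gordon Butte numbers are enough for a back-of-envelope capacity estimate using E = mgh. This is a simplification – in reality the head shrinks as the upper reservoir drains, and pump and turbine losses apply on both legs of the trip:

```python
G = 9.81  # m/s^2; one liter of water weighs about 1 kg

def stored_energy_gwh(volume_liters, head_m):
    """Gravitational potential energy of a full upper reservoir, in GWh."""
    joules = volume_liters * 1.0 * G * head_m  # E = m * g * h
    return joules / 3.6e12                      # 1 GWh = 3.6e12 J

capacity = stored_energy_gwh(6.4e9, 312)  # ~5.4 GWh of potential energy
usable = capacity * 0.80                  # ~4.4 GWh at an 80% round trip
```

So the pair of reservoirs could store roughly the output of a large 1 GW power plant running for four or five hours.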

The real advantage to pumped water storage over other large-scale storage methods is the timing. Pumping or generating can begin within just 5-7 minutes of declared need, while compressed air storage facilities take more than 30 minutes to ramp up.

Electric vehicles will need to be chargeable wherever they
are if we are to make them a source of energy storage
for the grid. Here is the new thing – wireless rechargers.
The top image shows them implanted in parking spots,
while the lower image has them embedded in the
parking blocks. Yes, those concrete obstructions in
parking lots are called parking stops or parking
blocks – bet you didn’t know that.
Electric Cars – One intriguing idea is to use privately and publicly owned electric vehicles as a storage dump for energy. The batteries of such cars can be connected to the grid in a two-way fashion. You plug them in at night to recharge the battery, but a V2H (vehicle to home) system also allows you to draw energy from the car battery in case your house power lines are down.

A study from 2011 used mathematics (eww!) to estimate the viability of electric vehicles as a large scale energy storage mechanism. In general, two things will have to happen. One, 10 million or more people (in U.S.) need to own these cars. And two, they have to be able to plug in their cars at work. Only with work and home charging will enough cars be plugged in at any one time so that a grid need will be met, either by pumping more energy into the car batteries or taking a bit of energy from each car.
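The scale of the fleet idea is easy to check with assumed round numbers. The battery size and the fraction each car lends are my assumptions for illustration, not figures from the 2011 study:

```python
def fleet_storage_gwh(n_cars, battery_kwh=24.0, lend_fraction=0.10):
    """GWh available if each plugged-in car lends a slice of its battery."""
    return n_cars * battery_kwh * lend_fraction / 1e6

# Ten million cars each lending ~2.4 kWh adds up to a serious buffer.
print(fleet_storage_gwh(10_000_000))  # 24.0 GWh
```

The catch, as the study suggests, is participation: the buffer only exists if millions of cars are actually plugged in, at work and at home, when the grid needs them.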

There are several other methods – large rechargeable batteries (Painesville, OH has a 1 MW vanadium battery in use), large flywheels, and thermal storage. You can investigate these yourself and figure out how best to make green energy pay off in the long run.

Contributed by Mark E. Lasbury, MS, MSEd, PhD

Tuffner, F.K., & Kintner-Meyer, M. (2011). Using Electric Vehicles to Mitigate Imbalance Requirements Associated with an Increased Penetration of Wind Generation. Power and Energy Society General Meeting, 2011 IEEE, 1-8.

Thursday, January 15, 2015

2-7 Offsuit: Is Cancer Just "Bad Luck"?

There are many forms of cancer that ravage the body, but the key feature they share is uncontrolled cell growth. Virtually any cell type can suddenly go rogue and start reproducing itself again and again – this is what we call a tumor. Some of these rogue cells venture to other parts of the body where they don’t belong and establish a new colony there – this is called metastasis. As cancerous tumors grow and spread around, they can do a number of things that endanger the life of the patient, such as interfere with organ function and steal nutrients from other cells or tissues.

This cartoon illustrates a general model for the development of cancer. A "benign" tumor is not considered cancerous because it does not invade other parts of the body. In contrast, "malignant" tumors, like that ugly looking thing above, are cancerous because they invade nearby tissues. If cells from a malignant tumor get into the bloodstream, they can establish life-threatening satellite tumors elsewhere in the body, making them all the more challenging to eliminate.

Cancer is caused by a change, or mutation, in a cell’s DNA. Our DNA contains tens of thousands of genes that encode the proteins that make our cells tick. Some of these proteins regulate cell division, but they are normally shut off after the job is done. A mutation that turns one (or more) of these regulatory proteins back on can turn that cell into a Xerox machine stuck on "copy". Since there are so many different types of genes that can mutate in a wide variety of cell types, a “one size fits all” cure is very difficult to conceive.

Scientists (and many pseudoscientists) have long been trying to identify things in our environment that cause mutations that lead to cancer. Others have argued that cancer is just “bad luck” and that our genes play a larger role. This is important to sort out:  should we invest more money to identify potential carcinogens in the environment or to find ways to repair “bad” genes?

Every now and then, someone gets lung cancer who never took a single puff on a cigarette. Why? To understand the answer, consider poker. You can study dozens of books on how to play to win, practice for 10,000 hours, pay hundreds of dollars to learn all the secrets from the professional players. But none of this will help you if the dealer gives you junk cards. To look at this another way, there are some people who start chain smoking at twelve and live to be 90 with no trace of cancer (perhaps breathing through a tube in their throat, but no cancer). That’s like a rookie at the poker table being dealt a straight flush. Long story short:  cancer is not always the patient’s fault, and a lack of cancer is not always indicative of a healthy lifestyle.

In Texas Hold’em poker, you begin with just two cards. Being dealt a 2 and 7, offsuit, is considered the worst possible hand you can get. In contrast, being dealt two aces is one of the best starting hands. The genes that combined to form your DNA are analogous to the cards you would be dealt at a poker table. Unlike the poker game, though, you can’t win by bluffing.

Researchers have found plenty of environmental agents that can mutate DNA. For example, exposure to UV radiation is one of the more notorious risk factors for skin cancer. But there are a few people who worship the sun and never get skin cancer. In addition, most children have not had extensive exposure to environmental carcinogens, yet, tragically, they can still get cancer. In 2014, it was estimated that 15,780 children and adolescents ages 0 to 19 years would be diagnosed with cancer and nearly 2,000 would not survive. Facts such as these support the notion that cancer is largely due to bad genes, not necessarily the environment.

Scientists at Johns Hopkins recently set out to tackle the question by constructing mathematical models of the disease. Their findings might take you by surprise: in the majority of cases, the reason a cell starts running all the red lights is a random mutation that occurs during cell division. In other words, lifestyle choices and even your genetic makeup play a lesser role in your chances of getting cancer. Let that sink in for a moment: RANDOM mutation – not mutation caused by UV light, engine exhaust, or some other carcinogen. Since the mutation appears to be a random mistake made by cell division enzymes, the authors dubbed this "bad luck".

DNA replication is a complex process in which the two strands are separated and used as a template to make a complementary second strand. But replication enzymes are not perfect (if they were, there'd be no evolution) and sometimes insert the wrong DNA base, causing a mutation.
This new study reminds us that every cell division contains an inherent risk that the daughter cell acquires a mutation that makes it divide like gangbusters. This doesn’t mean you should grab a carton of Marlboros to smoke as you suntan on the beach while devouring a couple extra-charred burgers for lunch.
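The "inherent risk in every division" argument can be made concrete with a toy probability model. The per-division mutation rate below is purely illustrative, not a figure from the Hopkins paper; the point is only that a tiny per-division chance of a dangerous copying mistake compounds with the number of divisions:

```python
def lifetime_mutation_risk(p_per_division, n_divisions):
    """Chance that at least one division produces the bad mutation."""
    return 1.0 - (1.0 - p_per_division) ** n_divisions

# Same per-division error rate, very different tissues:
quiet = lifetime_mutation_risk(1e-9, 1e8)   # few divisions: risk ~9.5%
busy = lifetime_mutation_risk(1e-9, 1e10)   # many divisions: risk near certainty
```

This mirrors the study's central correlation: tissues whose stem cells divide more often accumulate more "bad luck" mutations over a lifetime, independent of any carcinogen.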

Highlighted in this study was the finding that not all cell types give rise to cancer equally. Not surprisingly, tissues with a higher number of stem cell divisions are more prone to cancer, which helps explain why we don’t hear a lot about duodenum cancer (its stem cells divide far less often than those of the colon). Importantly, the researchers identified several types of cancer that are influenced more by our lifestyle choices or inherited mutations: colon cancer, basal cell carcinoma, and lung cancer.

The findings essentially assert that since cells divide they are veritable time bombs. Somewhere down the line a mistake is going to happen regardless of environmental insults, and if that mistake occurs in the wrong gene, cancer can ensue. These are noncontroversial statements and not news to most people. However, the idea that "most" cancers are due to "bad luck" is a more controversial conclusion. A major limitation is that the model did not incorporate some of the most common cancers, such as breast and prostate cancer, because the frequency of stem cell divisions is unclear. Readers would be wise to check out this article by David Gorski at Science-Based Medicine, which provides detailed insight into the strengths and weaknesses of the experimental design. The World Health Organization was so opposed to the message this study sends that they issued a press release critical of the study.

Obi-Wan (Ben) Kenobi famously said, “In my experience, there’s no such thing as luck.” Some scientists who take issue with the Hopkins study would agree with Ben. 
At the end of the day, since we don’t yet know how all genes operate, much less which ones you might have in your DNA, it is wise to take common sense steps to minimize your exposure to known carcinogens and take advantage of tests designed to detect cancer at its earliest stage. Bad luck may be a major factor in cancer, but there are plenty of simple lifestyle changes you can make to try to beat the odds.

Contributed by: Bill Sullivan
Follow Bill on Twitter: @wjsullivan

Tomasetti, C., & Vogelstein, B. (2015). Variation in cancer risk among tissues can be explained by the number of stem cell divisions. Science, 347 (6217), 78-81. DOI: 10.1126/science.1260825

Ward, E., DeSantis, C., Robbins, A., Kohler, B., & Jemal, A. (2014). Childhood and adolescent cancer statistics, 2014. CA: A Cancer Journal for Clinicians, 64 (2), 83-103. PMID: 24488779

Tuesday, January 13, 2015

Delicate Arteries Of Energy

Solar flares could take out the electrical grid – and have. A moderate flare in 1989 caused a power outage in Canada for millions of people for over nine hours. Studies show that a solar electromagnetic pulse (EMP) from a flare is much more likely to occur than an intentional EMP attack. How big was the flare in the image above? It was about 100,000 km (62,000 miles) long.
Every once in a while I have a morning where bad luck just seems to follow bad luck. One thing leads to another, and every turn brings a new and bigger disaster. This is called a cascading failure, and it is a real possibility for the American electrical system. A 2013 study stated that cascading failure of the grid isn't just possible, it's inevitable; it's just a matter of when.

It occurs to me that the subject of today’s post, the national power grid, is a delicate artery of modern society. Considering our reliance on electricity, an energy hemorrhage caused by a severe weather event, a breakdown in infrastructure, a solar event of just medium size, or just a few cleverly placed acts of terror could bring us to our knees. Do you know why? Do you know how electricity comes to your house? Let’s look at the national grid and see why it's vulnerable.

We stated last week that the majority of electricity is produced by generators attached to turbines, the turbines being spun by steam or water. In 2012, there were more than 19,000 generators operating at 7,000 power plants in the United States (counting those producing 1 megawatt or more). That’s one power plant for every 45,000 people in the U.S., or one for every 550 square miles in the continental U.S., on average.
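Those per-plant figures are easy to sanity-check with simple division; the population and land area below are rough round-number assumptions, not official statistics:

```python
# Back-of-the-envelope check of the per-plant figures quoted above.
# Population and area are rough assumptions, not official figures.
plants = 7_000
population = 314_000_000   # approximate 2012 U.S. population
area_sq_mi = 3_800_000     # approximate U.S. land area, square miles

print(round(population / plants))  # about 45,000 people per plant
print(round(area_sq_mi / plants))  # about 540 square miles per plant
```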

You can see that the loss of one generating station could affect the distribution of power, and a few going down at the same time could cause a significant problem. Luckily, there are backup generators that can be brought online in case of problems or when more power is needed, but the point still stands.

Turbine blades spin a shaft. The shaft is attached to a magnet that then also spins. It passes by coils of wire; this generates electricity.
Each generator produces electricity, but to be more efficient, the spinning magnet passes by three wire coils instead of one, so that three lines of electricity are produced from one generator (see animation). This is referred to as three phase power generation, and its presence can be noted in the electrical lines you see every day.

When alternating current is generated it switches from one direction to the other and back, reaching maximums on either side of a middle line. If you graph it out, it looks like a sine wave (see picture below). Energy is lowest when it is near the zero line, either going down or coming back up. By having three lines of electricity generated, each 120 degrees out of phase, there is almost always a line near a maximum value. This makes for efficient power generation. Four lines isn't much more efficient than three, but three is much more efficient than two – draw it out and see.
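There is a neat mathematical payoff hiding in those 120-degree offsets: the squared instantaneous values of the three sines always sum to exactly 3/2, so the total power delivered to a balanced load is constant rather than pulsing. A quick numeric check (Python, unit amplitudes assumed):

```python
import math

# Three sine waves spaced 120 degrees (2*pi/3 radians) apart, unit amplitude.
def three_phase(t):
    return [math.sin(t + k * 2 * math.pi / 3) for k in range(3)]

# The sum of their squares, which is proportional to total instantaneous
# power into a balanced resistive load, is the same at every instant: 3/2.
for t in (0.0, 0.7, 1.9, 3.3):
    print(round(sum(v * v for v in three_phase(t)), 6))  # 1.5 every time
```

Try the same check with a single sine: its square swings between 0 and 1, which is exactly the pulsing that three-phase generation smooths out.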

Each coil set in a generator produces a current that is 120 degrees out of phase with the others; in total this produces a much more even current for high demand uses.
For equipment with high energy requirements, as in industrial settings, three lines are needed to provide enough energy. Having three lines out of phase provides a nearly even power output, since one line is almost always at a maximum. This is the biggest advantage of three phase generation.

The three lines of electricity produced by the generators are inserted into the national electric grid. This is the interconnected network of lines and stations that move electricity from the generating stations to the consumers. Every electrical line you see, from the one entering your house, to the high tension lines that tower over the farm fields, is part of the electrical grid. There are also the lines you can’t see because they are buried – so the grid is even bigger than you think.

When electricity leaves the generating station it is stepped up to a much higher voltage (155,000 to 765,000 volts) so that it can be transferred long distances along the transmission network in a smaller number of wires and with lower energy loss. These high-tension lines travel to the local power company that will distribute it to customers via a local distribution network that it owns.
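The reason stepping up the voltage cuts losses follows from two textbook relations: delivered power is P = V x I, and resistive line loss is I^2 x R. Raise the voltage 20-fold for the same power and the current drops 20-fold, so the loss drops 400-fold. A minimal sketch with round, purely illustrative numbers (not real line data):

```python
# Resistive loss in a line carrying power P at voltage V through total
# line resistance R. All numbers below are illustrative, not real line data.
def line_loss_watts(power_w, volts, resistance_ohm):
    current = power_w / volts             # I = P / V
    return current ** 2 * resistance_ohm  # P_loss = I^2 * R

P = 100e6  # 100 MW delivered
R = 1.0    # one ohm of hypothetical total line resistance

print(line_loss_watts(P, 25_000, R))   # 16,000,000 W lost at 25 kV
print(line_loss_watts(P, 500_000, R))  # 40,000 W lost at 500 kV: 400x less
```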

The high tension line on the left carries two sets of four three phase systems (1 and 2). The four lines in each group (1a, etc.) are of the same phase. Nearer your house, the lines look more like what is on the right. The three phase lines are on top; much of the other stuff is for TV, telephone, or for distribution of the electricity to houses.
Look at the high-tension lines in the picture to the left or those near you. You will always be able to pick out a series of three lines, along with ground wires that protect the grid from lightning strikes and the like. Those three lines are the three phase power lines, one for each line of electricity.

At power substations, the voltage is stepped back down (to around 10,000 volts), but not yet to the level at which it will enter your house. From here, the electricity passes through a bus that splits it in many directions and into the distribution network. At many bus splits, there will be higher voltage and lower voltage lines, depending on the distance and customer need.

The three lines (still three phase power) move out into the neighborhoods and get stepped down to the customer usage level, 240 volts. There are buses on the line that split electricity off to each house; this is the first time that the lines don’t travel in their group of three. A single line enters your house via your electrical meter, which records the amount of electricity delivered over time in kilowatt-hours.
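The meter's arithmetic is straightforward: energy in kilowatt-hours is average power in kilowatts multiplied by hours. A minimal sketch (the appliance and the $0.12/kWh rate are made-up examples, not a real tariff):

```python
# Energy recorded by a residential meter: kilowatts times hours = kWh.
def kilowatt_hours(avg_power_watts, hours):
    return avg_power_watts / 1000 * hours

# A 1500 W space heater running 4 hours a day for 30 days:
energy = kilowatt_hours(1500, 4 * 30)
print(energy)                    # 180.0 kWh
print(round(energy * 0.12, 2))   # 21.6 dollars at a made-up $0.12/kWh rate
```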

We are moving toward smart grids. They are smaller than a national grid, use different sources of generating power, and can move electricity in two directions, so customers can generate power and contribute to the grid.
From here, all you need is a satisfactory wiring plan for your house and you can plug in your charging cord and juice up your phone, or plug your blender into an outlet and make some margaritas. That is, until the grid goes down and you have a blackout.

The grid has some built-in protections against blackouts, mostly through redundancy. No part of the grid can really be described as grid-like unless it is redundant. Power must come from more than one source, so that electricity can be diverted to areas of higher need, and so that a loss of some part of the grid can be compensated for by other sources. Notice that when a tree falls across a line, a specific area goes black. That represents the part of the distribution network that is not redundant.

If that were to occur in the transmission or distribution networks (although it would have to be an awfully tall tree), then the redundancy would pull power from another source and through other grid lines. The vulnerability lies in taking out several high tension lines that are the redundancy for a large area, or more likely, in taking out the high voltage transformers.

This is a main power step up transformer, the type that is
vulnerable to terrorist attack. This one doesn’t have lines
connected to it since it is under construction. You can see
how it would be costly and time consuming to replace.
A report by the Congressional Research Service in mid-2014 stated that the high voltage, step up transformers are a likely target for terrorists because they serve large areas, are expensive and hard to replace, and because hitting only a few could cripple a large population. In addition, the transmission lines are just out there in the open. There's no way that they can be protected from terrorist attacks. See now why the grid is so vulnerable?

You may notice that there are no dumping grounds for excess energy here. The electricity made goes directly into the lines; copper wires can’t hold electricity, they just transmit it. But what happens if you make too much? Is it wasted? Next time we will look at the burgeoning field of large-scale energy storage.

Contributed by Mark E. Lasbury, MS, MSEd, PhD
As Many Exceptions As Rules

Parfomak, P. W. (2014). Physical Security of the U.S. Power Grid: High-Voltage Transformer Substations. Congressional Research Service Reports.
Bashan, A., Berezin, Y., Buldyrev, S., & Havlin, S. (2013). The extreme vulnerability of interdependent spatially embedded networks. Nature Physics, 9 (10), 667-672. DOI: 10.1038/nphys2727