Electric trains and delayed gratification

When I was a kid (I think maybe 11), my brother Jim and I decided we wanted to have an electric train set.

We looked at the different sizes (called “gauges”); HO gauge is sized between the two other popular gauges, and we decided it would be best for us: small enough to build a decent layout in a fairly small space, yet large enough to look right and to scale. Since our intent was to create a permanent layout with a town, switch yard and countryside, we wanted something that had lots of available buildings, train cars and other scale accessories. HO gauge had (and still has) probably the greatest selection of accessories, so it fit our needs perfectly.

Once we had sorted out exactly what we wanted, we went to the various catalogs and selected our source. We knew we had to start small because of budget, but we wanted a good basic train set: enough track to make a large oval, a couple of switches, and of course the train itself. All told the cost was $28.00, which was a princely sum to two kids.

We never seriously considered hitting Dad up for the money; our family was not poor (Dad was a dentist in a tiny little Illinois town), but we weren’t wealthy either, and ours just wasn’t a family of lots of gift-giving, nor of our parents paying for all of our stuff. They covered the basics, of course, and bought us toys now and then, so we never felt deprived in any way, but both Mom and Dad had lived through the Depression and were frugal. Like other kids we had chores and a small allowance, but most of the time we were taught to look to our own resources and “save up” if we wanted something. So Jim and I put together a plan to save up the $28.00 we needed to pay for our train set. I recall I was old enough to mow lawns, so Jim and I pooled our earnings and set aside our money all one spring and summer. This was obviously long before Excel, and we didn’t know anything about accounting spreadsheets, so we tracked our progress by hand on a sheet that showed each entry: how much we had, how much was left to go, and a projected date of completion.
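That by-hand sheet could be sketched today in a few lines of Python (purely illustrative: the entry amounts, dates and the simple average-rate projection are all hypothetical, not the actual figures from that summer):

```python
from datetime import date, timedelta

GOAL = 28.00  # cost of the train set, in dollars

def project_completion(entries, start):
    """Given (days_since_start, amount) deposits, return the running
    total, the amount still needed, and a completion date projected
    from the average savings rate so far."""
    total = sum(amount for _, amount in entries)
    remaining = max(GOAL - total, 0.0)
    days_elapsed = max(d for d, _ in entries)
    rate = total / days_elapsed            # dollars per day, on average
    days_to_go = remaining / rate if remaining else 0
    return total, remaining, start + timedelta(days=days_elapsed + round(days_to_go))

# Hypothetical lawn-mowing deposits pooled over a spring
entries = [(7, 2.00), (14, 3.50), (28, 5.00), (42, 4.50)]
total, remaining, done_by = project_completion(entries, date(1962, 4, 1))
```

At that pace the sheet would show $15.00 saved, $13.00 to go, and a projected finish in mid-June.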

By fall we had enough money to order our train set.

I’ll never forget how exciting it was for us when we finally ordered the train from the Murray catalog. When it arrived, we set it up on the floor of our living room and played with that train set for hours. This was the first thing that we had specifically worked toward and planned for; I think it’s safe to say that I’ve never bought something since then that’s been more gratifying. In retrospect, I know Dad could have easily funded us, but he knew that we would appreciate much more something we had worked so hard for.

I also think a big part of what made it so much fun was the anticipation: the delayed gratification. We worked literally for months to get the money to buy our train. Today, it’s rare to hear of anyone paying cash. Credit cards have become so common that when someone says they don’t have any, the automatic assumption is that they must have some kind of problem; maybe they’ve had trouble with cards or declared bankruptcy, so no one will extend them credit. I think a big part of that is the message that delaying gratification is a bad thing: if you want something, why not go get it right away and start enjoying whatever benefit you’re supposed to derive? Don’t worry about paying for it; you can use your trusty MasterCard and spread the pain over months or years! Unfortunately, you can find yourself still paying for something long after its useful life has ended. Combine the unwillingness (inability?) to delay gratification with planned obsolescence and you’ve got a marketer’s dream come true. And a society that’s never content, and never out of debt.

I think of the feeling Jim and I had when that train set finally got to our house and it strikes me that maybe we’ve missed a valuable lesson there somewhere.

Posted in Family, General commentary on the world as I see it...

Grandpa Shaddle’s workshop

When I was growing up in north central Illinois, my grandparents lived just across the street. Most houses in the Midwest have full basements, but my grandfather’s was special. Grandpa Shaddle loved woodworking and had a workshop in his basement, with an old workbench, a power jigsaw, sander and wood lathe, along with the usual assortment of hand tools. He would store walnuts down there to snack on; not bags of store-bought walnuts, but the real thing in the shell that he’d crack and then use some retired dental picks to pry out the good part. I remember it as being heaven, especially in the winter. Dry and warm, and smelling of walnuts and wood shavings. I can still conjure up that smell, over half a century later.

My grandfather was born in 1883, and Dad was in his 30’s when I was born in 1951, so there was a nearly 68-year age difference between Grandpa Shaddle and me. But I was Dad’s firstborn son, and I’ve been told that when I was born people said I was the spitting image of my Grandpa, so I’m guessing he felt close to me in spite of the age difference. I didn’t really think much about the age difference at the time; I just figured everyone’s grandfather was that much older than their grandkids.

When I was 9 or 10, for some reason I got interested in Grandpa’s workshop. I think I wanted to make some kind of an airplane model or something, and his basement looked like a great resource. Or maybe I pestered him to let me use his equipment; it’s a bit fuzzy to me all these decades later. But in any case I spent time with him in his basement learning an appreciation for wood and wound up getting sawdust in my veins.

He taught me to respect hand tools; to understand that there is a right way and a wrong way to work with wood. He taught me shop safety and cleanup (although the latter lesson may not have sunk in completely) and how to use the hand tools to produce things of beauty as well as utility. I think much of my appreciation for wood and woodworking can be traced back to how it reminds me of those afternoons in Grandpa Shaddle’s basement.

Years later I got to thinking about our age difference. Here’s this old man (nearly 80) who took a kid of 9 or 10 under his wing to try to transfer his love of woodworking to another generation. It must have been challenging for him; I’m sure my attention span was short and his hands probably hurt (after years of old-school dentistry I’m sure he had arthritis), but he was extraordinarily patient with me.

I think of that time when I get out my woodworking tools. Over 50 years later, getting to work in my shop is still one of the most satisfying things I can do.

 

Posted in Family

Total recall

I grew up in a small farming community in north central Illinois. It’s still pretty much the same some six decades later, but with different players and the Internet. Everyone knows everyone else; people belong to bowling leagues and it’s a great place to raise kids, from what I can see. I have lots of great memories from growing up there.

One of them is of an event that could have ended badly, but didn’t partly because of the nature of small towns and the people that live there. I was maybe 3 or 4 and decided it would be a great idea to go visit my dad in his dental office. It seemed like a long way (although it was only about 4 or 5 small-town blocks), so I got on my trike and headed out.

Somewhere along the way things went sideways, because my idea of how to get to Dad’s office was hazy at best, and I wound up pedaling down the middle of the highway, traffic backed up behind me. Fortunately for me, Johnny Metz (a close family friend) came to my rescue and got me back home safely. I can distinctly remember holding his hand as he put my tricycle in the trunk of his car (a ’50s-era Chevy or similar, with the big trunk and rounded taillight housings). It’s one of those crystal-clear recollections that we all have, where we can transport ourselves instantly into the scene. Interestingly, in my mind’s eye, part of the time I’m standing off to the side just a bit, watching this happen, while part of the time I’m seeing it from “ground level,” through my 3-year-old eyes. Either way, it’s as clear to me now, six decades later, as it was then.

But here’s the thing: it never happened to me. It was my brother Jim. Everything was as I remember it, but it was Jim and not me. Years later I was telling the story to friends in front of my mother, who interrupted me with the disturbing (and at first unbelievable) news that it was not my memory at all! Apparently when I heard the story being told by my parents as a little kid, it had such an impact on me that I put myself into the scene instead of Jim.

There’s a growing body of research today that shows that eyewitness descriptions are not very reliable. The metaphor that’s emerging around memory is that, instead of a videotape that plays in your mind, it’s more like telling (and retelling) a story. Every good storyteller knows that a story will change slightly (or not so slightly) depending upon the purpose of the story and who is listening. While the basic story stays the same, details can (and do) vary from telling to telling. It’s similar with recollections: while we may think we see all the details clearly, and all we need to do is focus on the background for it to jump into clarity, the reality is very different. Our minds (the “storytellers” here) take the parts of the event that had some emotion or significance around them, and then fill in details that help to paint the picture, rather than replaying exactly what actually happened as a video recorder would do.

Similarly, we’ve all had experiences where we saw a particular event, and then compared notes with other people who watched the same event. There is always a difference in what people see and how they see it; in some cases a dramatic difference. People notice different things, and in some cases remember things that didn’t happen, or at least happened differently, depending upon who you talk to. Add that to the “story-telling brain” metaphor, and what emerges is a very sketchy connection between historical events and how they are recalled

There’s a famous study of people being shown a videotape of a group of people tossing a basketball around. They are asked to count how many times the people wearing white T-shirts pass the ball. But then they are asked to describe other details in the video. (Spoiler alert: a significant majority miss a very obvious and important event in the video while focusing on the accomplishing the assignment). While you may say it’s not fair to extrapolate from that to questioning an eyewitness account; “These people were being told to focus on something specific so it makes sense they’d miss other details!” True, but researchers have also shown how easy it is to insert details into eyewitness accounts that weren’t ever in the original. For example, showing a video of a street scene and asking the participant to recall the color of a particular car in the background which, in fact, did not appear at all. Participants would name a color, and in many cases, even fill in additional details where none existed. In other words, the suggestion that there was additional detail became reality for the participants, whose brain then filled in details as part of a “story.”

This has created obvious problems in criminal proceedings. A number of convictions based on eyewitness accounts have been vacated when DNA evidence established the accused could not have been at the scene. Juries frequently hear instructions that eyewitness accounts are not to be considered absolutely reliable. While this has always been true to some extent (several eyewitness accounts are frequently related to allow jurors to compare them), brain research is showing how unreliable memory can be.

As Josh Billings said, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” (from Everybody’s Friend, or Josh Billings’ Encyclopedia and Proverbial Philosophy of Wit and Humor).

At the very least it’s making me less willing to speak with absolute conviction when I talk about my memories.

 

Posted in Family, General commentary on the world as I see it...

Looking for keys under a streetlamp

Why do RCTs of nutritional supplements often show no benefit?

As I pointed out in a previous post, many of the large intervention trials over the past few years that have looked at the effect of nutritional supplements on health conditions have not shown a benefit. Some have even shown trends toward harm. How can we understand this?

There could be straightforward reasons, taking issue with the study design or the conclusions drawn. Sometimes studies are flawed: the researchers used a poor-quality supplement, they didn’t use a high enough dosage, they were looking at the wrong end points, they didn’t give the supplement long enough, they didn’t control for diet or other lifestyle variables, they gave it too late, and so on. And while all of these may be valid problems with the studies, they seem a little like hand-waving to me. I’ve been thinking that maybe the problem is much more basic than that. Maybe the real issue is that the randomized, double-blinded, placebo-controlled crossover trial is simply the wrong way to test the efficacy of nutritional supplements.

For starters, each person’s physiology is unique. (I mentioned Roger Williams and his concept of biochemical individuality in a previous post.) Even with medications, a person’s response to a drug depends upon a large number of variables, including their genetic makeup, their nutritional status and even the specific foods they eat. A simple example to illustrate: naringenin, one of the compounds in grapefruit juice, can change how a drug is metabolized by affecting key enzyme pathways in the liver. Changing the activity of those enzymes changes how quickly the drug is eliminated from the body; some drugs even carry warnings not to drink grapefruit juice while taking them. The whole construct that it is possible to reduce the variables down to just one (the purpose of the RCT) is being challenged by some scientists, since every person is going to have their own unique response. Still, for most researchers these limitations are considered acceptable since there’s no other game in town.

Secondly, and I think most importantly, this procedure, which was developed primarily to test single compounds and is used in developing pharmaceutical drugs, does not lend itself well to complex substances like foods. For example, a chemical analysis reveals hundreds of different chemicals that make up garlic; furthermore, where it was grown, when it was harvested and how long it’s been stored will all change its chemical composition. It’s still garlic, but it will differ significantly from other garlic, so how can it be considered a single (and consistent) variable? What that has led to is the attempt to isolate “the active,” meaning whatever is causing the desired effect. For example, garlic is known to help reduce elevated cholesterol by interfering with a particular enzyme (HMG-CoA reductase) that controls a key step in the de novo (in the body) production of cholesterol. The “active” ingredient is believed to be allicin, the compound that gives garlic its characteristic pungent odor and flavor. But as I said above, garlic has hundreds of different chemicals, and it’s totally within reason that no single compound accounts for all the known effects of the herb; it may be the combined effect of several or many of these compounds. And that’s just garlic. Consider the challenge of trying to pin down which specific food or constituent of the diet is contributing “the” effect that’s helping, when a diet consists of literally hundreds of different components.

Here’s another example. There is a chemical found in grapes called resveratrol. It’s a very powerful antioxidant, and epidemiological studies indicate that it’s associated with lots of health benefits. Here’s the challenge though: when isolated and studied as a single nutrient, it takes much higher levels to reach the effect seen in the epidemiological work, which measures its effect as part of a diet. That would indicate that the chemical by itself is not as effective as when it’s consumed with other things in the diet.

My contention is that we will NEVER find a single nutrient that, by itself, provides dramatic health benefits. I think we have evolved to require lots of different things from lots of different foods that, when consumed together, contribute to improved health and reduction of disease. A friend of mine talks about “Eating a rainbow,” meaning that we should choose a wide variety of colorful fruits and vegetables, as the components that provide the color (such as resveratrol) are also what carry the health benefits. So it’s no single food, but a wide variety of different foods, consumed as part of a highly varied diet.

And this is what this post has been leading up to: I think scientists will never be able to use the randomized controlled trial to establish which nutrients are most beneficial, because they are looking at the problem the wrong way.

There’s an old joke about a guy walking down a street at night and sees another person looking around for something under a streetlight. He goes over to him, and after being told the other person is searching for his car keys, he decides to help. But after a few minutes of fruitless searching, he asks exactly where the other guy was standing when he dropped his keys, and his new friend points to a spot about 20 yards away. Incredulous, our friend asks why he’s searching here rather than where he dropped his keys, and is told “There’s no light over there. I couldn’t see a damn thing, so I decided to search over here under the streetlight.”

I think our scientist friends are looking where the light is, not where the keys are.

Posted in Nutrition and eating, Science

Nerdly aside: randomized controlled trials (RCTs) defined

The placebo effect is a well-known phenomenon in medicine. If people are told they are being given an extremely powerful and effective new medication, a significant percentage will improve, even if the product they are given is actually inert and has no benefit at all. The power of suggestion is so strong that they will actually improve, just because they believe they are supposed to.

To determine whether a particular approach is an effective intervention for a given medical condition, it’s necessary to eliminate the placebo effect and reduce the variables to one: the test product. The gold standard for accomplishing this is called a randomized controlled trial, or RCT. Or, to give it its more complete and descriptive title, the randomized, placebo-controlled, double-blind, crossover trial.

To break it down:

  • Randomized means that a fairly large group of people with similar characteristics (age, sex, general health status, etc.) is divided at random into two groups. Every person in the trial must have exactly the same chance of being in either the test or the placebo group, so either a random-number-generating computer program or a published randomization table is used.
  • A placebo is a substance or treatment meant to fool people into thinking it’s the same as the product or procedure being tested, but that has no benefit. So if it’s a medication being tested, a pill is made to look exactly the same as the test product, but is made of an inert substance, usually sugar.
  • Double-blind means that neither the test subjects nor the people conducting the test know whether a given subject is in the product or placebo group, to avoid unconsciously causing a bias by anyone (testers or subjects).
  • When the test begins, everyone is told the same thing and then given their coded product (again, neither the subjects nor the testers know which is the real McCoy and which is the placebo). Everyone follows the instructions carefully (hopefully) and extensive records are kept during the test period. Enough time is given to show some significant change, and then the groups are switched (the “crossover” part). Now the people who were taking the placebo are getting the test product, and those originally taking the test product are getting the placebo. If the product being tested has a benefit, the original test group will show improvement while the placebo group does not (or shows less improvement, due to the placebo effect). During the crossover portion of the trial, the former placebo group will begin to improve (assuming the test product works), while the group that was taking the test product (but is now taking the placebo) will regress.
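The bookkeeping described above can be sketched as a toy simulation (a hypothetical illustration, not a real trial: the group size, improvement rates and random seed are all invented for the example):

```python
import random

def run_crossover(n=200, placebo_rate=0.30, product_rate=0.60, seed=7):
    """Simulate a two-period crossover trial: subjects are randomized
    into two groups, each subject 'improves' with a given probability
    on placebo or on the test product, and the groups swap treatments
    in period two."""
    rng = random.Random(seed)
    subjects = list(range(n))
    rng.shuffle(subjects)                      # the randomization step
    group_a, group_b = subjects[:n // 2], subjects[n // 2:]

    def improved(group, rate):
        # Count how many subjects in the group improve at this rate.
        return sum(rng.random() < rate for _ in group)

    return {
        "A_p1_product": improved(group_a, product_rate),  # period 1
        "B_p1_placebo": improved(group_b, placebo_rate),
        "A_p2_placebo": improved(group_a, placebo_rate),  # crossover
        "B_p2_product": improved(group_b, product_rate),
    }

res = run_crossover()
```

If the product works, each group improves more in its product period than in its placebo period, which is exactly the signature the crossover design is built to reveal.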

Anyhow, this process is supposed to weed out any possible placebo effect and, as I said, is considered the gold standard of clinical trials.

Posted in General commentary on the world as I see it..., Nutrition and eating

It’s 2016!

Hard to believe, but another year is behind us and a new one just beginning. Actually, that’s true every single morning, but it seems we only take note of it on this day. It would be interesting to find out how some guy centuries ago (I’m not being sexist; back then it was always “a guy”) decided that this would be the day we’d mark as the beginning of our annual trip around the sun, and not any of the others. I’m guessing it had something to do with the winter solstice, but most of those folks were pretty good at figuring out the exact day of the solstice, so setting a day over a week later seems a little arbitrary to me. Maybe it was tied to the end of the celebration of the solstice or something. One day I’ll take some time to research it and figure it out.

So here we are at the beginning of another year. 2015 was a pretty good year for us, I’d say, although it had its share of ups and downs. Work was good: productive, busy and rewarding; Cathy enjoyed her second full year of retirement and we both have stayed healthy. We’ve stayed close to family; I got to see my brother Jim and his daughter Devon, and my sister Kathleen a number of times throughout the year (it’s a lot easier since Jim and I get to see each other for work as well as pleasure), and my travel schedule gives me the opportunity to see them all regularly. On Cathy’s side, we had a Chant Family Reunion in the summer; her sister Cindy took a job that required her to be in California a significant amount of time so she’s been staying with us while here, and we went to Santa Cruz for our nephew’s Oktoberfest…you get the idea.

It hasn’t all been perfect; losing my friend Mike to a motorcycle accident was very sad; it reminds me how fleeting everything can be.

We have some big plans for 2016; a kitchen remodel and a trip to Africa are the two biggies in the first half of the year. We plan on starting the kitchen in just 3 weeks, and plan on it being finished well in time for our trip in late May. Also the usual things: getting more exercise, eating better and so forth.

Why do we make resolutions on New Year’s? It’s an interesting custom. We pick a day (I grant you choosing January 1 is not arbitrary; there’s a lot of societal support for doing so), and on that day commit to being a different person in a variety of ways. I think it’s partly related to what I wrote about a few posts back about “the good life.” We have a mental picture of who we think we are, or perhaps who we’d like to be, and we work to make that picture become reality. Of course most of our resolutions have a pretty short half-life; habits are hard to change.

It strikes me that life is a journey, so maybe it’s good enough to just keep trying. And our resolutions on New Year’s Day are a way for us to bring back into focus what type of person we would like to be.

So we’ve got that goin’ for us. Which is nice, I think.

Posted in Family, General commentary on the world as I see it...

Paradigm shifts, correlations and intervention trials

I have read that paradigms change from the outside, and do so slowly.

I got into the field I am in now (therapeutic nutrition) over 40 years ago. Back then, the conventional wisdom of the medical community was that vitamin supplements created nothing more than expensive urine, except in cases of frank vitamin deficiency. If you weren’t on a starvation diet, you would get everything you needed from your diet and would not need to take a vitamin supplement. And in actual fact, since the advent of food fortification in the Forties and Fifties, deficiency diseases such as scurvy (vitamin C deficiency), beri-beri (vitamin B1 deficiency) and so forth are extremely rare, at least here in the US. But we believed that vitamin supplements were needed anyway, for several reasons. First, our food supply had shifted from what our parents and grandparents ate (primarily home-grown or fresh produce, locally produced and minimally processed) to highly processed food where the nutritional value was largely compromised. Secondly, changes in our environment (air and water pollution) and in lifestyle habits (cigarette smoking, dependence on pharmaceutical intervention for every ill, etc.) had all contributed to an increased need for particular nutrients just to stay healthy. Thirdly, there was a concept pioneered by Roger Williams and others called “biochemical individuality.” Williams’ book showed that there was tremendous variation in the biochemistry of humans, and since activation of important enzymes depends upon the availability of their nutrient cofactors, there was also tremendous variation in vitamin and mineral requirements. This meant that I might require hundreds of times more vitamin B6 (for example) to activate my enzymes than the person next to me, making supplementation of that vitamin necessary for me to function normally. It would be a “functional vitamin deficiency” as opposed to a traditionally understood deficiency. And lastly, we believed that vitamins had functions and effects that went beyond simply preventing or treating deficiencies.
For example, the level of vitamin C required to prevent scurvy is about 30 or 40 milligrams per day (about what would be found in an orange). But based on work done by Linus Pauling and others, doses hundreds or even thousands of times higher were shown to be effective in preventing or treating colds and flu and even serious diseases such as cancer.

In the Seventies this was the realm of “alternative,” “preventive” or “holistic” medicine. There were a few studies that supported these beliefs back then, but those of us in the therapeutic nutritional supplementation field were confident that much more support would emerge over time.

Fast forward to today. Terms like alternative, holistic or preventive have given way to “functional” or “integrative.” To me this indicates a less polarized view of medical options, where “allopaths” are on one side and “homeopaths” on the other and never the twain shall meet. Instead we’re incorporating those aspects of all disciplines that work and make sense, and this now includes nutritional supplements, in much higher levels than what is required to prevent deficiency disease. TV shows like Dr. Oz have popularized what used to be avant-garde or considered unscientific, and now the clinicians who tell you that supplements give you expensive urine are in the minority. I read recently that something like 80% of gastroenterologists who were polled said that they recommended or provided probiotics to their patients, and most studies show a significant deficiency of vitamin D across much of the United States during winter months. There is still controversy, but it’s more arguing over nuance, as opposed to calling one another frauds. The paradigm is definitely shifting.

Part of this is due, I believe, to the incredible explosion of studies showing that the “traditional” approach to health care is well suited for crisis intervention, where powerful pharmacological agents or surgical intervention are required, but not so well suited for the chronic diseases we see today, such as type 2 diabetes, obesity, cardiovascular disease and so forth. Most of these are correlational studies, where large populations are evaluated for the presence or absence of disease, and then different variables are examined to see what associations might emerge. Almost without fail, people who have the chronic diseases I mentioned above choose a diet high in refined sugar and fat and low in nutrient value and fiber. To say it differently, the lifestyle choices people make have a direct correlation to health. Eat a diet high in fiber and micronutrients and you reduce your risk of disease; eat a “western-style” diet (high in refined sugars and fat, low in micronutrients and fiber) and your risk of chronic disease goes up dramatically. And if you can’t (or don’t want to) consume a more micronutrient-rich diet, take a supplement and you’re good to go.

Maybe.

While there’s a strong correlation between making better lifestyle choices and reducing the risk to disease, it may or may not have anything to do with the micronutrient content of the diet (with its corollary of vitamin supplementation). There could be other factors that are contributing to the undeniable observation that we can benefit from healthier lifestyle choices; it could even be a simple coincidence (unlikely but not impossible). For many people (I count myself as one of them), the correlational studies are enough. I take supplements, try to exercise and select a healthier diet because I think the benefits (potential and proven) far outweigh the downside. And it’s tempting to say that’s enough extrapolate to the population at large; people should eat a healthier diet, exercise and take supplements and their health will improve. But for others, that is not enough. The say “correlation does not prove causation; I need a definitive cause-and-effect relationship to be established before I’ll believe.

And there’s a justification for this conservativism, at least from a policy approach. If, for example, we are able to clearly prove that, say, taking vitamin D in winter months reduces the number of colds in a given population by 40%. Furthermore, let’s say we have established that the cost of the vitamin D works out to about $.10 a day for everyone, while the cost to business and schools in lost productivity amounts to an average of $1.00 per day per person. And one last consideration (safety):  the amount of vitamin D necessary to do that has never been shown to be harmful to anyone. And while this is stretching the science and economics a bit, it’s possible that everything I’ve said could be true. With the facts as I’ve laid out, it would obviously make sense to get everyone the ten-cents-worth of vitamin D every day. But how would that dime for every day for every person get paid for? (It adds up quickly; for a family of four that would be nearly $150 per year; a small town of 10,000 would be shelling out almost $375,000 every year.  Try to get taxes raised to pay for that!

Thus, many people (especially policy-making government types) push back pretty hard against the notion that low levels of micronutrients (which could be fixed with supplements) are the cause of our health woes; they want it proven beyond doubt before they buy in (figuratively as well as literally).

How do you prove cause and effect? Intervention trials. This is where a group of people at high risk for a specific disease is selected; part of the group is given an intervention (nutritional supplements, for example) and another part is not. Both groups are followed over time, and if the intervention group stays healthy while the control group gets sick, you have your proof. Of course the trial has to be replicated a number of times to make sure it’s not a coincidence, but you’re on your way.

But there’s a problem. A puzzling thing I’ve observed over the last several years is that the majority of large intervention studies of nutritional supplements have not shown benefit. For example, one study (the “Women’s Health Study”) looked at the effect of vitamin E in reducing heart disease in a large group of women (following nearly 40,000 participants for more than 10 years) and found no benefit to the supplement. Several of these studies even appear to show that taking supplements is a bad idea and could lead to harm: another study (the “SELECT” trial) concluded that fish oil consumption increased the risk of a particularly virulent form of prostate cancer in men. This is of course not universal, as some studies have shown significant improvement, but there hasn’t been the flood of positive studies we had expected. As you might expect, there are bones to pick with the way these and other similar studies are designed, conducted and interpreted; nonetheless it surprises me that there are not more positive studies.

It got me thinking that there might be a problem with the way this question is being approached.

Posted in Nutrition and eating

Yeah, but…

In my previous post I talked about how it’s important to consider opposing viewpoints when examining what you believe. My position is that while that’s generally true, there are situations where opposing perspectives should not be given equal time. In other words, not all viewpoints deserve equal consideration. I used the example that, when considering how the pyramids or Stonehenge were created, it’s OK to dismiss “space aliens did it” without critically examining that viewpoint.

The other side of that coin is that paradigms NEVER change from inside the system, so good ideas that should have been given a careful look have historically been dismissed as ridiculous: “Everyone knows that can’t possibly be true.” I heard once that there are three (tongue-in-cheek) stages to the acceptance of a novel idea by a former critic:

  1. “That is ridiculous; any reasonable person can see that.”
  2. “Well, maybe it’s true, but it’s irrelevant.”
  3. “Of course it’s true, and I’ve said so from the beginning.”

I have an acquaintance (actually, we’re “friends” on Facebook, but I’ve only met this person face-to-face once, so I’m not sure exactly where that puts us). Anyhow, I gained considerable respect for him from both our interactions when we met and from following his FB postings. We have similar political and social views; he is a respected and accomplished physician who loves the outdoors and leads a very active life, and I feel certain that if we lived closer together we could become actual (as opposed to just FB) friends.

Unlike me, he is not afraid to take strong positions on Facebook (I tend to keep those views somewhat private; not all my views are things I care to share with my business colleagues). And some of his postings, while not directly offensive to me, would be offensive to people whom I respect. For example, he has posted a number of attacks on chiropractic and homeopathy as fraud or quackery. This is a bit surprising to me; he is not a “conventional” MD and embraces much of what I would have called “complementary or alternative” medicine. When we met it was at a summit to discuss how lifestyle medicine could (and should) be incorporated into the medical treatment paradigm.

Anyhow, the point is that he considers quackery some of the things I believe to be useful tools. Knowing him, I think I’m safe in assuming the reason he believes as he does is that he thinks there is no scientific evidence to support them.

But what if his opinion, which he believes is based on science, turns out to be wrong? In the case of chiropractic, for instance, he’s simply poorly informed; there actually is a fairly large body of studies indicating benefits from chiropractic treatment for patients with back pain. But for homeopathy that’s not true; the only serious studies of homeopathy have shown no benefit over a placebo. So, based on the current science, he could be justified in dismissing homeopathy as unworthy of further consideration. (Note that this is not the same thing as rejecting something that hasn’t been studied simply because it conflicts with “conventional wisdom”; homeopathy has actually been studied.)

But my personal experience (and that of very smart people whom I know and respect) is otherwise. I’ve used homeopathic remedies and found them to be effective. Not always (but then, neither is anything else), but often enough to convince me of a benefit. And on pets, which would reduce the likelihood of a placebo effect.

Admittedly there is always the potential for a confirmation bias, but still, it makes me wonder if the gold standard of a scientific trial is appropriate to study everything.

Posted in General commentary on the world as I see it..., Religion and philosophy

Occam’s razor and space aliens

I think it’s important to be fair. Not just to people, but also to ideas. Critical thinking (of which I am a huge fan) requires that you carefully examine available information before deciding what’s “right.” (And by the way, that carries with it the requirement that contrary positions be sought out and thought through rather than simply being dismissed; if they turn out to be correct this must be incorporated and a new understanding emerges.) An honest scientist, philosopher or whatever (except for politicians; they are apparently exempt) would always say “I’ve examined as much information as I can, and this is what I believe to be true. But I could be wrong.”

But is there always merit in considering all sides equally?

I don’t think so. And let me illustrate with the pyramids. Or Stonehenge. Or the Easter Island statues. Pick one. For a long time (millennia?) we had no idea how any of these came into being. It seemed that the technology required to accomplish these feats was well beyond the culture and scientific acumen of the time. In the case of the pyramids, the sheer size of the individual stones made it look like there was no possible way they could have been quarried or transported, let alone stacked up into what we see today at Giza.

Enter space aliens. One theory bouncing around a few years back was that aliens built the pyramids for reasons known only to them. (There was even a movie starring Kurt Russell called Stargate that explored that notion. As a sci-fi geek myself, I have watched it several times. Cool CGI.) If you’d rather not talk about space aliens, substitute “advanced civilizations, since lost in the mists of time” or some such flowery verbiage and you’re at the same place.

OK then.

It is a possibility, I suppose; infinitesimally small maybe, but a possibility nonetheless. But here’s the problem: there’s not a shred of hard evidence that space aliens had anything to do with building the pyramids; it’s just that up until a few years ago no one had a plausible explanation of exactly how it was done. Not knowing “how” something happened does not mean that it will never be known, or that any explanation is as good as any other. It just means “we don’t know yet.” And with our pyramids, sure enough, archeologists have identified the quarries that the stones were most likely cut from, shown how they could have been moved to the Nile (rolling them over logs), then ferried by barge to Giza where they were then rolled to the pyramid site and set into place for tourists on camels to take selfies in front of today. If you have enough slaves at hand and don’t concern yourself with little things like human suffering, it’s amazing what can be accomplished.

So on to the core of this post: when considering theories of past events, should all explanations be given equal weight? Let’s say you are hosting a conference about Stonehenge and how it might have been built. And since I’m making this up, let’s further assume that there are 10 different theories of where the stones were quarried, how they were transported to the site of Stonehenge today, etc., and you have gathered archeologists, engineers and so forth to present their theories for consideration by the audience. Do you also have to give equal time to someone who maintains that space aliens used lasers to cut the stones and a tractor beam to float them from wherever they were quarried to where they stand today? I think the obvious answer is “Of course not; that would be just silly!” Without any real evidence to support their position, the space-alien crowd would be dismissed as cranks. And justifiably so.

Occam’s razor says (in current language) that the simplest answer is most likely the correct one. That’s been adapted for med students today as “when you hear hoofbeats, think of horses, not zebras.” Of course there is the possibility that the patient has some weird, one-in-a-million disease, but the more likely explanation is the flu. That concept loosely applies here: don’t go looking for space aliens just because we don’t currently have a clear explanation.

Posted in General commentary on the world as I see it..., Religion and philosophy

I didn’t know them.

Not too long ago one of the houses in my subdivision caught fire. I heard there was an electrical short in the garage someplace, a flame started and spread to the house. The family got out with some smoke inhalation and burns, but they survived and are expected to recover, even though the house and all their possessions were a total loss.

This event clearly changed that family’s life forever. Their memories will always be contextualized as either “before” or “after” the fire. Justifiably so; it’s a profoundly traumatic event to have your possessions destroyed and your life irrevocably changed, not to mention the potential for personal harm (in this case, for the most part fortunately avoided).

They were neighbors, but I had never met them; in fact, I could only vaguely recall the house they lived in, even though it was right on the path I take every night coming home from work. It’s a little more personal than what you read in the newspaper because it was so close to where we live, but it was still one of those events that you see, think about for a little, say “oh, how sad for them,” and move on. And I don’t mean to trivialize that process; it’s probably part of a defense mechanism we have to keep us from living in a state of constant anxiety.

But they had a dog.

More than one, actually. A couple of other pets as well, but it was the dogs that caught my attention. Their high school-age daughter suffered burns on her back, face and arms when she ran back into the house to try to rescue their dogs. She was partially successful; one dog survived but a couple of others died from the smoke. As I said above, she will recover but the pets she risked her life for were lost.

We have dogs. The MoLos (Moses and Lola) are part of our family. We have no human children, so we treat our dogs as surrogates. Oh, we’re not the “dress them up for Halloween” dog owners, and yes, we do realize they are not humans, and so on, but I would be devastated to have either of them die in a fire. Or get hit by a car. It’s the exact same event, but when I heard that this family lost several pets in the fire, it brought a completely different and much more personal awareness of their tragedy; it made me think about it more, and it became personal in a way it hadn’t been before.

It strikes me that this defense mechanism we have, of protecting ourselves from tragedies, works partly because we don’t see ourselves in the event. It happened “somewhere else” to “people I don’t know.” We think, “How sad for them, but it wasn’t me.” But when some part of the event strikes a chord with us, and we identify with it in a personal way, it has a deeper effect on us, and I think it awakens a more empathetic side of us. I’m not suggesting that we should abandon the defensive distance we create around others’ tragedies, but it doesn’t hurt every so often to allow ourselves to be more in touch.

I think the world would be a better place if we could do that just a little more often.

Posted in Family, General commentary on the world as I see it...