Archive for September, 2016

The Road to Insanity


The road to insanity (**) is often paved with stuff that, while sometimes troubling, isn’t usually considered that bad. One generally doesn’t go insane quickly, or from one event. It’s typically a systematic process. Erosion. The person going insane doesn’t even see it happening. It just becomes apparent, at some point, that we are not ourselves.

Eventually we can look back and see some of the causes, and recognize that many of them were probably avoidable, yet we drove right into it in spite of what should have been better judgement. Were we already insane? Where were the original lines crossed? Were we preordained by virtue of our life’s baggage (or, for the religious, some journey God wants us to take)? And what of the others who helped take us on the journey? We think of insanity in an isolated, lonely way, yet it’s rarely accomplished without some “help” — innocent though it may be — from someone else.

Once you’ve gone insane, whatever that is, then what? There is no real undo button, nor trail of bread crumbs back. Life is additive. You may always be prone to a degree, carrying the baggage, just like an alcoholic is always an alcoholic. The best you can do is live with it and try to manage. That burden gets added to the rest of life’s weight we carry with us. You try not to make things worse, which requires a combination of judgement and discipline that is now difficult to assess.

Maybe you have a partner or friend in life who can help, but depending on how far down the rabbit hole you’ve gone, it may be time to consult a professional. That’s the best advice I have, which isn’t saying much since it probably means taking input from someone who’s battling her own issues, not to mention the tendency to lump people and circumstances into boxes of conventional wisdom built from previously defined, semi-similar cases that may well not apply (correlate) closely enough to provide a real remedy. It’s therefore better to figure most of it out yourself if you can. Big if, though…

(**) Insanity is an often misunderstood and misused word. It is a legal term, not a clinical one. So we’re all on the same page, know that I use it here in the broadest, most generalized (colloquial) sense.

 

Probability: Facts, Statistics, and Reality

What is reality? Statistics are based on facts. We can’t deny or ignore facts. But statistics aren’t always factual, or even meaningful.

I am a statistic and so are you. I drive a car. I eat. I buy things. I have an education. I don’t smoke. I was born at a place and time. I am programmed by my surroundings and DNA.

We use prior facts and statistics to reason with uncertainty, to get at probability, but we suck at it. In general we are really, really, staggeringly incompetent at processing all but the simplest statistical data in ways that produce meaningfully accurate evaluations. This is partly because it is much more difficult than it appears on the surface. We aren’t rigorous enough. But it’s also because our intuitive way of understanding complex relationships is…well…it’s too simplistic.

A Quick Lesson in Photography (It’s relevant, run with me)
In photography one learns about the relationship between the glass of a lens, the distance to the subject, and the focal point, which is where the image passing through the curved glass is in focus. The curvature, or varied thickness, of the glass bends light, distorting it and providing a means to get a reasonably coherent image onto a sensor (without this, light from all angles would hit the sensor, producing nothing more than a gray blur).

[Figure: basic lens diagram]

So a lens allows us to pick an area to capture, just like the position of your eye picks an area of the overall scene to project onto your retina through its lens (your brain further filters this into what it chooses to focus on). The focal point is where the image comes into focus after passing through the lens. The focal length is the focal point’s distance from the lens.

Modulating the distance between the lens and the subject changes the focal point.

[Figure: focal point diagram]

In short, there is a relationship between the shape of the lens, the distance to the subject, and the distance at which light reflecting off that subject, after passing through the lens, comes into focus. Five minutes playing with a magnifying glass gives one an intuitive understanding of this. Note: the eye changes focal length by changing the shape of its lens, whereas a camera does it by changing the distance between two (or more) lenses. All of these ingredients, and others to follow below, interact with one another to vary the result.
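For the mathematically inclined, this relationship is captured by the standard thin lens approximation: 1/f = 1/(distance to subject) + 1/(distance to image). A minimal sketch in Python (the 50mm focal length and the subject distances below are made-up illustrative numbers, not taken from any particular lens):

    # Thin lens approximation: 1/f = 1/d_subject + 1/d_image
    # Given a focal length and a subject distance, solve for where the
    # image comes into focus behind the lens.
    def image_distance(focal_length_mm, subject_distance_mm):
        """Distance behind the lens at which the subject is in focus."""
        return 1.0 / (1.0 / focal_length_mm - 1.0 / subject_distance_mm)

    # Hypothetical 50mm lens, subject at 1m and then at 3m:
    for subject_mm in (1000, 3000):
        print(f"subject at {subject_mm} mm -> image plane at "
              f"{image_distance(50, subject_mm):.1f} mm behind the lens")

Moving the subject from 1m to 3m shifts the in-focus image plane from roughly 52.6mm to roughly 50.8mm behind the lens, which is exactly why changing the subject distance changes where focus falls.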

Simple? Well, it’s not that simple. There is more. (Hang in there.)

Around 1000 A.D., the Arab physicist Ibn al-Haytham penned the first known accurate and comprehensive description of how light is refracted by shaped glass. This led to the development of a myriad of mechanical devices that augment human vision. Through a combination of lenses, we can “zoom in” on specific parts of a scene, bringing things far away closer to us for better examination.

[Figure: zoom lens and microscope]

In astronomy, however, the concept of “zooming in” isn’t as significant, even though it would be cool if we could zoom way in on distant objects. At those great distances our earthly lens ratios don’t accomplish much. And we can’t change the ratios too much due to diffraction. Diffraction, the spreading or scattering of light as it passes through or around something (or is reflected), is always present. Diffraction essentially acts to defocus the image, which means that as the magnification or zoom increases, the sharpness and clarity of the image break down. This defocusing is inversely proportional to the diameter of the lens (and depends on optical quality as well), but it is always an issue. It’s one reason why the best telescopes have a large diameter.

[Figure: diffraction diagram]

[Figure: diffraction example images at different apertures]

Larger f numbers indicate smaller openings. Intuitively one can see that zooming in to any of these images would cause an increased breakdown in apparent focus.
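To put a rough number on that breakdown, the classic Rayleigh criterion ties the smallest resolvable detail to the aperture diameter. A back-of-the-envelope sketch (the wavelength and the aperture sizes are chosen purely for illustration):

    import math

    # Rayleigh criterion: the angular resolution limit of a circular aperture
    # is roughly theta = 1.22 * wavelength / diameter (in radians).
    def diffraction_limit_arcsec(wavelength_nm, aperture_mm):
        theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
        return math.degrees(theta_rad) * 3600  # radians -> arcseconds

    # Green light (~550 nm) through small and large openings:
    for aperture in (5, 50, 500):  # millimetres
        limit = diffraction_limit_arcsec(550, aperture)
        print(f"{aperture:>4} mm aperture -> ~{limit:.2f} arcsecond limit")

The smaller the opening, the coarser the finest detail that can be resolved, which is the same reason stopping a camera lens way down softens the image and why the best telescopes have large diameters.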

 

The other reason for wider lenses is that they are inherently able to gather more light, which is pretty important for looking at faraway stellar objects, some of which are so faint they cannot be seen by human eyes or conventional optics.

[Figure: telescope gathering light from faint stars]

 

Gathering more light is also useful in earthbound photography. More light reaching the sensor allows it to record the scene more quickly, which makes it easier to freeze the motion of moving subjects. Aesthetically this may or may not be desired. Note how these different capture speeds reflect different interpretations of “reality.”

[Figure: waterfall captured at different shutter speeds]

As capture speed slows, the motion of the water is revealed in different ways. The image on the left shows the exact position at an instant in time. The rightmost image shows the average position across a period of time.

[Figure: pinwheel captured at different shutter speeds]

The actual speed of the pinwheel was the same in all instances.

 

Wider lenses come at a cost beyond the expense of manufacturing them. As we let more light in and reduce diffraction, we simultaneously narrow the range of distances from the subject to the camera that will appear in focus, because light is hitting the sensor from wider angles off dead center.

[Figure: basic depth-of-field diagram]

 

Photographers refer to the range of distances around the point of focus that will appear acceptably sharp as “depth of field.” With larger apertures and lenses, more precision in focusing is required, which is generally manageable, but the impact it has on the resulting photograph can be significant.
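The trade-off can be made concrete with the common hyperfocal-distance approximation for depth of field. A rough sketch (the 50mm focal length, the f-numbers, and the 0.03mm circle-of-confusion value are illustrative assumptions, not measurements):

    def dof_limits(focal_mm, f_number, subject_mm, coc_mm=0.03):
        """Near and far limits of acceptable focus, using the common
        hyperfocal-distance approximation."""
        h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
        near = (h * subject_mm) / (h + (subject_mm - focal_mm))
        if h > subject_mm - focal_mm:
            far = (h * subject_mm) / (h - (subject_mm - focal_mm))
        else:
            far = float("inf")  # everything beyond the subject stays acceptably sharp
        return near, far

    # Hypothetical 50mm lens, subject at 2m, wide open vs. stopped down:
    for f_number in (1.8, 8, 16):
        near, far = dof_limits(50, f_number, 2000)
        print(f"f/{f_number}: acceptably sharp from ~{near/1000:.2f} m to ~{far/1000:.2f} m")

Under those assumptions, the zone of acceptable sharpness grows from roughly 17cm at f/1.8 to well over a metre at f/16, which is the trade being made every time the aperture is opened up to gather more light.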

[Figure: aperture and depth-of-field diagram]

 

More or less of it may be desired, depending upon the look one wants. Like capture speed, depth of field also serves to depict different views of “reality.”

[Figure: depth-of-field example, light bulbs]

[Figure: depth-of-field example, girl and boy]

 

Depth of field is a very useful consideration for pulling the attention of the observer to a specific part of an image.

[Figure: depth-of-field example]

[Figure: depth-of-field example, bass]

One can look broadly at an image and not see the distortions in the domains of time and clarity, but careful examination will reveal them. They can’t help but be there. We can capture a scene in a way that emphasizes or de-emphasizes certain of those, but in the immortal words of Scotty, “You can’t defy the laws of physics.”

We trust what we see with our eyes too much. No doubt the reader has now surmised there is a matrix of trade-offs in what we “see” when we capture an image based on these parameters. Photographers make choices, intentionally and unintentionally, that affect the outcome. Observers usually take the photo at face value, with no regard for the actual reality of the scene, instead letting the photo determine the reality we believe existed in the moment.


 

Now, apply that same thinking to statistics and one can begin to see why it is so commonly held that statistics can be used to support nearly any conclusion. Unfortunately what often happens is that they are used too generally. Reputable (*) sources say things like…

  • 10% of our brains are used
  • 50% of marriages end in divorce
  • we share 99% of the genetic code
  • left handers die an average of 9 years younger than right handers
  • 18% of social media users use snapchat
  • 77 cent wage gap between women and men
  • 20% of women are sexually assaulted before they leave college
  • 0.0024% of deaths are from electrocution
  • men think about sex every seven seconds
  • The religion of Islam is growing at a rate of 2.13% per year
  • spousal abuse skyrockets on Super Bowl Sunday
  • The average household income in the U.S. is $70,000, or…
  • Any of the stats that show that the top X% earn [staggeringly large number]%
  • 80% of convicted sex offenders repeat the crime
  • 50% of 18-24 year-olds go on Facebook when they wake up
  • 30.5% of all desktop search traffic between 6/10/16 and 7/7/16 came from searches with the term “pokemon” in them
  • (my personal favorite – LOL) 90% of statistics can be used to say anything 50% of the time

There are so many more I could use: Crime Rates, School Quality, Unemployment, Mortality, Cost of Living, Obesity, Literacy, Birth Rate, Gun Control, Teen Pregnancy, on and on…

They are all generally accepted as true, yet none are precise across the board (stay tuned for a future blog post delving into the difference between precision and accuracy). There are nuances in specific ages, cultures, geography, time, education, temperament, weather, situations, and any of dozens of other variables that come into play. They are generalizations. Averages. Broad snapshots, or maybe narrow ones. Very useful as shortcuts to learning and not basing too much on mere assumptions, but also misleading in some ways. Many sound plausible. Some we believe. Some we would question. Some are subject to varied interpretations. Some are patently false. I’m not just referring to the inevitable weird exceptions that are in the noise of any statistical model. Noise is random. I’m talking about correlatable things with some significance that are missed, ignored, or misinterpreted.

For instance, Psychology Today based an article on the “10% of our brains are used” statistic. We’re easy prey for this because we’ve been hearing variations on that number most of our lives. None of them account for what the data really says, which is that a small portion of our brains is used at any given moment, in a given activity, or for a given purpose. Other activities and functions require other parts of our brains. The real number is significantly higher than 10% when all of this is factored in.

I can, to some extent, debunk every one of the above listed items with rigorous research, but then who says my facts are really correct? More significantly, is the interpretation of the data correct? For instance, the divorce rate of 50% seems pretty concrete. We have records to prove it, so there is no questioning the data, right?

If that’s the case, then why are the quoted stats sometimes so “approximate?” The APA website says “40 to 50%.” That’s a massive difference for something that seems so concrete. Further, we all seem to believe it, or do we? We accept it. But…most of us get married anyway!!! If you knew that you had a 50% chance of getting struck by a bus while crossing a street, would you dare to cross in spite of it? A little thinking and reading quickly leads us to realize there are tons of potential nuances to a statistic like this. So many it isn’t practical to try to list them all.

But there are also fundamental issues that skew the data. How is it calculated in the first place? Often the number is reached by simply comparing divorce filings to marriages over a given span of time. Make sense? It does on the surface, but it doesn’t necessarily tell us much, because over time things change. Okay, well, what if we compare divorce and marriage rates in a given year? That’s a pretty short span of time and would seem to be a suitable snapshot. So, let me get this right…you want to add up all of the divorces in a year, no matter how many years those people have been married, with no consideration of when they were married or under what circumstances, and compare them to marriages this year? Doesn’t really work. As with photography, anytime you capture something moving in a static “image,” or focus on a part of it, you’re forced to make decisions about how to present it to meaningfully convey the result. If anything, we should be comparing all people who are currently married to those getting divorced in a year, but it’s almost never done that way.
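To see how much the choice of denominator matters, here is a toy sketch with invented, round numbers (they are not real marriage or divorce figures), comparing the usual “divorces this year versus marriages this year” ratio against divorces measured against everyone who is currently married:

    # All figures below are invented round numbers, for illustration only.
    marriages_this_year = 2_000_000
    divorces_this_year = 1_000_000
    currently_married_couples = 60_000_000

    naive_rate = divorces_this_year / marriages_this_year
    per_married_couple = divorces_this_year / currently_married_couples

    print(f"divorces / marriages this year: {naive_rate:.0%}")          # the oft-quoted "50%"
    print(f"divorces / all married couples: {per_married_couple:.1%}")  # per year

Both numbers come from the same records, but they answer very different questions, and only one of them tends to get quoted.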

Yet we buy in. We’ve all heard the statistic so many times that it rings true to us, to the extent that nowadays any data showing something different would be called into question. We have become biased, mostly (in this case) by our own inability to effectively pick the data to look at, or in interpreting what it really means. And yet most of us decide to roll the dice and get married anyway.

We often apply statistics to provide probabilities about how the future will turn out. Seems simple enough. If 20% of children outgrow a childhood allergy to peanuts, then you could assume your peanut-allergic child has a 20% chance of doing the same. It’s also easy to see there could be many subsets that introduce other variables. Is the percentage the same across races? How about across a range of medical treatments, or diets, or subtle variations in DNA? That’s just scratching the surface – the basics.

Consider this test. Two groups of people engage in a coin flipping exercise. Group A looks for the pattern H-T-H (Heads, Tails, Heads), and Group B looks for the pattern H-T-T (Heads, Tails, Tails). Each member of each group records how many coin tosses it takes before the desired pattern occurs. The two groups then average their respective results. Do you think the average number of tosses will be the same for both groups, or will one pattern tend to appear sooner on average? (Don’t worry, most people, even very smart ones, get this wrong.)

It turns out it’s statistically provable that the average number of tosses to reach the pattern H-T-H is 10, while the average number to reach H-T-T is only 8. How could this be? It butts up against our common sense on the matter. If anything we would think that H-T-H would be easier to create, since it overlaps itself. Throw a T between any two occurrences of H, and voilà!

But think about what happens the first time you get an H followed by a T. At that point, either of the two results can now occur.

Group A: you’re looking for H-T-H, and you’ve seen H-T for the first time. If the next toss is H, you’re done. If it’s T, you’re back to square one: since the last two tosses were T-T you now need the full H-T-H.

Group B: you’re looking for H-T-T, and you’ve seen H-T for the first time. If the next toss is T, you’re done. If it’s H, this is clearly a setback; however, it’s a minor one since you now have the H and only need -T-T. If the next toss is H, this makes your situation no worse, whereas T makes it better, and so on. Even when you lose you’re 1/3 of the way to winning.

Put another way, in Group B, the first H that you see takes you 1/3 of the way, and from that point on you never have to start from scratch. This is not true in Group A, where an H-T-T erases all the progress you’ve made.
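If you don’t trust the argument, it’s easy to check by brute force. A quick Monte Carlo sketch of the two groups:

    import random

    def tosses_until(pattern):
        """Flip a fair coin until `pattern` appears; return the number of flips."""
        history = ""
        while not history.endswith(pattern):
            history += random.choice("HT")
        return len(history)

    def average_tosses(pattern, trials=100_000):
        return sum(tosses_until(pattern) for _ in range(trials)) / trials

    print(f"Group A, H-T-H: ~{average_tosses('HTH'):.2f} tosses on average (theory says 10)")
    print(f"Group B, H-T-T: ~{average_tosses('HTT'):.2f} tosses on average (theory says 8)")

Run it a few times and the averages settle right around 10 and 8.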

Every day, people who do not understand these types of relationships write articles and make declarations. We read them, and we usually believe them so long as they sound credible (often through citing studies and statistics) and/or match sufficiently with our common sense or already held beliefs.

Bipolar Disorder affects 3% to 5% of the population according to Gary Sachs, director of the Bipolar Clinic at Massachusetts General Hospital. It affects 2.6% of the population over 18 years old according to WebMD. The U.S. National Library of Medicine has been summarized as saying that it occurs more in women. The DBSA (Depression and Bipolar Support Alliance) says it occurs equally in women and men, but that women tend to cycle faster. Closer examination, however, reveals that Bipolar II (a predominance on the depressive side) occurs more in women. These are basic examples of statistics being in general agreement, but presented in ways that show differences.

What about clinical testing? What if there is a test that is 99% accurate? When it returns positive, you could assume that there is a 99% chance the positive result is true, no?

Two problems…

  1. I have pointed out that there is almost always a higher level of detail or granularity that can be considered to further classify and characterize the risks of different individuals.
  2. It also depends on how common or rare the condition is. Suppose the condition affects 1 of every 10,000 people. If we sample a million subjects, about 100 of them actually have the condition, and the test, being right 99% of the time, will correctly flag 99 of them. Amongst the 999,900 who do not have the condition, the test is also right 99% of the time, which means it will improperly flag about 9,999 of them as having it. So of the roughly 10,098 positive results, only 99 are genuine: less than 1% of the people who test positive actually have the condition!
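Here is that arithmetic spelled out, using the same numbers as the example above (1-in-10,000 prevalence, a 99% accurate test, a million subjects):

    population = 1_000_000
    prevalence = 1 / 10_000   # 1 in 10,000 people have the condition
    accuracy = 0.99           # the test is right 99% of the time, both ways

    have_it = population * prevalence                          # 100 people
    true_positives = have_it * accuracy                        # 99 correctly flagged
    false_positives = (population - have_it) * (1 - accuracy)  # 9,999 wrongly flagged

    share_genuine = true_positives / (true_positives + false_positives)
    print(f"total positive results: {true_positives + false_positives:,.0f}")
    print(f"chance a positive result is genuine: {share_genuine:.2%}")  # ~0.98%, not 99%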

Consider the case of Sally Clark. She was convicted of murdering her two infant sons, one of whom died in 1996 and the other in 1998. The defense claimed it was likely SIDS that took them, but the prosecution, through expert witnesses, won, in part by showing that there was only a 1 in 73,000,000 chance that SIDS would affect two children in a single household.

Wrong, in two ways.

  1. The calculation was faulty. It was based on the probability of SIDS affecting a child of affluent non-smoking parents being 1 in 8,543, and therefore the odds of two children suffering from it being 1 in 73,000,000 (8,543 x 8,543). But squaring the number assumes the two deaths are independent, and it doesn’t allow for the fact that we don’t completely understand what causes SIDS. There are clearly unknown environmental and/or biological (hereditary) factors. If a child dies from SIDS, it’s quite likely that some of these unknown factors are in play, and they may increase the likelihood of SIDS affecting other children in the family by a factor of 5 to 10.
  2. Even if 1 in 73,000,000 were correct, the statistical analysis suffers from the same flaw as the medical example above. There are two competing explanations that must be weighed against each other: 1) Sally was innocent (which is likely, considering most mothers don’t murder two children) and suffered an incredibly unlikely event; or 2) she was guilty of a double murder, which is itself incredibly unlikely. Quoting only the first probability tells you nothing until you compare it to the second.

Figuring that out statistically is much more complicated, and in fact there are significant factors that could have and should have been considered from the get-go, such as the detail that boys are more likely than girls to suffer from SIDS. She was eventually released from prison after her second appeal, where a more careful breakdown put the odds in favor of double SIDS over double murder at somewhere between 4.5:1 and 9:1. A natural explanation was therefore the more likely one, and the statistics were hardly conclusive enough for a life sentence based on weak circumstantial evidence.
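The underlying logic of the correction is a simple comparison of the two rare explanations for the same pair of deaths. A sketch with placeholder probabilities (the values below are invented for illustration; they are not the figures used in court):

    # Illustrative only: the point is the comparison, not these particular values.
    p_double_sids = 1 / 150_000       # hypothetical chance of two SIDS deaths in one family
    p_double_murder = 1 / 1_000_000   # hypothetical chance of a double infant murder

    likelihood_ratio = p_double_sids / p_double_murder
    print(f"Given two deaths, natural causes come out ~{likelihood_ratio:.1f}x more likely "
          "than double murder under these assumptions")

Quoting the 1 in 73,000,000 figure on its own, without the matching figure for the rival explanation, is what made the prosecution’s statistic so misleading.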

We can be pretty good at questioning conclusions from people who we believe are unlikely to have competence in a particular area. If the court’s expert witness had tried to present arguments about diagnosing an automotive problem, no doubt there would have been suspicions. But this expert was in the field, and presumed to understand the data upon which he was drawing conclusions. Yet he got it badly wrong, and nobody questioned it.

Sally, after spending several years in prison as a convicted murderer of her two children (probably not the most enjoyable of prison stays, as far as prison stays go), died of acute alcohol poisoning a few years after her release. Her husband said she was never the same after that experience. There is more nuance and information to this story, but the takeaway here is that facts and statistics, as used, severely mischaracterized the likelihood of a natural death.

Data and stats should be subject to much more scrutiny than they commonly are. But we need them. We need the shortcuts, assumptions, and predictions they provide. It’s a shame we so often get it wrong, because it skews our perceptions and causes faulty decisions. But even where the data and conclusions are relatively good, which as laypeople we often have no way of knowing for sure, we often fail to see into the exceptions.

A whole pile of data can suggest that something is generally true, yet in certain circumstances it isn’t. We don’t or can’t always quantify those circumstances, but we know they exist. Marriage is an example where there appears to be a lot of opportunity. Can we study a detailed matrix of characteristics that will lead us toward better outcomes based on empirical results? I suggest that the big dating sites have an opportunity to do this with the big data they could be collecting. Employers already do it to some extent.

We can do better and learn more, but until then, you have to ask…

What about the exceptions? Now you have to decide. Do you trust what the data is telling you? It’s a lot more tempting to trust statistics at face value when they appear to align with what you already believe. But are you really looking at it the right way? Does it really apply to you, in your unique circumstances?

Is it a big decision? How does the upside balance against the downside, and how does that impact how you think about the stats? Here is where the dreaded lizard brain of fear steps in. We look at the facts through the filter of the circumstances we are in and interpret them accordingly, or we find another’s interpretation that resonates with us, in some cases because it stimulates a fear we already have. Fear is so powerful. It can even make us interpret unmitigated facts in a misleading way.

Sometimes you have to trust yourself. Nobody ever got anywhere important only by listening to others, and that includes the stats, the so-called experts, the masses, and common sense. That stuff is useful up to a point, but somewhere along the journey you have to pave your own way, lest you become a different kind of statistic: the person who is eminently forgettable (ironic oxymoron), not happy or satisfied.

Against All Odds

Invariably someone comes along who defies what others believe. The journey that brought them there can later be quantified. We just need the right lens, and some effort and skill, to be able to see it. It’s much easier, however, to write it off as mystery or destiny. We look on from the outside with wonder, not always recognizing there is a quantifiable method to the madness. We tranquilize ourselves with the belief that the person was lucky, or blessed.

Few achieve the big goal, or true happiness, solely by following others or common sense. Each of those success stories has a foundation of paving one’s own way.

So much of the past five years of this blog has been about the willingness to make a leap of faith. Not in some external force, but in yourself. As I continue to wrap things up and finish off all of the work that has been started, I still find I am frustrated and even hurt by how little progress has been made. I marvel at those who do it better than me, and hurt for those who continue to find excuses to wallow in the status quo. That’s not what life is about. But then, who am I to say? I’m just a guy with a camera and a blog.

Do not put your faith in what statistics say until you have carefully considered what they do not say. ~William W. Watt

 

(*) – I can substantiate every claim and example used here, along with my rebuttals, if necessary (just ask). I chose not to burden the reader with too many links and wild goose chases in the interest of focusing on the heart of the message.

 

Toy

[Image: toy]

Brownsville, KY, circa 1985. During a visit with a woman I would eventually marry, I remember watching an actual game of cat and mouse from the door to the back porch of her house. In those moments it struck me that the common definitions of “cat and mouse” are subtly wrong. In every humanly recognizable way, it appeared as if the cat was just playing with the mouse in the same way – with all of the same movements and actions – as would happen with a small ball of yarn. It wasn’t violent. It really looked playful.

But to the mouse…it wasn’t a game at all. Mice are more fragile than they appear. Eventually that little guy was too beat up to continue. The game was over.  With the mouse dead or unconscious, the cat lost interest.

I was too ignorant (naïve) at the time to understand what was really happening.

We eventually divorced, for fairly typical reasons that are beyond the scope of this writing. I will simply say there was never any gamesmanship to our relationship. Good or bad, it was always as it appeared, and I usually knew where I stood. That was in contrast to some other relationships I’ve had (not just romantic ones), as well as many I’ve observed.

The cat didn’t appear to be aware of the harm being inflicted on the mouse as it became gradually more sluggish and vulnerable. Later research informed me this “play” is actually the way cats wear down or stun their prey so they can move in for the kill (usually by snapping vertebrae in the neck area) without risking getting clawed or bitten around their face or eyes. In this case, based on the fact that the cat didn’t eat the mouse (at least not right away), it still seemed playful to me. Nevertheless, it’s just what cats do as they carry out their biological functions.

Humans sometimes do it, too. Manipulation amongst humans is often a two-way street, and not necessarily bad. Of course, when there is an imbalance of power it gets more complicated, and that’s where people tend to get hurt. While the intent may not be bad in most cases, the fact that we do sometimes use others to derive our own enjoyment or validation is, nevertheless, tantamount to treating them like a toy – like the mouse. Upon being faced with this, we may feel guilty. What does the guilt do? It’s just a feeling, right? Unlike the cat, in the final gutless act, we don’t even stick it out to finish the job. The mouse is left on his own. We may hope he can pick himself up and find a way to carry on as we move on, but it isn’t our concern now. It’s pretty cruel, yet it’s just what we humans are capable of as we carry out our biological functions.

 

 


