I really like the sentence “what you see is all there is” because it is implicated in so many of the cognitive errors that people make. I first heard it when I was talking to my brother about how two different people can draw wildly different conclusions based on what appears to be the same information. He replied with two points: it is very unlikely that two people will draw different conclusions if they consume and process the same information, and I should read the book “Thinking, Fast and Slow” by Daniel Kahneman.
I’m not certain about the first point, although it is probably true. He was absolutely correct with the second point. The book is a masterpiece from the very beginning. A heartfelt introduction lays out the landscape: the book covers the research that two psychologists collaborated on over a number of years. They developed a very close friendship, one in which each seemed to complement the other in terms of how they engaged with and thought about the world. The work they did together has been highly influential and led to Kahneman being awarded the 2002 Nobel Memorial Prize in Economics. He maintains that had Amos Tversky not died in 1996, they would have shared the honour. Unsurprisingly, “Thinking, Fast and Slow” is a heavy and demanding read. It is important and brilliantly revealing as it dissects how and why people are the way they are and why the world is the way it is.
“What you see is all there is” (WYSIATI) refers to the brain’s tendency to immediately process available sensory or perceptual information and to quickly and automatically make predictions based on ONLY this information. For example, you go to a baseball game on a rainy day and notice a lot of empty seats. When asked about the game later, you comment that the game was an absolute blow-out in favour of the home team (your team) and that the stands were practically empty because of the rain. You assume that people stayed away because of the weather and never consider that the game was against the worst team in the league and, even though your team won, it was uninteresting to watch because it was never competitive.
All you saw was the rain, so THAT must have been the reason why people didn’t show up. The truth is that people do not pay money to watch boring baseball games.
WYSIATI is the mental process of paying attention only to what is visible or what comes to mind when considering a decision. It is an “in the moment” and automatic phenomenon, and we are able to overcome it only when we take the time to consider what is not visible, what is not known, or what is known but currently not being brought to mind. Unless we take care to slow things down and work to create a mental placeholder for the things we cannot see or that do not immediately come to mind, we are going to move forward considering only what is right in front of us, both literally and metaphorically.
This phenomenon is likely the underlying cause of many other cognitive biases, and its effects can be seen in a variety of places, causing predictable errors in decision making.
A great example of one of these biases is survivorship bias, because it has a near one-to-one relationship with WYSIATI. If you have never heard of survivorship bias, consider the saying “history is written by the victors” and allow your brain to build the connection between the two. You may notice that in your mind’s eye you begin to see two groups of people, the winners of the war and the losers. The winners might even appear larger, clearer, and in vibrant colour, while the losers are smaller, blurry or lacking detail, and in black and white or greyscale. It is obvious from these images who is going to speak with more authority and clarity, and which group is going to have the volume turned down because their opinions are not considered worth listening to. The consequence of this is that one group gets to say everything while the other group doesn’t get to say much at all. If these happened to be the two sides of a war, it is obvious who gets to write down what happened and who needs to keep their mouths shut, accept their place, and remain grateful that they didn’t get killed when their side lost.
Regardless of what gets captured as “history” by the group that has been given a voice, the other group still exists. Their silence, or the censorship of their stories, is not the same thing as them not existing. They remain alive, and their version of events lives on in their brains, even if no one ever listens to or hears it. Pure or objective history is a single thing: a point-by-point record of what ACTUALLY occurred, regardless of the outcome. The textbooks may not contain a single sentence about it, being filled instead with the writings of the winning side, but this does not change reality at all. However, if you were given a test on the history of this specific event, you would be considered correct if you were to recite what is captured by the textbooks, and you would very likely lose marks for mentioning anything that was objective history but did not match what the winners chose to put to paper.
Looking at this example, it is easy to see that even though there is another side to the story, it is as if there wasn’t, because that side has never been shared. This is WYSIATI: because you have never been exposed to anything else, IN SPITE of the reality that something else did happen, you are almost certainly wrong about history, or at the very least, profoundly ill-informed.
How this example relates to survivorship bias is that the winners are the survivors and the losers are the ones who did not make it. Both groups did exist, but we never hear from the losers because they never get to voice their experiences. If they could tell their tales, they would enrich the narrative and balance things out. This second part is actually much more important because, without their stories, the narrative already seems completely balanced. It is only by becoming exposed to these stories that the lopsidedness of the initial history becomes obvious. But what you see is all there is, and since you only get to see the stuff from the people who survived, your “objective” perception of things is completely skewed.
The big example used to illustrate survivorship bias is that of the returning war planes during the Second World War. Everyone knew that both sides were losing a lot of planes to enemy pilots and anti-aircraft fire. They also knew that they could probably lower these numbers simply by adding more armour to certain areas of the plane. To this end, they set about collecting data on the damaged planes in the hope of uncovering a pattern of vulnerability. Their efforts did reveal a lot of interesting things. The outer portions of the wings sustained a lot of damage, as did the rear wings, and the areas behind both engines extending back and across the centre of the aircraft, including the fuselage. The initial thought was “reinforce these areas; add armour to reduce the chances that the plane goes down from enemy fire.”
This seems to make sense: the planes return with damage to some very distinct areas. Adding armour to these places is going to make the planes safer. It seems like the right thing to do.
What doesn’t come to mind initially are the areas with no damage. Look again at the pattern: the areas of impact are concentrated, as are the areas of no impact. There are very clear boundaries between them. This may or may not be significant.
Think about the history that the planes would write if they could. They’d tell you about getting shot by the enemy and limping home with holes all over their wings and body. Probably a close call for some of them. But no matter what else happened, the damage sustained was not sufficient to take them down. ALL of the ones that made it back made it back. If these planes make up 100% of the samples in the study, you are only going to learn about damage that was not catastrophic. Putting more armour on these planes might be helpful, but all of them made it back safely without any extra armour.
What really needed extra armour were the planes that did NOT make it back, because their impacts were mission-ending. But these planes never got to tell their story or write their history because they did not survive. If 50 planes went out and 25 returned, we can conclude that the armour on the 25 that did not make it back was not adequate to handle the impacts from the enemy. We do not know anything about the nature of those hits, and we learn nothing about them by looking at the planes that returned damaged. This is a case of WYSIATI and survivorship bias.
What we are not seeing, and need to see in order to solve the armour question, is the damage on the planes that did NOT make it back, because the damage was too severe for them to keep going.
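This selection effect can be made concrete with a small simulation. The sketch below is hypothetical: the aircraft sections, the number of hits per plane, and the rule that engine or cockpit hits bring a plane down are invented for illustration, not historical data. Planes take hits at random, only the survivors are available for inspection, and we then tally the damage on each group.

```python
import random

random.seed(42)

# Hypothetical aircraft sections; hits to the "fatal" ones down the plane.
SECTIONS = ["engine", "cockpit", "fuselage", "wing_tips", "tail"]
FATAL = {"engine", "cockpit"}

def fly_mission(hits_per_plane=5):
    """Simulate one plane taking random hits; return (survived, hit_sections)."""
    hits = [random.choice(SECTIONS) for _ in range(hits_per_plane)]
    survived = not any(h in FATAL for h in hits)
    return survived, hits

def tally(planes):
    """Count hits per section across a list of (survived, hits) records."""
    counts = {s: 0 for s in SECTIONS}
    for _, hits in planes:
        for h in hits:
            counts[h] += 1
    return counts

fleet = [fly_mission() for _ in range(10_000)]
returned = [p for p in fleet if p[0]]

survivor_damage = tally(returned)   # what the engineers get to inspect
all_damage = tally(fleet)           # what actually happened in the air

print("damage seen on returning planes:", survivor_damage)
print("damage across the whole fleet:  ", all_damage)
```

In this toy model the returning planes show zero engine and cockpit damage, not because those sections are never hit, but because every plane hit there fails to come back. Armouring the sections where the survivors are riddled with holes is exactly backwards; the clean sections are the vulnerable ones.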
Let this sink in if it doesn’t seem right or if you have never thought about things in this way before. Without taking the time to stop and really consider the problem, we make the error of assuming that the survivors DID something that allowed them to survive. But when we take some time to work the problem through again, we begin to open up to the possibility that maybe they survived because something was NOT done to them. This is exactly what was happening with the planes. When you look at the damage patterns of those that survived, you’ll notice a complete lack of damage to the engines, the front of the plane, and the fuselage right at the cockpit. So the planes that returned had working engines, intact forward-facing aerodynamic surfaces, and pilots who were still alive.
The story about where armour should go could not be read from the survivors UNLESS you took the time to consider where the damage was NOT, and to think about the consequences of damage to the unharmed areas.
Make no mistake about it, the other side was working with this information. In fact, by having no access to the survivors, they were able to focus all of their attention on the problem of how to destroy more planes, by looking at the ones they had destroyed and uncovering patterns. It was pretty clear to them: shoot the pilot, shoot the engines, or shoot any leading aerodynamic surface. Shooting the wings in general did not have the devastating effect of hitting these other locations.
There is a good side and a bad side to survivorship bias and to the phenomenon of WYSIATI. The good is that both can be counteracted, to varying degrees, by taking some time to think about what isn’t visible or what isn’t being said; to essentially give a voice to the vanquished. This is done by asking questions like “what do I not know that is very probable,” “of everything there is to know about this situation, what happened to the stuff that isn’t known,” “what percentage of the totality do I know,” and “to make the best decision, is what I know more valuable, or is what I don’t know more valuable?”
Asking these questions about the planes, you’ll get answers like “the planes that didn’t make it back may have been hit in other places,” “those planes crashed,” “50%” (assuming that 50 planes left and 25 returned), and “what we do not know is more valuable, because those planes got hit in the places that actually need improved armour.”
The bad thing about survivorship bias and WYSIATI is that they do not feel like anything OTHER than sound, rational decision making and analysis. The only reason you will know they exist is if you have learned about them, or if you are the type of thinker who defaults to knowing there is stuff that is unknown and that this stuff has a big impact on things. Both are learned behaviours, and even when we know them, both require effort to cultivate sufficient doubt to move us off of feeling certain and onto the task of figuring things out.
This effort-requiring quality means that most human beings will continue to have their thinking impacted by these biases, because most people are unwilling to put the effort into thinking about the invisible and the unknown. For most things and most people, the cost of being wrong is not all that high, and the effort required to counteract these biases is most often greater than the effort required to maintain an incorrect point of view. It is easier to justify why you are right than it is to put in the work to correct an error.
The brain does not make errors. It is a machine that operates in a purely logical way. When it doesn’t have accurate or sufficient information, the output it generates may not be correct. It can only do what it is programmed to do, and only with what it has access to.
This means that we need to use our attention to bring in the most accurate information we can, take sufficient care to interpret that information accurately while correcting errors quickly, and put the effort into surfacing or activating as much of the relevant information as possible. Experientially, this is going to feel like work, but there is a big payoff. In the short term it will mean improved decision making, and in the long term it will result in enhanced expertise and a boost in cognitive ease.
WYSIATI exists because the brain is not able to instantly bring to mind or activate everything it knows about a topic, nor is it able to activate everything at once. With enough time it will probably cycle through everything, but each new activation causes something else to fade away. All of it will serve as input, however, so given enough time, if your brain has the information stored, it will generate output that is correct. When we do not take enough time, we do not supply it with sufficient information to generate the correct answer.
When someone is an expert in a particular area, they are rarely impacted by WYSIATI, because the information they have stored in their brain is very accurate and they have developed an automatic and unconscious process for activating all of the needed information, giving the brain the input it needs.