What would be some fact that, while true, could be told in a context or way that is misinforming or makes the other person draw incorrect conclusions?
When you think about data, it actually gets really scary, really quickly. I have a Master’s in Data Analytics.
First, data is “collected.”
-
So, a natural question is “Who are they collecting data from?”
-
Typically it’s a sample of a population - meant to be representative of that population, which is nice and all.
-
But if you dig deeper, you have to ask “Who is taking time out of their day to answer questions?” “How are they asked?” “Why haven’t I ever been asked?” “Would I even want to give up my time to respond to a question from a stranger?”
-
So then who is being asked? And perhaps more importantly, who has time to answer?
-
Spoiler alert: typically it’s people who think their opinions are very important. Do you know people like that? Would you trust the things they claim are facts?
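To put a number on that, here’s a toy simulation of the self-selection problem (the rates are completely made up for illustration - this is a sketch of the effect, not any real poll’s methodology):

```python
import random

random.seed(42)

# Hypothetical population: 30% hold opinion X, 70% don't.
population = [random.random() < 0.30 for _ in range(100_000)]

# Made-up assumption: people who hold opinion X are three times as
# likely to bother answering a stranger's survey (15% vs. 5%).
def responds(holds_x):
    return random.random() < (0.15 if holds_x else 0.05)

respondents = [p for p in population if responds(p)]

print(f"True share holding opinion X: {sum(population) / len(population):.1%}")    # ~30%
print(f"Share among respondents:      {sum(respondents) / len(respondents):.1%}")  # ~56%
```

Nobody lied and nothing was rigged - the “data” is still wildly off, purely because of who chose to answer.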
-
Do the data collectors know what demographic an answer represents? An important part of data collection is anonymity - knowing certain things about the respondent could skew the data.
-
Are you being represented in the “data”? Would you even know if you were or weren’t?
-
And what happens if respondents lie? Would the data collector have any idea?
And that’s just collecting the data - the first step in the process of collecting data, extracting information, and creating knowledge.
Next is “cleaning” the data.
-
When data is collected it’s messy.
-
Some data points are just deleted - for instance, anything flagged as an outlier. There’s a formula for deciding what counts as an outlier, and both the formula and the points it flags should be re-examined constantly. Are they?
-
How is the data being cleaned? How much will it change the answers?
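To make that “formula” concrete: one common convention (just one of many - I’m not claiming any particular team uses it) is the 1.5×IQR rule, which flags anything far outside the middle 50% of the data. A minimal sketch, with fabricated numbers, of how much a single dropped point can move the answer:

```python
import statistics

def iqr_outliers(values):
    """Flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] - the common 1.5*IQR rule."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return {v for v in values if v < low or v > high}

# Nine ordinary incomes and one enormous outlier (all numbers invented):
incomes = [32_000, 35_000, 41_000, 45_000, 48_000,
           52_000, 55_000, 60_000, 75_000, 2_000_000]

dropped = iqr_outliers(incomes)
kept = [v for v in incomes if v not in dropped]

print("Dropped as outliers:", sorted(dropped))                    # [2000000]
print(f"Mean before cleaning: {statistics.mean(incomes):,.0f}")   # 244,300
print(f"Mean after cleaning:  {statistics.mean(kept):,.0f}")      # ~49,222
```

Same raw data, two very different “average incomes” - and whether dropping that point was the right call depends entirely on the question being asked.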
-
Between what systems is the data transferred? Are they state-of-the-art, or legacy systems that no one currently alive understands?
-
Do the people analyzing the data know how this works?
So then, after the data is put through many unknown processes, you’re left with a set of data to analyze.
-
How is it being analyzed? Is the analyst creating a new methodology for every new set of data, or running it through a system that someone else built eons ago?
-
How often are these models audited? You’d need a group of people who understand the code, the data, the model, and the ever-changing nature of the data itself.
Then you have outside forces, and this might be the scariest of all.
-
The best way to describe this is to tell a story: In the 2016 presidential race, Hillary Clinton and Donald Trump were the Democratic and Republican nominees. There was a lot of tension, but basically everyone on the left could not fathom people voting for Trump. (In 2023 this seems outrageous, but it was a real blind spot at the time).
-
Nearly every media outlet was predicting a landslide victory for Clinton. But then, as I’m sure we all know, the unbelievable happened: Trump won the Electoral College. Why didn’t the data predict that?
-
It turns out one big element was purposeful skewing of the results. There was such media outrage about Trump that no one wanted to be the source that predicted a Trump victory, for fear of being labeled a Trump supporter or QAnon fear-monger, so a lot of them just changed the results.
-
Let me say that again: they changed their own findings on purpose, out of fear of what would happen to them. And because the real results weren’t reported, a lot of people who probably would’ve voted for Clinton didn’t go to the polls.
-
And then, if you can believe it, the same thing happened in 2020. Even though Biden ultimately won, the predicted stats were way off. Again, according to the data, Biden should have defeated Trump comfortably, but it was one of the closest presidential races in history. In fact, many believe that if not for Covid, Trump would have won. And this, at least a little, contributed to the Capitol riot.