Friday, October 16, 2015

Three sources of political bias / polarisation

It is puzzling how smart people can diverge so much on political issues, especially when they lay out their full lines of reasoning, covering most of the disputed arguments.

How come?

Below are three thinking dynamics that, together, can explain much of it.

1) Starting point bias.
Assume that both arguments have merit.

The right: Incentives are crucial; lower taxes encourage investment, etc. This is inarguable, but there are arguments over how far it can go (nobody offers zero tax...).

The left: Welfare is crucial, and we need taxes to fund it. Again inarguable; only the extent is controversial (nobody suggests building beachside mansions for the poor).

But where do you start from?

A leftist will start from the importance of welfare. Where does the money come from? That is a problem to be dealt with, etc., etc.

A rightist will start from the importance of encouraging commerce and not strangling the economy. Welfare? That is a problem to be dealt with.

You can see how, even with similar beliefs, just changing the starting point of one's thinking can have a huge effect. Especially when some issues are blurred and subjective.

2) The cumulative effect of biases.
One of the biggest secrets of stupidity is that a slight bias, compounded 10-50 times, can in itself become enormous.

If you slightly overestimate one part of a discussion, but do it over multiple parts and issues, the cumulative effect can be enormous.

Let's see how the fate of the poor can easily go 10x either way.
To gauge how much the poor are suffering, you go in stages:
Where is the poverty line? How much are the poor themselves responsible for their fate? How effective is government aid? To what degree does welfare improve things (helping people stay in the workforce, helping the children of the poor stay in society, etc.) or degrade them (maybe welfare encourages not working? etc., etc.)? How efficient is welfare (i.e. how much is wasted before it arrives at the poor themselves)? How much is our moral obligation? And multiple other questions.

Eventually, a rational answer to the value of welfare is the combination (a kind of numerical product) of all the questions above and similar ones.

If one errs on each question by, say, 20%, the multiplied effect over 10 questions will be 1.2^10 ≈ 6.19.

Basically, deviating to either side on those 10 questions by a mere 20% will turn you into either an anti-welfare fanatic or an unquestioning pro-welfare fanatic. I am talking here about the rational logic, not about emotions or moral tendencies!
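The compounding arithmetic above can be checked in a few lines (a minimal sketch; the 20% bias and 10 questions are the post's illustrative numbers, not data):

```python
# Sketch of the compounding-bias arithmetic from the post.
# A uniform bias b over n multiplied sub-questions compounds to (1 + b)^n.

def compounded_bias(per_question_bias: float, n_questions: int) -> float:
    """Overall multiplicative effect of a uniform per-question bias."""
    return (1.0 + per_question_bias) ** n_questions

pro = compounded_bias(0.20, 10)    # each of 10 answers tilted 20% pro-welfare
anti = 1.0 / pro                   # the same tilt in the opposite direction

print(round(pro, 2))   # ~6.19: welfare looks six times more valuable
print(round(anti, 2))  # ~0.16: welfare looks six times less valuable
```

A mere 20% lean per question thus separates the two readers' conclusions by a factor of roughly 38 (6.19 / 0.16), which is the whole point.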

3) The complexity of the involved logic, and circularities
There is no predefined way to decide a political question.
There are multiple issues to decide upon, and many of them are interrelated.
It is even common to distrust some sources based on other, peripheral opinions we hold.
This complexity opens the door to endless bias.

This complexity and circularity will be familiar if you recall arguments you have had with people at the opposite end of the political spectrum from yourself.
You think you will be done after discussing aspect A, but it turns out to be related to subject B, which rests on a bag of facts C, etc., etc.

If you were stupid and persistent enough to carry on long enough, you know exactly what I am talking about.

Thanks for reading. It has been exhausting.

Tuesday, September 22, 2015

The futility of "intelligent" self-help books

Many authors have interesting knowledge to write about.

Unfortunately, they tend to mix in self-help advice on applying their more interesting ideas.

It usually feels like high school seminar work, and is useless.

Self-help, i.e. actually carrying out the great ideas you heard about, is a totally different realm from the ideas themselves.

A book might be great about focus, or meditation, or money management, etc., etc.

Once the author starts giving you practical advice, you regret having picked up the book to begin with.

Stale, grandmotherly ideas: start now. Start small. Make a plan. Etc., etc.

Grandma was very cool. But remember that writing advice books that are not painful to read is a profession of its own, and you do not have it.

Monday, September 21, 2015

The negative bias in the recent replication literature

The usual "publication bias" is reversed.

It used to be that positive results were likelier to get published, yielding a literature biased towards positive results.

Nowadays, publishing negative replication results is hugely popular. It is easier to publish, and many times the original authors do not even get the privilege of responding.

The social media reaction is very supportive, whereas the original authors, even when their arguments are sensible, are harshly scorned.

Publication bias now favors "positive failures": you are likely to get published if you can find a headline-grabbing "failure to replicate".

Monday, September 14, 2015

A continuous world and the death of all intuitions

Saving lives is obligatory. Helping refugees in need is a must.
"At any cost" is commonly heard.

But our modern world is different.

Policy is not binary anymore. It is continuous.
Spending to save a life can be capped at $1M, $10M, or $1B.

We all agree that we cannot spend a billion dollars to save a life. But where do we draw the line?

In the pre-continuous world, it was simple. You knew you were not going to go bankrupt to prolong a family member's life by half a year. And you knew you were going to spend a lot to save their life.

You rarely met the middle cases, where a price has to be fixed and the ugly reality of the limits of sacred values rears its head.

But nowadays, many government policies are continuous.

Refugees. With few refugees coming, it was so simple: help them, even if it is a burden.

But would you accept 100 million starving refugees from Africa into Europe?
Beyond extreme moralists, few would even dare to suggest it.

The September 2015 Germany / Austria / Sweden refugee policy illustrated this reality.

All those countries said "of course we cannot ignore the refugees".
But when the numbers swelled exponentially, they all showed their displeasure, in various ways, at the seemingly endless flow.

The continuous world is very, very challenging for moral intuitions.
Voters are still sold strongly on sacred values and punish politicians who put a price on them (various studies by Philip Tetlock). But inevitably, a continuous world forces those caps and prices.

I wonder if the public will eventually get used to this undeniable logic.

I think the UK is ten steps ahead in terms of such rational public discourse. So there is some hope for the human race...

Publication bias - The error of compensating too much for it.

Positive results and "amazing" results have higher odds of getting published.

This is "publication bias": the average published result is stronger than the true effect.

A naive way to "fix" this is to include all unpublished data in a meta-analysis as well. That is an error.

There are reasons why some works are not published.

Unpublished papers are on average:
1) Weaker in terms of every parameter of quality: sample size, procedural rigor, etc.
2) Written by less prominent authors.
3) Written by authors less tenacious about getting published. Sometimes you have to fight to get published: try multiple journals, run additional tests, rewrite things, etc.
4) Written by authors for whom the topic of the work is more peripheral to their career / expertise, who are more likely to "let it go", or may not even know the best venue for publishing it.

All those criteria are clearly correlated with lower quality of the ultimate study:
weaker results, less experienced researchers, lower tenacity (which affects study design, perfectionism in carrying it out, etc.), or authors whose center of focus is elsewhere.

All of these are logically related to less meaningful studies.

Full inclusion of lower-quality unpublished studies is dumb. Ignoring them leaves us with publication bias. So?

I think an intuitive solution is to include those works with a lower weight, to account for their being on average of lower quality. (My feeling is around 40%, but take your guess.)

PS. One might do a Bayesian calculation that takes account of the fact that it was specifically the unpublished studies that failed. In that case, I would intuitively expect their ultimate weight to be even lower. But I have not done the math.
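The down-weighting idea can be sketched as a simple weighted mean of effect sizes (all numbers here are invented for illustration; the 0.4 weight is the ~40% guess above):

```python
# Sketch: a meta-analytic mean that discounts unpublished studies.
# Effect sizes and the 0.4 discount are illustrative, not real data.

def weighted_mean_effect(studies, unpublished_weight=0.4):
    """studies: list of (effect_size, is_published) pairs."""
    total = weight_sum = 0.0
    for effect, published in studies:
        w = 1.0 if published else unpublished_weight
        total += w * effect
        weight_sum += w
    return total / weight_sum

studies = [(0.45, True), (0.50, True), (0.05, False), (0.00, False)]

print(round(weighted_mean_effect(studies, 1.0), 3))  # 0.25: naive full inclusion
print(round(weighted_mean_effect(studies, 0.0), 3))  # 0.475: published only (biased up)
print(round(weighted_mean_effect(studies), 3))       # 0.346: the down-weighted compromise
```

The down-weighted estimate lands between the two naive extremes, which is exactly the intent of the 40% suggestion.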

Saturday, September 12, 2015

Democracy - only to solve the agency problem

The stupid explanation of democracy is that the public is smart. It definitely isn't.

The problem with dictatorship is the agency problem: the king cares only about himself.

Democracy fixes this agency problem. If the premier manages the country badly, he will likely be replaced. This serves as a fix, and as a deterrent / motivator to act in the best interest of the citizens.

How much democracy?
Here we have a complex question.

How do we best align the interests of public servants with that of the public?

Naively, electing every single representative individually is the most democratic. But is it the most efficient?

It is much easier for the public to assess a whole government than to assess the functioning of each individual representative.

Thus, a single MP can push all kinds of populist yet irrational policies and become popular without paying any of the costs involved.

A party as a whole, however, will be punished for budget bankruptcy, etc.

Thus, a party system without primaries might be better aligned with the interests of the electorate than electing each and every MP individually.

Sunday, May 10, 2015

"Doing it" is inherently different from the naturally occurring action. It might be useless

A common way to improve things is to take what we know is good and do it intentionally.

If we know that married people are happier (a hypothetical example), we might want people to get married.

But the example exemplifies the problem. People who get married without our advice are probably more compatible and more in love than those we advise: "get married, it is good for you".

The same principle applies to anything that is found to be good and that we then try to do on purpose.

1) The most common problem is that the artificially created situation is different from the naturally occurring one. I am not against artificial things; they are just different. The guy getting married "because it makes people happy" is having a different marriage from the more natural marriage.

2) Another error is mistaking trait for intervention.
If optimistic people are richer, happier, etc., then, goes the naivety, let's teach people to be optimistic.
But teaching optimism might not make people optimistic. Even if it does, the nature of this optimism will be different from the naturally occurring kind.

3) Less central is conflating correlation and causation. But this is a well-known problem. It too can cause the above intervention error: if something is correlated with a positive outcome, a naive observer will try to induce the correlate, even though the correlate has no causal relation to the outcome.

An example of error 2: trait mindfulness is strongly correlated with many positive psychological measures (Brown 2003, 2007), but the theory that practicing mindfulness (meditation) works better than placebo (the effect of "doing something you believe will help") is not yet proven. Meditation studies generally do not contain an active placebo control. There is one study I know of, coauthored by Richard Davidson, that does not show meditation to be superior.