# In a familiar city

Outside his own apartment, everything was slightly less than convincing; the architecture of the building was reproduced faithfully enough, down to the ugly plastic potted plants, but every corridor was deserted, and every door to every other apartment was sealed shut — concealing, literally, nothing. He kicked one door, as hard as he could; the wood seemed to give slightly, but when he examined the surface, the paint wasn’t even marked. The model would admit to no damage here, and the laws of physics could screw themselves.

There were pedestrians and cyclists on the street — all purely recorded. They were solid rather than ghostly, but it was an eerie kind of solidity; unstoppable, unswayable, they were like infinitely strong, infinitely disinterested robots. Paul hitched a ride on one frail old woman’s back for a while; she carried him down the street, heedlessly. Her clothes, her skin, even her hair, all felt the same: hard as steel. Not cold, though. Neutral.

The street wasn’t meant to serve as anything but three-dimensional wallpaper; when Copies interacted with each other, they often used cheap, recorded environments full of purely decorative crowds. Plazas, parks, open-air cafes; all very reassuring, no doubt, when you were fighting off a sense of isolation and claustrophobia. Copies could only receive realistic external visitors if they had friends or relatives willing to slow down their mental processes by a factor of seventeen. Most dutiful next-of-kin preferred to exchange video recordings. Who wanted to spend an afternoon with great grandfather, when it burnt up half a week of your life? Paul had tried calling Elizabeth on the terminal in his study — which should have granted him access to the outside world, via the computer’s communications links — but, not surprisingly, Durham had sabotaged that as well.

When he reached the corner of the block, the visual illusion of the city continued, far into the distance, but when he tried to step forward onto the road, the concrete pavement under his feet started acting like a treadmill, sliding backward at precisely the rate needed to keep him motionless, whatever pace he adopted. He backed off and tried leaping over the affected region, but his horizontal velocity dissipated — without the slightest pretence of any “physical” justification — and he landed squarely in the middle of the treadmill.

The people of the recording, of course, crossed the border with ease. One man walked straight at him; Paul stood his ground — and found himself pushed into a zone of increasing viscosity, the air around him becoming painfully unyielding, before he slipped free to one side.

The sense that discovering a way to breach this barrier would somehow “liberate” him was compelling — but he knew it was absurd. Even if he did find a flaw in the program which enabled him to break through, he knew he’d gain nothing but decreasingly realistic surroundings. The recording could only contain complete information for points of view within a certain, finite zone; all there was to “escape to” was a region where his view of the city would be full of distortions and omissions, and would eventually fade to black.

He stepped back from the corner, half dispirited, half amused. What had he hoped to find? A door at the edge of the model, marked EXIT, through which he could walk out into reality? Stairs leading metaphorically down to some boiler-room representation of the underpinnings of this world, where he could throw a few switches and blow it all apart? He had no right to be dissatisfied with his surroundings; they were precisely what he’d ordered.

A window with a view of the city seemed harmless enough — but to walk, and ride, through an artificial crowd scene struck him as grotesque, and the few times he’d tried it, he’d found it acutely distressing. It was too much like life — and too much like his dream of one day being among people again. He had no doubt that he would have become desensitized to the illusion with time, but he didn’t want that. When he finally inhabited a telepresence robot as lifelike as his lost body — when he finally rode a real train again, and walked down a real street — he didn’t want the joy of the experience dulled by years of perfect imitation.

# Recipes as Searching the Space of Algorithms

While leafing through a cookbook yesterday, it finally hit me why I love recipes: they are algorithms that are robust to small variations and mistakes, and more often than not they even benefit from them. They have other nice properties too: they are empirical algorithms, obtained without any theory, by searching through the space of algorithms itself. We have no idea why a particular recipe tastes the way it does; there is no theory behind it, and no recipe was ever derived from one. On the contrary, some people have only recently started trying to reverse-engineer a theory from the empirical recipes handed down to us by our ancestors.

This coincides with a theme I loved at NIPS 2016. Normally, in machine learning, we derive algorithms from theories: we posit a mathematical model with variables we want to learn, and then derive an estimation algorithm from that model. But if, in practice, the only entity that ever runs is the algorithm, not the model, why bother with the model at all when we can search the space of algorithms directly? Recipes are a fascinating example of this attitude. Rather than being derived from some Scientific Theory of Food, Taste and Molecular Chemistry by a professor in a Department of Molecular Food Chemistry, recipes are extremely effective heuristics developed through trial and error, and they have dominated theory by producing tastes we would never have found by modelling chemicals. Anybody who enjoys food doesn’t much care about the underlying chemistry (I don’t!): once you have the recipe, the chemistry is a mere curiosity, something you “would love to learn” if you have no other thrills in your life (trial and error in the kitchen is far more enjoyable than food chemistry, sorry). Interestingly, this idea has started to be taken seriously in the ML community. Instead of building poor, inexpressive mathematical models with our limited imagination, why not learn “the code” or “the algorithm” directly, at the cost of ending up with an inexplicable black-box model? As long as that solves the problem, why should we care about the complex, inexplicable mathematics underneath?
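The idea of searching the space of algorithms directly can be sketched with a toy example. Everything below, the primitive operations, the target behaviour, the search budget, is made up for illustration: instead of modelling *why* a program works, we simply taste-test random candidate programs against input/output examples, the way recipes were found.

```python
import random

# A tiny, hypothetical "space of algorithms": a program is a short
# sequence of primitive steps, composed left to right.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

def run(program, x):
    """Execute a program (a list of primitive names) on input x."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def loss(program, examples):
    """Empirical loss: how far the program is from the desired behaviour."""
    return sum(abs(run(program, x) - y) for x, y in examples)

def random_search(examples, steps=5000, max_len=3, seed=0):
    """No model of why a program works; just sample candidates and keep the best."""
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    for _ in range(steps):
        program = [rng.choice(list(PRIMITIVES))
                   for _ in range(rng.randint(1, max_len))]
        current = loss(program, examples)
        if current < best_loss:
            best, best_loss = program, current
    return best, best_loss

# Target behaviour: f(x) = (x + 1) ** 2, given only input/output pairs.
examples = [(x, (x + 1) ** 2) for x in range(5)]
program, final_loss = random_search(examples)
```

The search recovers the program `["inc", "square"]` with zero loss, without ever writing down the polynomial it computes; the "recipe" is found, the "chemistry" is optional.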

Before leaving the curious reader with some relevant links, I should note that this approach has sparked a debate about interpretability. People are annoyed by this kind of thinking because, in the end, you have an algorithm that works without any idea of why it works; well, exactly like recipes. I discussed this at length with a friend at NIPS: I don’t think interpretability is one of the goals we need. Any interpretation is likely to be a deception for a human being; interpretations are for people who love neatness, beauty, tidy stories, and similar constructs. I don’t. But I won’t elaborate the point with more psychology, at least not now.

Here are some links about algorithms that can learn algorithms — you can browse further through the connections implied by these links.

Neural Abstract Machines & Program Induction

Recurrent Neural Networks and Other Machines That Learn Algorithms

Deep Reinforcement Learning Workshop

Bayesian Deep Learning

# Statistics is not an end, it is the beginning

The story is old by now: we humans see patterns in random sequences of numbers and in chains of events; we mistake correlation for causation, draw nonexistent inferences, and overtheorize. The evolutionary advantage of this instinct is obvious, so I won’t elaborate. Why, then, is it so often called a bias nowadays? Because we now live under the “big data” regime, an age quite different from the one our ancestors lived in: an age of noise explosion and explosive noise; an age where the winner takes all, and which actively produces the winners who can take it all. Every day we are bombarded with data, with very few working (statistical or heuristic) models to handle it.

If you think about it seriously, the problem is serious indeed. It is directly relevant and central to our daily lives. Humans see patterns in random numbers; you don’t have to look at misinterpreted government statistics, or at the p-value babble of a psychology experiment and the papers that misread it. If the problem were limited to those bureaucratic issues, I wouldn’t call it central. But it is central.

The real insight is that we have become cyborgs. We don’t have chips in our brains yet, but technically we are not very different. Our inboxes, Facebook profiles, and WhatsApp histories are all electronic extensions of us; we are now more than a mere collection of tissues, living in a data flow, in the middle of numbers. I myself routinely outsource part of my thinking to the computer, plotting functions instead of imagining them, actively using the machine to extend my reasoning.

Inside this data flow, being fooled by numbers is a huge problem. If you were ever obsessed with another human being and stalked them across social media platforms for information, you will get the point: in hindsight, you found plenty of spurious correlations, read meaning into some ridiculous shared links, and drew inferences from them. Mostly you reached stupid conclusions, and they made you miss the information that mattered. In this age of information we are indeed bombarded with data; but not only do we not know how to draw inferences from it, we are also unaware that we usually don’t even see the most relevant data. In short, we use our regressors to fit noise.
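The "fitting noise" point is easy to demonstrate with a small simulation (the sample sizes, the number of candidate predictors, and the seed below are all arbitrary choices for the sketch): generate a target that is pure noise, a couple of hundred candidate "explanations" that are also pure noise, and watch one of them correlate with the target anyway.

```python
import random
import statistics

rng = random.Random(42)
n = 30  # few observations, as in everyday personal "datasets"

# A target that is pure noise, and many candidate "explanations",
# each also pure noise: nothing here is related to anything.
target = [rng.gauss(0, 1) for _ in range(n)]
candidates = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(200)]

def corr(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n * sx * sy)

# Search through enough unrelated series and at least one of them
# will appear to "explain" the target quite convincingly.
best = max(abs(corr(c, target)) for c in candidates)
```

The best candidate typically shows a correlation strong enough to look like a discovery, even though every series is independent noise; the "relationship" is an artifact of the search, not of the data.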

If you don’t think clearly, you will misunderstand everything. Let me open a parenthesis: if everybody misunderstands everything, why doesn’t everybody mess up? After all, if you make decisions based on a flawed understanding, you should pay for it at some point; yet fools are doing pretty well. The reason is that a person’s trajectory depends so much on luck, and even more on the optionality provided by family, that personal idiotic inferences hardly matter. We are robust to our own stupidity; society figured out heuristics for that long ago. But this does not make statistical ignorance OK, because things are (thankfully) getting weirder and weirder, and the future will look more volatile and more dominated by explosive noise than usual. Thinking probabilistically is essential if you don’t want to devote your life to some fluctuation of history (and believe me, this can happen; it was hardly possible 10 years ago, but it is becoming more and more probable).

We really need statistical reasoning.

Lately I have been trying to develop some actual systematic thinking against this noise: when reading the news, in my personal life, when deciding what to research, when watching financial bumps. In this mess, you need to fortify your mind with probabilistic thinking. The more I tried to actually use statistical reasoning in my life, the more I realised that it is best used negatively. I knew this idea from the beginning, but it took a long time to really sink in; the following tweet marked the end of that process and made it clear.

Exactly! Taleb nails it once again. This is why it is mostly hopeless to use statistical methodology positively, to claim relationships. It is best used to refute relationships, to “denoise”, to avoid being fooled by randomness.
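One standard way to use statistics negatively in this sense is a permutation test; the sketch below is hypothetical (group sizes, seed, and trial count are arbitrary). An apparent gap between two groups is compared against what pure label-shuffling produces: if shuffling reproduces the gap easily, the claimed relationship is refuted.

```python
import random

rng = random.Random(7)

# Two samples that might look different at a glance,
# but are in fact drawn from the same noise.
group_a = [rng.gauss(0, 1) for _ in range(20)]
group_b = [rng.gauss(0, 1) for _ in range(20)]

observed = abs(sum(group_a) / 20 - sum(group_b) / 20)

# Permutation test: shuffle the group labels many times and ask
# how often chance alone produces a gap at least as large.
pooled = group_a + group_b
trials = 10000
count = 0
for _ in range(trials):
    rng.shuffle(pooled)
    gap = abs(sum(pooled[:20]) / 20 - sum(pooled[20:]) / 20)
    if gap >= observed:
        count += 1
p_value = count / trials
```

A large p-value here is the negative verdict: the data give no licence to claim the two groups differ; whatever pattern you saw was shuffleable noise.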

Let me conclude. I love discoveries, which is why I wanted to make a career of science. But it seems to me that the real discovery is not statistical; it will not come from the data alone, though you need rigorous probability and statistics along the way in order not to be stupid. Discovery comes from dumb luck and from being in the field, whatever field you want to be in (hint: choose “low variance” fields to get lucky, if you know what I mean). It comes from excessive tinkering, trial and error, testing, experimenting, rejecting spurious hypotheses while keeping the more sensible ones and assigning them probabilities. This is why I want first to learn the statistical methodology by heart, and then quit the field of computational statistics. What would I do next? I can’t say. By then I will be trained to think probabilistically with all my atoms.

# We need exponential forgetting in the world

There are some problems with the world that sicken me every day. I try to avoid them and stop thinking about them, but they keep coming back. Writing them up always helps, and I suspect it always will, so let’s discuss one of them here.

Recently I was in Barcelona, and as we sat in a cafe with friends we started discussing social inequality. The topic came up through Piketty’s latest book, which my friend had been working through, taking lots of notes. At some point I was asked for my definition of an equal world. My answer was predictable by my standards: I don’t believe everyone can be equal deterministically, but we should be equal in probability.

Let’s think mathematically. I like to model each individual as a Markov chain, writing $X_k^p$ for individual $p$ at time $k$. This random variable, let’s say for the time being, measures how “well off” you are. Let $\pi_k^p$ denote the distribution of the well-being of individual $p$ at time $k$.

Everybody has an initial condition $X_0^p$: we are born in certain countries, into certain amounts of wealth, in socially stable or unstable populations. In Markov chain theory it is customary to treat this as random too, so the chain has an initial distribution $\pi_0^p$. Technically, the problem that sickens me every day, that harms me, bothers me, and gives me no rest, is the following: given the initial distribution of any individual $p$, the distribution at any later time $k$, that is $\pi_k^p$, depends far too strongly on $\pi_0^p$ in this world. We are extremely sensitive to initial conditions: the wealth of your family, your nationality, your first language. In a fair world we would have an exponential forgetting property (as defined for hidden Markov chains): given two different initial distributions $\pi_0$ and $\pi_0^\prime$ and observations from the world $y_{1:k}$, the difference between the resulting distributions at time $k$ (respectively $\pi_k$ and $\pi_k^\prime$) should decrease exponentially with time. In other words, $\|\pi_k - \pi_k^\prime\|_{TV} \to 0$ should happen exponentially fast!

Intuitively, this means the following: no matter how good or bad your two possible starting distributions are, they should converge to the same distribution at time $k$. We are Markov chains, so we depend on our past; but that dependence should not be too strong, and should decay with time. To be fair, the world does have some amount of this property; the situation is not the worst, if you look at history. But we are still far from perfect.
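The forgetting property above can be illustrated with a tiny simulation; the three "well-being" states and the transition matrix below are invented for the sketch. Two chains are started from opposite initial distributions (born at the bottom vs born at the top), and the total variation distance between them is tracked over time.

```python
# A made-up ergodic transition matrix over three well-being states
# (say: struggling, stable, thriving). Rows sum to one.
P = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
]

def step(pi):
    """One step of the chain: pi_{k+1}[j] = sum_i pi_k[i] * P[i][j]."""
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

def tv(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Two opposite initial conditions: all mass on the worst state
# versus all mass on the best state.
pi, pi_prime = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
gaps = []
for k in range(30):
    gaps.append(tv(pi, pi_prime))
    pi, pi_prime = step(pi), step(pi_prime)
```

For this matrix the column minima sum to 0.5, so the TV distance contracts by at least a factor of two per step: the chains start at the maximal distance of 1 and forget their initial conditions exponentially fast, which is exactly the property a fair world would have.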

And then, of course, we moved on to discussing how to achieve this. My solutions are never based on government force; I am strongly against using any central force to achieve anything. Instead, I shared my recipes with my friends, and I suspect that for a long time only real friends will be entitled to hear them.

How do we make the world more stochastic, more insensitive to initial conditions, so that everybody has a chance to thrive, not only those born in rich countries who can spend ridiculous amounts of money on mostly useless education? Answer that for yourself. In any case, I would advise you to get ready for the stochastic future, because I am pretty sure we will, at the very least, do something about it.

# Decisions in the face of an enemy

Whenever we plan to play basketball, our plans turn out to be naive and we usually end up not playing at all. In winter it was usually the rain. One week we decided (on Monday) to play on Thursday, and on Tuesday I decided to go to Turkey! Another week I got sick, and last week it started raining out of nowhere: no forecast! Some friends think the real problem is me, that the Goddess of Luck is pissed off at me. That is not improbable; so let’s see what we should do when the gods mess with us.

The problem resembles what some machine learning people call “adversarial settings”. You have a set of possible actions $a_t \in \mathcal{A}$ at time $t$, and an adversary who takes an action $z_t$ against you. The loss you observe is a function of the pair $(a_t, z_t)$: you act, you face an adverse action, and depending on both you incur a loss or reward $y_t = f(a_t, z_t)$.

What machine learning people correctly identified is the following: in an adversarial setting, i.e. when someone is really out to get you (the gods here, or the city itself in Istanbul or London), any deterministic policy $(a_t, t \geq 1)$ leads to disaster. Why? Because your enemy, knowing exactly what you will do at every time, can choose the worst possible sequence $z_t$ for you. To evade the enemy you should randomise your policy $a_t$ and simply be random (in a controlled manner). It is far harder to spoil a random event than a deterministic one: the extra layer of uncertainty prevents your enemy from guessing exactly what you will do, and hence sharply limits your losses.
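A minimal simulation of this argument (the action names, round count, and seed are all made up for the sketch): against a deterministic policy, an adversary who knows the policy ruins every round; against a randomized policy, even the best fixed response is reduced to guessing, and the loss drops to roughly half.

```python
import random

ACTIONS = ("thursday", "saturday")

def play(policy, adversary, rounds):
    """Loss is 1 in every round where the adversary matches your action."""
    total = 0
    for t in range(rounds):
        a = policy(t)
        z = adversary(t)
        total += 1 if a == z else 0
    return total

def deterministic(t):
    return "thursday"  # always Thursday: fully predictable

def omniscient(t):
    return deterministic(t)  # the adversary simply mirrors the known policy

rng = random.Random(3)

def randomized(t):
    return rng.choice(ACTIONS)  # a private coin flip each round

def best_fixed(t):
    return "thursday"  # against a fair coin, no fixed response does better

worst_case_det = play(deterministic, omniscient, 1000)
worst_case_rand = play(randomized, best_fixed, 1000)
```

The deterministic player loses all 1000 rounds, while the randomized player loses about 500: the coin flip is the whole defence, and no amount of divine foresight about the *distribution* recovers the other half.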

Even gods have limited powers. See, it is easy to make it rain when you know the exact time these guys will be playing; but when we put a layer of uncertainty over the days we might play, no god can make it rain for three days straight in the middle of summer. What we should do is schedule several possible times, and nobody should know the actual one, since you cannot hide information from the Goddess of Luck. The best policy here is probably to just go and play at some random available time, decided opportunistically (preferably when the weather is below 35°C!).

It is not easy to mess with the Goddess of Luck. But since she is somewhat obsessed with me, I am learning to strike back the hard way!

# Humans benefit from cognitive errors

There is by now a huge literature on cognitive biases and errors. Most of it is devoted to demonstrating and cataloguing biases, and only a little to avoiding them. When we hear about cognitive biases, we immediately think about how to avoid them (without even realising it). Yet Kahneman, for example, states in his book that there is little hope of that. And I think there is a reason.

Recently it hit me that avoiding cognitive errors is itself an error! Yes, every one of us fits a narrative to our past, recollecting events in a certain way (forgetting the forgettable) in order to construct a story. And almost all of these stories are wrong; we choose them, so to speak, based on our imagined future. Reality, at least personal reality, is a cultivated fiction.

I have started to think that keeping this in mind and trying to avoid the process is itself a kind of fallacy. Cognitive errors are what let us thrive in the world; they enable us to move on rather than stay stuck in a particular sequence of past events. Cognitive errors are similar to the errors of statistical models, except that humans have them because we benefit from them (unlike statistical models). An erroneous narrative hurts no one, as long as its owner becomes better thanks to it.

There is, clearly, a trade-off problem. Most people do not care at all how they reconstruct their stories so as to avoid everything painful. Consequently they can justify almost any kind of action (as long as it is compatible with the norms of society), which is no different from being evil on purpose.

With this trade-off in mind, here is a thought: somebody should put down a framework of heuristics that lets us benefit from our cognitive errors while staying ethical. Should I write a book about this?

# Narratives

Every now and then we see in the science news that researchers did such and such, and that in the near future we will therefore be able to erase or implant memories. The claim is that by erasing the unhappy memories we will be left with only happy ones, which will supposedly make us happier.

People usually find these stories terrifying, feeling that they would no longer be human if their memory were not intact.

But nobody seems aware of the frustrating fact that most humans already do this quite regularly.

Life is mostly about what we choose to remember (and keep up) from our past. The cognitive mind is very skilful at remembering only the memories that fit our current narrative. We first narrate something, we want to believe in it (or our environment forces us to), and then the cognitive mind starts filtering out all the memories that conflict with the narrative. Likewise, it recalls all the related memories that are now useful; even when they are too weak on their own to support the narrative, it magnifies them, making them more emotional and far more powerful than they really were.

As Harari nicely points out, humans are very, very good at believing in a myth and its corresponding narrative, as the cognitive mind takes care of the rest. What is more remarkable is that humans can simply switch between narratives, since the cognitive mind will do its job of filtering out the past experiences that are irrelevant to constructing the chosen future.

The one glitch I have not yet understood is that there seems to be a layer of reality we cannot avoid. We discuss memory removal in the science news precisely because we sometimes cannot forget; some layer of reality escapes the cognitive mind. Here is my explanation: in actuality you never forget the memories that don’t fit your current narrative; you merely choose not to remember them. As life goes on, it brings new problems and situations that force you to remember what you do not want to remember. This is the insurmountable, unavoidable complicatedness of life: you never know what Lady Fortune will bring. So no matter how hard you try to believe your new narrative, there is a decent chance that life will filter out whatever doesn’t fit reality. If your narrative is not grounded in solid facts, it may collapse. This is the part we cannot simply simulate, for we cannot create facts. And since going beyond this point (creating facts) is classified as a cognitive disease, I will stop the discussion here.