Markov process

I find myself in a dualistic struggle as I try to understand the realities of what I study. On the one hand, I feel as if the only way of knowing is through observing a ‘real’ phenomenon for a very long time… as an ethnographer perhaps, or a natural scientist with longitudinal data. This side of my mind is currently very active with the book I’m just finishing, With Our Own Hands: A Celebration of Food and Life in the Afghan and Tajik Pamirs, which, if it must be classified, takes an ethnoecology approach to understanding complex social-ecological systems. The other side of my mind, mostly related to my PhD, is searching for ways to generalise nuanced understandings into broader theory, and back again. Specifically with regards to traps: What are the dominant dynamics that keep systems ‘stuck’ in an undesirable state? Rather than collecting a lot of data without knowing what for, we will create a ‘toy’ model to play with in order to further our specific hypotheses.

Modelling is one (very large) group of tools for understanding complex systems and for simplifying complex and perhaps even contradictory realities. So here I’ll start a small blog series on learning about modelling, and on ideas that spring to mind as I dabble in this alternative reality. Mostly these posts will act as placeholders for ideas to come back to, and hopefully for people to comment on and get involved with. I am following a free online course called Model Thinking offered by the University of Michigan. Most of the time I am very frustrated with the assumptions one must make to fit the world into a simplified model of a system that would never occur in reality. BUT I do also really see the value in using models to push one’s thinking in a given direction. So despite all the tedious calculations and oversimplification of what is real, here I go.

The Markov process was one such model I learned about in last week’s lectures, and I think it can have cool applications in understanding traps and transformations.

The Markov process tells us about the tendencies of a system to transform. For example, more states become democracies over time than autocracies, although every decade a small percentage of democracies do become autocracies. Given this observation, one might logically assume that over time the world’s states will end up predominantly, or even entirely, democratic. However, this is not what happens: the system settles into an equilibrium mix that is determined by the transition probabilities (the probability that a state will switch), not by the mix it started from. This rests on the assumption that the system is memoryless… predictions of the future depend only on the current state of the system and not on its past. This is possibly really useful for understanding traps and transformation. The Markov model tells us that we cannot change a system’s long-run trajectory by changing the state of the system itself, but rather that we need to change the process, or technically the transition probabilities, of moving from state to state. Process over function. I look forward to exploring this with regard to why history matters for current system stability. Institutional theory, by limiting itself to analytical snapshots, may be falling into the trap of a Markov process.
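
To make this concrete for myself, here is a minimal sketch in Python. The per-decade transition probabilities are invented purely for illustration (in reality they would have to be estimated from data), and the matrix and function names are my own:

```python
import numpy as np

# Invented, purely illustrative transition probabilities per decade.
# Rows = current state, columns = next state; order: [democracy, autocracy].
P = np.array([
    [0.95, 0.05],   # a democracy stays democratic with prob 0.95, flips with 0.05
    [0.20, 0.80],   # an autocracy democratises with prob 0.20, stays with 0.80
])

def long_run_mix(initial, decades=200):
    """Push a distribution over [democracy, autocracy] forward in time."""
    dist = np.asarray(initial, dtype=float)
    for _ in range(decades):
        dist = dist @ P
    return dist

# Start from opposite extremes: an all-democratic world vs. an all-autocratic one.
print(long_run_mix([1.0, 0.0]))   # -> approximately [0.8, 0.2]
print(long_run_mix([0.0, 1.0]))   # -> approximately [0.8, 0.2]
```

Both starting points end up at the same 80/20 mix: the equilibrium is fixed by the transition probabilities, not by the state the world happens to be in today.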

One Response to Markov process

  1. Steve says:

    Hi Jamila,

    Thanks for posting these thoughts, it’s caused me to go back and think about some basics of modelling!

    I’m curious what exactly that model of democracies and autocracies looks like. I would think the equilibrium mix would strongly depend on the transition probabilities you choose in the model.
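
    For the two-state case the dependence can actually be written down directly. A quick sketch of what I mean (the numbers are made up, and the function is just mine for illustration):

    ```python
    # Two-state chain: a = P(democracy -> autocracy), b = P(autocracy -> democracy).
    # The stationary share of democracies is b / (a + b): it depends only on the
    # transition probabilities, not on the mix of states you start from.
    def stationary_share_of_democracies(a: float, b: float) -> float:
        return b / (a + b)

    print(stationary_share_of_democracies(a=0.05, b=0.20))  # 0.8
    print(stationary_share_of_democracies(a=0.05, b=0.10))  # ~0.67
    ```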

    A general comment is that most modelling we do is actually Markov. Dynamical systems models and agent based models usually update their next state based only on the current state of the system. The problem with the two-state discrete Markov models you describe (and possibly with institutional analysis) is that they cannot have (respectively, do not acknowledge the possibility of?) multiple stable states — so yes we’re back to the old resilience, multiple stable state story.

    I would define a state as stable if it persists on a time scale much longer than the order of the Markov process (in the examples above it is order 1: the next state depends only on the previous time step). With multiple stable states, a system, despite the memoryless nature of its updating rules, can have memory by persisting in one of those states.
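
    To illustrate (again with made-up numbers): a rule that is memoryless at every step can still leave the system parked near its starting state for a long time if the switching probabilities are small, which is what I mean by the system itself having memory:

    ```python
    import numpy as np

    p_switch = 0.001                      # tiny per-step probability of leaving a state
    P = np.array([[1 - p_switch, p_switch],
                  [p_switch, 1 - p_switch]])

    def distribution_after(initial, steps):
        """Where the system is likely to be after `steps` memoryless updates."""
        return np.asarray(initial, dtype=float) @ np.linalg.matrix_power(P, steps)

    # Horizons much shorter than the ~1/p_switch dwell time: the system still
    # 'remembers' its starting state.
    print(distribution_after([1, 0], steps=100))        # ~[0.91, 0.09]
    print(distribution_after([0, 1], steps=100))        # ~[0.09, 0.91]

    # Much longer horizons: both starts forget their history and converge to
    # the same 50/50 stationary distribution.
    print(distribution_after([1, 0], steps=1_000_000))  # ~[0.5, 0.5]
    print(distribution_after([0, 1], steps=1_000_000))  # ~[0.5, 0.5]
    ```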

    Going one step further, I would say human behaviour is also Markov. Yes, we often base our behaviour on historical events. But it’s more accurate to say our behaviour is determined by our own mental state, which was directly affected by those past events only at some point in the past and has been changing ever since, decaying or maybe even being reinforced into myth or reflex. If we are careful to model the system in a way that allows the possibility of multiple stable states, then the model can show institutional memory, even though human behaviour is itself Markov.

    I hope these observations aren’t too trivial or sociologically naive, but I expect they are important when turning human behavioural dynamics into a model…

    Steve
