Markov Chains
Markov chains, also called Markov processes, are an important topic in data science and machine learning. The concept is applicable in a wide range of areas, including reinforcement learning.
Prerequisites to Understand Markov Chains:
To understand this part of statistics, you need to be familiar with probability and matrices. If you do not have a working knowledge of either of these areas, I suggest taking some time to learn the basics before getting into Markov processes. And to understand the applications of Markov chains, you first need a solid understanding of the Markov property itself.
The Markov Property:
To explain the Markov property in simple words: the next state of a process or chain of events depends only on its current state, and not on any of the states that came before it.
When the Markov property is satisfied in a chain of events, you can call it a “Markov chain” or a “Markov process”.
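The property is easiest to see in code. Below is a minimal sketch of a two-state chain with made-up "Sunny"/"Rainy" transition probabilities (the states and numbers are illustrative assumptions, not anything from a real dataset). The Markov property lives in the `step` function: the next state is sampled using only the current state, never the earlier history.

```python
import random

# Illustrative transition probabilities (assumed for this sketch):
# each row gives P(next state | current state) and sums to 1.
TRANSITIONS = {
    "Sunny": {"Sunny": 0.8, "Rainy": 0.2},
    "Rainy": {"Sunny": 0.4, "Rainy": 0.6},
}

def step(state, rng=random):
    """Sample the next state given ONLY the current state (Markov property)."""
    r = rng.random()
    cumulative = 0.0
    for next_state, prob in TRANSITIONS[state].items():
        cumulative += prob
        if r < cumulative:
            return next_state
    return next_state  # guard against floating-point rounding

def simulate(start, n_steps, seed=0):
    """Run the chain for n_steps, recording each visited state."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(n_steps):
        chain.append(step(chain[-1], rng))
    return chain

print(simulate("Sunny", 5))
```

Notice that `step` takes a single state as input, not the whole `chain` list; that restriction is exactly what makes this a Markov chain rather than a general stochastic process.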

A bit of history about the Markov property: This part is not necessary for understanding how the concept works mathematically, but I'm adding it here for those of you who are curious.
What Markov wanted to prove was that, in the evolution of some processes, knowing just the present is enough to predict the future, and knowledge of the past is unnecessary. This is often referred to as "memorylessness". He showed that for such processes, now called Markov processes, predictions made from the present state alone are the same as those you would obtain knowing the entire history of the chain. This was in response to another mathematician who argued that independence is a requirement for the weak law of large numbers to hold.
“States” in Markov Chains: As the term implies, a “state” is the condition a process is in at a given moment. As the process runs, it can take different paths, moving from one state to another or remaining in the same state. You could consider…