Category Theory?

Artificial intelligence and machine learning are linked with a huge list of other fields of study: statistics, probability, information theory, computational complexity theory, algorithmic complexity theory, linear regression, linear programming, approximation theory, neuroscience, automata theory, the geometry of high-dimensional spaces, reproducing kernel Hilbert spaces, optimization, formal languages, and many other areas (see, e.g., the graphic below by Swami Chandrasekaran).

My focus has always been on applied mathematics rather than “pure” mathematics. But, by studying AI, I was almost forced into learning about areas that I had avoided in graduate school, like decidability (logic) and proof theory, which did not seem practical at the time. I felt that most abstract mathematics and some abstract computer science, though beautiful, were not very useful. So I am surprised that I am now studying the most abstract and possibly most useless area of mathematics, Category Theory.

Why category theory (aka “Abstract Nonsense”)? Mathematicians often wonder “What is the simplest example of this idea?”, “Where do idea A and idea B intersect?”, and “How does this idea generalize?”. Category theory seems to be about these questions, and I hope that will make it useful for AI research. I was also inspired to read about category theory by Ireson-Paine’s article “What Might Category Theory do for Artificial Intelligence and Cognitive Science?” and “Ologs: A Categorical Framework for Knowledge Representation” by Spivak and Kent (2012). Paine describes relationships between category theory, logical unification, holographic reduced representation, neural nets, the Physical Symbol System Hypothesis, analogical reasoning, and logic. The article includes many authoritative, interesting references and links. Ologs appear to be a category theoretic version of semantic networks.
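To make the olog idea concrete, here is a toy fragment (my own made-up example in the spirit of the Spivak and Kent paper, not one taken from it). The boxes are types labeled by noun phrases, the arrows are aspects that are required to behave like functions, and arrows compose along paths:

$$\text{a book} \xrightarrow{\ \text{has as author}\ } \text{a person} \xrightarrow{\ \text{has as birth year}\ } \text{a year}$$

Because each arrow picks out a function on instances, the composite arrow is again a function, and the path reads as the sentence “a book has as author a person, who has as birth year a year.” That built-in compositionality is what the categorical framing seems to add over a plain semantic network.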

Since I am studying category theory in my off time, I will probably blog about it. I am an enthusiastic beginner when it comes to category theory, so hopefully I can communicate that enthusiasm and avoid posting the common conceptual blunders that are part of learning a new field.

Cheers, Hein

PS: I loved this graphic by Swami Chandrasekaran from the article “Becoming a Data Scientist”.

[Image: Road To Data Science by Swami Chandrasekaran]

3 comments

  1. antianticamper

    Conceptually, higher-order categories are really interesting in that they permit a weakening of the notion of “equal.” Practically, I’m still waiting for a true down-and-dirty application of category theory.

    Nice data science graphic. Metro Line #4 (machine learning) is the dangerous one and should require a federally issued license. Model fitting via convenient and powerful software without model understanding is the intellectual plague of our time and it is spreading relentlessly.

  2. hundalhh

    Mark,
    I was looking at “Causal Theories: A Categorical Perspective on Bayesian Networks” as a way of dealing with uncertainty (“probably equal”, “usually close”, or PAC), but I don’t yet understand monoidal categories, so I think I have to learn that first.

  3. hundalhh

    Michael Maloney wrote “One aspect of category theory that deserves mentioning is that categories (and higher categories) give a much more satisfying answer to ‘when are two things the same?’ In set theory, equality ends up too strict. You cannot distinguish the circle group and the group of unit complex numbers through group homomorphisms….” as a comment on the “Categories, What’s the Point?” post at Jeremy Kun’s blog.
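    If I am reading the example right, the point can be spelled out like this: the circle group $\mathbb{R}/\mathbb{Z}$ and the group of unit complex numbers $U(1) = \{z \in \mathbb{C} : |z| = 1\}$ are isomorphic via

    $$t \mapsto e^{2\pi i t},$$

    so no group homomorphism can tell them apart, even though as sets they are plainly not equal. Taking “isomorphic in the category of groups” as the notion of sameness, rather than set-theoretic equality, seems to be the more satisfying answer Maloney has in mind.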
