A monoid is a set with an identity element and an associative multiplication operation. We present two formal definitions of this concept, namely the classical one in Definition 1, and a slightly nonstandard one in Definition 2.
Definition 1. A monoid consists of a universe $M$ together with an identity element $1 \in M$ and a multiplication operation
\[ (a, b) \in M \times M \quad \mapsto \quad a \cdot b \in M, \]
denoted multiplicatively, which is associative in the sense that
\[ (a \cdot b) \cdot c = a \cdot (b \cdot c) \]
holds for every $a, b, c \in M$, and for which $1$ is an identity, i.e. $1 \cdot a = a = a \cdot 1$ holds for every $a \in M$.
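To make Definition 1 concrete, here is a small Haskell sketch of the same data; the class name MyMonoid, the identity unit and the operator <.> are ad hoc choices for this sketch, and Haskell's standard Semigroup and Monoid classes play exactly the same role.

```haskell
-- Definition 1 as a Haskell type class: a universe m, an identity
-- element, and a binary multiplication. The laws are not checked by the
-- compiler; an instance is expected to satisfy, for all a, b, c:
--   unit <.> a == a,   a <.> unit == a,   (a <.> b) <.> c == a <.> (b <.> c)
class MyMonoid m where
  unit  :: m            -- the identity element
  (<.>) :: m -> m -> m  -- the multiplication, written infix

-- Example instance: the integers with addition.
instance MyMonoid Integer where
  unit  = 0
  (<.>) = (+)
```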
The definition says that the order of evaluation in a monoid is not important, i.e. that different ways of parenthesising a sequence of elements in the monoid will yield the same result as far as monoid multiplication is concerned. This is made explicit in the following fact, which is implicitly presented to students already in primary school (at least this was the case for me).
Fact 1. Let $M$ be a monoid, and let $a_1, \ldots, a_n$ be a sequence of elements in $M$. For any possible way of parenthesising the sequence and multiplying it in the monoid, the result is the same.
Proof. A parenthesisation can be viewed as a binary tree, where the leaves are labelled by $a_1, \ldots, a_n$ when read from left to right. The associativity equation in Definition 1 says that doing a rotation does not affect the value of a tree. Every two trees that have the same sequence of leaves can be transformed into each other using a finite sequence of rotations.
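Fact 1 can also be checked experimentally. The following sketch compares the two extreme parenthesisations, all brackets to the right versus all brackets to the left, using Haskell's built-in Monoid class, whose laws are those of Definition 1; the names bracketRight and bracketLeft are ad hoc.

```haskell
import Data.Monoid (Sum (..))

-- Two extreme parenthesisations of a sequence of monoid elements:
-- foldr puts all brackets to the right, foldl puts them all to the left.
bracketRight, bracketLeft :: Monoid m => [m] -> m
bracketRight = foldr (<>) mempty   -- a1 <> (a2 <> (... <> mempty))
bracketLeft  = foldl (<>) mempty   -- ((mempty <> a1) <> a2) <> ...

-- By Fact 1 the two agree; one example in the additive monoid of integers:
main :: IO ()
main = print (bracketRight xs == bracketLeft xs)  -- True
  where
    xs = map Sum [1 .. 10 :: Int]
```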
In light of the above proof, the definition of associativity in Definition 1 might seem a little bit of a hack. An alternative definition is presented below. The key is that the multiplication operation is no longer binary, but takes in an arbitrary word. This allows a new statement of the associativity condition, in the form of a commuting diagram. The new statement, easily seen to be equivalent to the classical definition in Definition 1, will be a basis for further generalisations later in the lecture. Also because of these later generalisations, we write some of the subsequent definitions, like homomorphisms, in terms of commuting diagrams.
Definition 2. A monoid consists of a universe $M$ together with a multiplication operation
\[ \mathrm{mul} : M^* \to M \]
which maps each one letter word $a$ to $a$, and which is associative in the sense that the following diagram commutes
\[
\begin{array}{ccc}
(M^*)^* & \xrightarrow{\ \mathrm{flatten}\ } & M^* \\
{\scriptstyle \mathrm{mul}^*}\downarrow & & \downarrow{\scriptstyle \mathrm{mul}} \\
M^* & \xrightarrow{\ \ \mathrm{mul}\ \ } & M
\end{array}
\]
where $\mathrm{flatten}$ flattens a word of words into a word in the obvious way, while $\mathrm{mul}^*$ applies the multiplication operation to every word in a word of words.
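In Haskell terms, with lists playing the role of words, the multiplication of Definition 2 corresponds to the standard library function mconcat :: Monoid m => [m] -> m, and the two conditions read mconcat [a] == a and mconcat . map mconcat == mconcat . concat. This is only a sketch, but it can be tested on examples:

```haskell
import Data.Monoid (Sum (..))

-- Definition 2, with lists as words: the multiplication takes a whole word
-- of monoid elements. The associativity square says that flattening a word
-- of words and then multiplying is the same as multiplying the inner words
-- first and then multiplying the results:
--
--   mconcat [a]                ==  a                       -- one-letter words
--   mconcat (map mconcat xss)  ==  mconcat (concat xss)    -- the square
--
-- A sanity check of the second condition on one input:
main :: IO ()
main = print (mconcat (map mconcat xss) == mconcat (concat xss))  -- True
  where
    xss = [map Sum [1, 2], [], map Sum [3, 4, 5 :: Int]]
```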
Fact. The two definitions are equivalent.
Proof. From Definition 1 one obtains Definition 2 by multiplying out a word in any parenthesisation, which is well defined by Fact 1; conversely, restricting the word multiplication to two-letter words, and taking the multiplication of the empty word as the identity, recovers Definition 1.
Example 1. If $\Sigma$ is a set, then $\Sigma^*$, the set of finite words over $\Sigma$, is a monoid, where the identity is the empty word, and concatenation is the monoid operation.
Monoid homomorphisms
A monoid homomorphism is a function between monoids that preserves the structure of monoids, i.e. a monoid homomorphism between monoids $M$ and $N$ is a function
\[ h : M \to N \]
which preserves the identity element, i.e. $h(1_M) = 1_N$, and which is consistent with the multiplication operation in the sense that
\[ h(a \cdot b) = h(a) \cdot h(b) \qquad \text{for every } a, b \in M. \]
An alternative definition of a homomorphism would be that the following diagram commutes
\[
\begin{array}{ccc}
M^* & \xrightarrow{\ h^*\ } & N^* \\
\downarrow & & \downarrow \\
M & \xrightarrow{\ \ h\ \ } & N
\end{array}
\]
with the two downward arrows representing the multiplication operations (in the sense of Definition 2) of the respective monoids, and $h^*$ applying $h$ to every letter of a word.
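For a concrete example of a homomorphism (chosen for illustration, not part of the definitions above): the length function, viewed as a map from the monoid of words with concatenation to the additive monoid of integers.

```haskell
import Data.Monoid (Sum (..))

-- length as a monoid homomorphism from words over Char to (Int, +, 0):
-- it sends the empty word to 0 and concatenation to addition, i.e.
--   h mempty   == mempty
--   h (u <> v) == h u <> h v
h :: String -> Sum Int
h = Sum . length

main :: IO ()
main = print (h ("abc" <> "de") == h "abc" <> h "de")  -- True
```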
Using monoid homomorphisms, we can express the following universal property of the monoid $\Sigma^*$. It is generated by $\Sigma$, and it is the biggest monoid generated by $\Sigma$ in the following sense. For every monoid $M$ that is generated by $\Sigma$, there exists a surjective monoid homomorphism
\[ h : \Sigma^* \to M \]
which is the identity on the generators. This property goes under the name "$\Sigma^*$ is the free monoid with generators $\Sigma$".
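In code, the universal property is the familiar fold-and-multiply pattern: a function from the generators into a monoid extends to a homomorphism on words by applying it letter by letter and multiplying out. The sketch below uses Haskell lists as words; the name extend is ad hoc, and the function coincides with foldMap from the standard library.

```haskell
import Data.Monoid (Sum (..))

-- The homomorphism from words over the generators a that extends a
-- function f into a monoid m: apply f letterwise, then multiply out.
extend :: Monoid m => (a -> m) -> [a] -> m
extend f = mconcat . map f

-- Example: extending the constant function (\_ -> Sum 1) gives the
-- length homomorphism from the previous example.
main :: IO ()
main = print (getSum (extend (\_ -> Sum (1 :: Int)) "hello"))  -- 5
```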
Monoid homomorphisms are closely related to functions that are compositional in the sense defined below. Let $M$ be a monoid, and let $X$ be a set. A function
\[ h : M \to X \]
is called compositional if for every $a, b \in M$, the value $h(a \cdot b)$ is uniquely determined by the values $h(a)$ and $h(b)$. Again, compositionality can be stated in terms of a commuting diagram: $h$ is compositional if there is some function $g : X \times X \to X$ such that the following diagram commutes:
\[
\begin{array}{ccc}
M \times M & \xrightarrow{\ h \times h\ } & X \times X \\
\downarrow & & \downarrow{\scriptstyle\, g} \\
M & \xrightarrow{\ \ h\ \ } & X
\end{array}
\]
with the left downward arrow being the multiplication operation of $M$.
Lemma. Let $M$ be a monoid, and let $h : M \to X$ be a surjective compositional function. Then there exists a (unique) monoid structure on $X$ which makes $h$ into a monoid homomorphism.
Proof. Saying that $h(a \cdot b)$ is uniquely determined by $h(a)$ and $h(b)$, as in the definition of compositionality, means that there is a function
\[ g : X \times X \to X \]
such that all $a, b \in M$ satisfy
\[ h(a \cdot b) = g(h(a), h(b)). \]
One takes $g$ to be the multiplication operation in $X$, and takes the identity to be the image under $h$ of the identity in $M$. Surjectivity of $h$ is then used to check that this multiplication is associative and that the identity laws hold.
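Here is the construction of the Lemma on one small example; the names h and g are ad hoc. The function sending a word to its length modulo 3 is compositional, and the induced monoid structure on its image {0, 1, 2} is addition modulo 3.

```haskell
-- A compositional function: word length modulo 3. The value h (u ++ v)
-- is determined by h u and h v, via the function g below.
h :: String -> Int
h w = length w `mod` 3

g :: Int -> Int -> Int
g x y = (x + y) `mod` 3

-- The Lemma equips the image {0, 1, 2} with the monoid structure
-- (multiplication g, identity h "" = 0), making h a homomorphism.
main :: IO ()
main = print (and [ h (u ++ v) == g (h u) (h v) | u <- ws, v <- ws ])  -- True
  where
    ws = ["", "a", "ab", "abc", "abcd"]
```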
Recognising languages
In this lecture, we will be interested in monoids as an alternative to finite automata, i.e. we will use monoids to recognise languages.
Definition. A language $L \subseteq \Sigma^*$ is said to be recognised by a monoid homomorphism
\[ h : \Sigma^* \to M \]
if membership in $L$ is determined uniquely by $h$. In other words, there exists a subset $F \subseteq M$ such that
\[ w \in L \qquad \text{iff} \qquad h(w) \in F \]
holds for every word $w \in \Sigma^*$.
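For illustration: the language of words over {a, b} with an even number of a's is recognised by a homomorphism into the two-element monoid $\mathbb{Z}_2$, with the accepting subset consisting of the identity alone. A Haskell sketch (the names Parity, h and inLanguage are ad hoc):

```haskell
-- The two-element monoid Z_2, coded on Bool: multiplication is addition
-- modulo 2 (xor), the identity is False.
newtype Parity = Parity Bool deriving (Eq, Show)

instance Semigroup Parity where
  Parity x <> Parity y = Parity (x /= y)

instance Monoid Parity where
  mempty = Parity False

-- The homomorphism sends a word to the parity of its number of a's;
-- the language "even number of a's" is recognised with accepting set
-- {Parity False}.
h :: String -> Parity
h = mconcat . map (\c -> Parity (c == 'a'))

inLanguage :: String -> Bool
inLanguage w = h w == Parity False

main :: IO ()
main = print (map inLanguage ["", "ab", "aab", "aaab"])  -- [True,False,True,False]
```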
The following theorem shows that, as far as recognising languages is concerned, finite monoids and finite automata are equivalent.
Theorem 1. The following conditions are equivalent for every language $L \subseteq \Sigma^*$:
• $L$ is recognised by a finite automaton;
• $L$ is recognised by a monoid homomorphism into a finite monoid.
Proof. From a monoid one creates a deterministic automaton, whose states are the elements of the monoid, and whose transition function is obtained from the monoid operation in the obvious way (if the language is recognised by $h : \Sigma^* \to M$ with accepting set $F$, then the automaton starts in the identity, multiplies its state on the right by $h(a)$ when reading a letter $a$, and accepts in the states from $F$). This proves the bottom-up implication. For the top-down implication, let us transform a nondeterministic automaton into a monoid. Let $\mathcal{A}$ be a nondeterministic automaton with input alphabet $\Sigma$ and states $Q$. Define the function
\[ h : \Sigma^* \to \mathsf{P}(Q \times Q) \]
which sends a word $w$ to the set of pairs $(p, q)$ such that there exists a run of the automaton which begins in $p$, reads the word $w$, and ends in $q$. One can show that $h$ is compositional, and therefore there is a monoid structure on its image which makes $h$ into a monoid homomorphism recognising the language of $\mathcal{A}$. The appropriate multiplication operation is simply composition of relations, while the identity is the identity relation.
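The construction from the proof can be sketched in code on a tiny example; the automaton below (two states, accepting exactly the words that end in the letter a) and the names delta, compose, identityRel and rel are ad hoc.

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- The value of a word is the set of pairs (p, q) such that the word can
-- take the automaton from state p to state q.
type State = Int
type Rel   = Set (State, State)

states :: [State]
states = [0, 1]

-- Transition relation of a single letter: state 0 loops on every letter
-- and can additionally move to the accepting state 1 on the letter a.
delta :: Char -> Rel
delta 'a' = Set.fromList [(0, 0), (0, 1)]
delta _   = Set.fromList [(0, 0)]

-- Composition of relations: the monoid multiplication.
compose :: Rel -> Rel -> Rel
compose r s = Set.fromList
  [ (p, q) | (p, x) <- Set.toList r, (y, q) <- Set.toList s, x == y ]

-- The identity relation: the monoid identity.
identityRel :: Rel
identityRel = Set.fromList [ (p, p) | p <- states ]

-- The compositional function from words to relations.
rel :: String -> Rel
rel = foldr (\c r -> compose (delta c) r) identityRel

main :: IO ()
main = print (rel "ab" == compose (rel "a") (rel "b"))  -- True
```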
Note that the transformation from a nondeterministic finite automaton to a monoid incurs an exponential blowup: the monoid constructed above has at most $2^{|Q|^2}$ elements. The transformation from a nondeterministic to a deterministic finite automaton is also exponential, but going directly from a nondeterministic finite automaton to a monoid, as above, avoids the doubly exponential blowup that would result from first determinising and then building a monoid from the deterministic automaton.
The syntactic monoid
Deterministic finite automata admit minimisation, i.e. there is always a minimal deterministic automaton that can be obtained from any other deterministic automaton for the same language by quotienting. The same holds for monoids, as proved in the following theorem.
Theorem 2. For every language $L \subseteq \Sigma^*$ there exists a monoid homomorphism
\[ h : \Sigma^* \to M_L, \]
called the syntactic homomorphism of $L$, which recognises it and is minimal in the sense that for every surjective monoid homomorphism
\[ g : \Sigma^* \to N \]
which also recognises $L$, there exists a unique monoid homomorphism $f : N \to M_L$ which makes the following diagram commute
\[
\begin{array}{ccc}
\Sigma^* & \xrightarrow{\ g\ } & N \\
 & {\scriptstyle h}\searrow & \downarrow{\scriptstyle f} \\
 & & M_L
\end{array}
\]
i.e. such that $h = f \circ g$.
Proof. Define the syntactic equivalence of $L$ to be the equivalence relation $\equiv_L$ on $\Sigma^*$ which identifies two words $w, w'$ if
\[ u w v \in L \qquad \text{iff} \qquad u w' v \in L \]
holds for every $u, v \in \Sigma^*$. One can show that the function, call it $h$, that maps a word to its equivalence class is compositional (and surjective), and therefore, by the Lemma above, the set of equivalence classes is equipped with a monoid structure that makes $h$ into a monoid homomorphism. It is not difficult to check that $h$ recognises $L$ and is minimal as required in the theorem.
Exercise. Prove that surjectivity is important in the above theorem.