relation for a language, the syntactic monoid, aperiodic monoids, and groups.

-- BarrinG - 2012-02-27

Monoids and groups are two fundamental algebraic structures that can be used to describe automata theory.

A **monoid**, denoted (M, +) and usually abbreviated to just M, is a set M and a binary operation + where

- The operation + is associative: for all *a*, *b*, *c* in M, *a* + (*b* + *c*) = (*a* + *b*) + *c*.
- The set M contains an identity *e*: for all *a* in M, *a* + *e* = *e* + *a* = *a*.
- The operation + is closed on M: for all *a*, *b* in M, *a* + *b* is an element of M.

A **group** is a monoid in which every element *a* also has an inverse *a*^{-1} satisfying *a* + *a*^{-1} = *a*^{-1} + *a* = *e*.

Not all monoids or groups have commutative operations, and their underlying sets may be finite or infinite, in which case the structures themselves are referred to as **finite** or **infinite**, respectively. A monoid with a commutative operation is called a **commutative monoid**, and a group with a commutative operation is called an **Abelian group**. All groups are monoids, but not all monoids are groups. An example of an infinite noncommutative monoid is the set of finite strings over an alphabet Σ with at least two letters, with concatenation as the operation and the empty string as the identity, usually denoted Σ*. An example of a finite Abelian group is the integers modulo n with the addition operation, usually denoted Z_{n}.
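These definitions can be spot-checked mechanically. Below is a minimal sketch (the helper `check_monoid` and the choice of Z_4 are our own, for illustration) that verifies the three monoid laws for Z_4 under addition mod 4 and shows that string concatenation is noncommutative:

```python
def check_monoid(elements, op, identity):
    """Spot-check the three monoid laws on a finite carrier set."""
    for a in elements:
        # identity law: a + e = e + a = a
        assert op(a, identity) == a and op(identity, a) == a
        for b in elements:
            # closure: a + b stays in the set
            assert op(a, b) in elements
            for c in elements:
                # associativity: a + (b + c) = (a + b) + c
                assert op(a, op(b, c)) == op(op(a, b), c)

# Z_4 under addition mod 4: a finite Abelian group (so in particular a monoid).
check_monoid({0, 1, 2, 3}, lambda a, b: (a + b) % 4, 0)

# Strings under concatenation: an infinite monoid, noncommutative for |Σ| >= 2.
assert "ab" + "ba" != "ba" + "ab"
```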

A monoid M has a **submonoid** N if there exists some set N where N ⊆ M and N is a monoid under the operation of M. A **subgroup** is the same concept, where M and N are groups and N preserves the group law of M. A submonoid or subgroup is called **proper** if N is a proper subset of M. It follows directly from the definitions of monoids and groups that the identity element of M must be in N, so every group contains the **trivial group**, the group consisting of the identity element alone, as a subgroup (and every monoid likewise contains the trivial monoid). An example of a proper finite subgroup of Z_{4} is the set { 0 (mod 4), 2 (mod 4) }, which is isomorphic to Z_{2} by mapping 0 (mod 4) to 0 (mod 2) and 2 (mod 4) to 1 (mod 2).
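As a quick sanity check of the Z_4 example (the code and the explicit isomorphism dictionary are ours):

```python
# The subset {0, 2} of Z_4: closed under + mod 4 and contains the identity,
# so it is a subgroup; the map 0 -> 0, 2 -> 1 is an isomorphism onto Z_2.
N = {0, 2}
assert 0 in N                                          # identity of Z_4 is in N
assert all((a + b) % 4 in N for a in N for b in N)     # closure under + mod 4

iso = {0: 0, 2: 1}
# The isomorphism respects the operation: iso(a + b mod 4) = iso(a) + iso(b) mod 2.
assert all(iso[(a + b) % 4] == (iso[a] + iso[b]) % 2 for a in N for b in N)
```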

A **monoid homomorphism** is a function f : (M_{1}, +) → (M_{2}, x) where

- f(*e*_{M1}) = *e*_{M2}
- f(*a* + *b*) = f(*a*) x f(*b*) for all *a*, *b* in M_{1}.
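A concrete homomorphism, as a hedged sketch (the example is ours): string length maps the monoid (Σ*, concatenation) to the monoid of natural numbers under addition, sending the identity to the identity and concatenation to addition:

```python
# f = len is a monoid homomorphism from (strings, concatenation) to (naturals, +):
f = len

assert f("") == 0                    # f(e_M1) = e_M2: empty string -> 0
a, b = "foo", "quux"
assert f(a + b) == f(a) + f(b)       # f(a + b) = f(a) x f(b)
```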

A canonical example of a finite group is the **permutation group on n elements**, denoted S_{n}: the set of all bijections from {1, ..., n} to itself, with function composition as the group operation. Its elements are usually written in **cycle notation**. For example, the bijection in S_{7} given by

1 2 3 4 5 6 7

↓

3 1 5 4 2 7 6

maps 1→3, 3→5, 5→2, and 2→1, which completes a cycle 1→3→5→2→1, denoted (1352). Any rotation of a cycle **preserving its cyclic ordering is equivalent**; for example, (1352) is equivalent to (3521), (5213), and (2135), because they denote the same cycle with a different first element. Similarly, the bijection above contains the cycles (4) and (67), so it is written (1352)(4)(67) in cycle notation. Single-element cycles are omitted and implied if absent, so in practice this bijection would really be denoted (1352)(67). Notice that (1352)(67) is a distinct element from (1352) and (67), which are also distinct from each other; the first is the composition of the functions denoted by the second and third.

Another interesting property of cycle notation is that **cycles on disjoint elements commute**. That is, (1352)(67) is equivalent to (67)(1352). It is easy to see that the composition of these functions is the same in either order, since they act on no shared element, and any element outside a cycle is mapped to itself by that cycle. However, cycles with elements that are not disjoint do not necessarily commute. For example, (123)(32) = (13), while (32)(123) = (12). It is also easy to compute the inverse of an element, because **the inverse of every cycle is the same elements with their ordering reversed**. For example, the inverse of (123) is (321), and the inverse of (123)(45) is (54)(321). Since the inverse of a bijection simply reverses its mappings, reversing the cycle ordering turns the denoted permutation into its inverse.
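The cycle computations above can be replayed in code. This sketch (the helper `perm_from_cycles` is ours) applies the leftmost cycle first, matching the convention used in the examples:

```python
def perm_from_cycles(cycles, n):
    """Build the permutation of {1..n} denoted by a list of cycles,
    applying the leftmost cycle first."""
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        step = {cyc[i]: cyc[(i + 1) % len(cyc)] for i in range(len(cyc))}
        # compose: first do what p does so far, then apply this cycle
        p = {i: step.get(p[i], p[i]) for i in p}
    return p

# Non-disjoint cycles need not commute: (123)(32) = (13), while (32)(123) = (12).
assert perm_from_cycles([(1, 2, 3), (3, 2)], 3) == {1: 3, 2: 2, 3: 1}
assert perm_from_cycles([(3, 2), (1, 2, 3)], 3) == {1: 2, 2: 1, 3: 3}

# Disjoint cycles do commute: (1352)(67) = (67)(1352) in S_7.
assert perm_from_cycles([(1, 3, 5, 2), (6, 7)], 7) == \
       perm_from_cycles([(6, 7), (1, 3, 5, 2)], 7)
```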

Elements of S_{n} also have the property of being **even or odd permutations**, defined by whether they can be written as a composition of an even or odd number, respectively, of **transpositions** (2-cycles). Every element of S_{n} is either even or odd, but not both, and the identity element is even. Composition of permutations follows the parity arithmetic of Z_{2}, which is isomorphic to a subgroup of every S_{n} (except *n* = 1, in which case S_{n} is the trivial group); that is, the composition of two even or two odd permutations is even, and the composition of an odd permutation with an even permutation is odd. The identity element must be even because every element of S_{n} decomposes into an even or odd number of transpositions, and the inverse of a transposition is itself; for any such expression to equal the identity, it must be some term x composed with x^{-1}, and since x^{-1} can be written with the same number of transpositions as x, the whole expression uses an even number of transpositions even if x itself is an odd permutation.

As a result of the parity of permutations in S_{n} we have the **alternating subgroup** of S_{n}, usually denoted A_{n}, which consists of all the even permutations in S_{n}. A_{n} contains the identity, the composition of even permutations always yields even permutations, so the set is closed, and the inverse of an even permutation is also even. For n ≥ 2, the order of A_{n} is exactly half the order of S_{n}, or n!/2.
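The parity facts can be checked by brute force for small n; in this sketch (the inversion-counting helper is our own) the even permutations of {1, ..., 4} number 4!/2 = 12:

```python
from itertools import permutations

def is_even(p):
    """A permutation is even iff it has an even number of inversions,
    which matches the parity of its transposition count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2 == 0

s4 = list(permutations(range(1, 5)))
a4 = [p for p in s4 if is_even(p)]
assert len(s4) == 24 and len(a4) == 12   # |A_4| = 4!/2
assert is_even((1, 2, 3, 4))             # the identity is even
```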

The permutation group is especially important because **every group is isomorphic to a subgroup of the group of permutations of that group's elements** (Cayley's theorem). Or, more formally, ∀G ∃H H ≤ S_{|G|} ∧ G ≅ H. The reason for this is that multiplication by a fixed element of G permutes the elements of G; in other words, multiplication by an element of G is a bijective function from elements of G to elements of G. Explicitly, let f_{g}(x) = gx where g, x ∈ G. Then the set of f_{g} for all g ∈ G is a subgroup of S_{|G|}, since each f_{g} is a bijection (because every element of G has an inverse), and the composition (f_{u}f_{v})(x) = uvx = f_{uv}(x) is well defined in the same manner as in S_{|G|}.
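The embedding in this argument can be made concrete for a small group. A sketch using Z_3 (our own choice of example), where f_g is translation by g:

```python
n = 3

def f(g):
    """f_g as a tuple: position x holds f_g(x) = (g + x) mod n."""
    return tuple((g + x) % n for x in range(n))

def compose(p, q):
    """(p . q)(x) = p(q(x)): apply q first, then p."""
    return tuple(p[q[x]] for x in range(n))

# f_u composed with f_v equals f_{u+v}, mirroring (f_u f_v)(x) = uvx = f_{uv}(x).
for u in range(n):
    for v in range(n):
        assert compose(f(u), f(v)) == f((u + v) % n)
```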

The **two-sided syntactic congruence relation**, denoted ≡_{L} is defined on a language L over an alphabet Σ such that for all *a*, *b* in Σ*, *a* ≡_{L} *b* if and only if for all *x*, *y* in Σ*, *xay* is in L if and only if *xby* is in L. Or, more formally, ∀*a*∀*b* *a* ≡_{L} *b* ↔ (∀*x*∀*y* *xay* ∈ L ↔ *xby* ∈ L). The relation forms equivalence classes in Σ* for each language L. It is easy to see that it is reflexive, as the definition is trivially true if *a* and *b* are the same string, that it is symmetric, as *a* and *b* can be swapped in the definition to obtain an equivalent statement, and transitive, as if ∀*x*∀*y* *xay* ∈ L ↔ *xby* ∈ L ∧ *xby* ∈ L ↔ *xcy* ∈ L, then by logical equivalence to *xby* ∈ L, *xay* ∈ L ↔ *xcy* ∈ L and *a* ≡_{L} *c*.

The ≡_{L} relation on Σ* for some language L forms a monoid, called the **syntactic monoid** and denoted M_{L}; it is finite exactly when L is regular. The elements of M_{L} are the equivalence classes of ≡_{L}, with [*w*] denoting the equivalence class of *w* ∈ Σ*. The identity element of M_{L} is [*ε*]. Its operation is a kind of class concatenation, defined as ∀*x*∀*y* [*x*][*y*] = [*xy*]. It is not immediately clear that this operation is well defined. However, it is the case that for all *a*, *a'*, *b*, *b'*, if *a* ≡_{L} *a'* and *b* ≡_{L} *b'* then *ab* ≡_{L} *a'b'*. By the definition of the ≡_{L} relation, we must show that for all *u*, *v*, *uabv* ∈ L ↔ *ua'b'v* ∈ L. Since *a* ≡_{L} *a'*, taking *x* = *u* and *y* = *bv* gives *uabv* ∈ L ↔ *ua'bv* ∈ L; since *b* ≡_{L} *b'*, taking *x* = *ua'* and *y* = *v* gives *ua'bv* ∈ L ↔ *ua'b'v* ∈ L. The monoid's elements are closed under the operation: the classes are disjoint and exhaustive of Σ*, and for any two strings *a*, *b* their concatenation *ab* must lie in some congruence class, satisfying the third monoid condition. The second condition of the definition of a monoid also holds, that ∀*x*, [*ε*][*x*] = [*x*][*ε*] = [*x*], as *εx* = *xε* = *x* in ordinary string concatenation. It is also clear that the operation is associative, as ∀*x*∀*y*∀*z* [*x*]([*y*][*z*]) = [*xyz*] = ([*x*][*y*])[*z*].
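For a concrete regular language the classes can be listed directly. A sketch (the language choice is ours): for L = strings over {a, b} with an even number of a's, two strings are ≡_L-equivalent exactly when their a-counts agree mod 2, so M_L has the two classes [ε] and [a], and class concatenation behaves like addition in Z_2:

```python
def cls(w):
    """Representative of the ≡_L class of w for L = 'even number of a's':
    the a-count mod 2 determines membership of xwy for every context x, y."""
    return w.count("a") % 2

# Well-definedness of [x][y] = [xy]: the class of xy depends only on the classes.
for x in ["", "a", "ab", "ba", "aa", "bab"]:
    for y in ["", "a", "b", "aba"]:
        assert cls(x + y) == (cls(x) + cls(y)) % 2
```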

The significance of monoids to formal language theory is that **every DFA D (and therefore every regular language L where L = L(D)) has an equivalent finite monoid M that is said to recognize L, and vice versa**. L is recognized by M if there exists a monoid homomorphism f : Σ* → M and some subset X ⊆ M such that for all *w* ∈ Σ*, *w* ∈ L if and only if f(*w*) ∈ X. Or, more formally, (∃M M is a finite monoid ∧ ∃f f : Σ* → M is a monoid homomorphism ∧ ∃X∀*w* *w* ∈ L ↔ f(*w*) ∈ X) ⇔ (∃D D is a DFA ∧ L = L(D) ∧ ∀*w* *w* ∈ L ↔ δ_{D}*(q_{0D}, w) ∈ F_{D}). If we are given a regular language L and its corresponding DFA D, we can construct a monoid M that recognizes it. For each x ∈ Σ* we create an element m_{x} ∈ M that is the function g : Q_{D} → Q_{D} given by the mappings q ↦ δ_{D}*(q, x) for q ∈ Q_{D}. The identity element of M is the identity function, and the operation is function composition, as in S_{n}. Each element of M maps Q_{D} into Q_{D}, and since the DFA is required by its definition to have a total transition function, this set of functions is closed under composition. The f in the definition maps each string in Σ* to the element of M built from its characters; that is, f(w_{1}...w_{n}) = f(w_{1})...f(w_{n}) = m_{w1}...m_{wn}, where m_{w1}...m_{wn} = m_{x} for x = w_{1}...w_{n}. The subset X ⊆ M is the set of those functions that take the initial state to a final state; these are exactly the images of the strings that D accepts.
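The construction can be sketched for a 2-state machine (the DFA below, recognizing "even number of a's" with initial and final state 0, is our own illustrative choice, not from the text):

```python
from functools import reduce
from itertools import product

# delta for a DFA with states {0, 1} over {a, b}; reading 'a' toggles the state.
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}

def m(x):
    """The monoid element m_x: the function on states induced by reading x,
    written as a tuple (image of state 0, image of state 1)."""
    return tuple(reduce(lambda q, c: delta[(q, c)], x, q0) for q0 in (0, 1))

# Composing elements agrees with concatenating strings: m_x then m_y is m_{xy}.
for x, y in product(["", "a", "b", "ab", "aab"], repeat=2):
    assert tuple(m(y)[m(x)[q]] for q in (0, 1)) == m(x + y)

# X = elements whose function takes the initial state 0 to the final state 0.
X = {g for g in {m(w) for w in ["", "a", "b", "ab", "aa", "aab"]} if g[0] == 0}
assert m("aa") in X and m("a") not in X   # exactly the accepted strings map into X
```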

-- AlbristO - 2012-05-08

**Theorem** - Every Element of a Monoid has a Threshold and Period

Every element *a* of a finite monoid has a threshold *t* ≥ 0 and period *q* ≥ 1 such that *a^{t} = a^{t+q}*. That is, ∀*a* ∃*t* ∃*q* : *a^{t} = a^{t+q}*.

**Proof**

If we iterate the operation of the monoid on the same element one more time than the size of the monoid's set, there must be two distinct exponents *m* and *n* such that *a^{m} = a^{n}*, as there are only |M| distinct elements in the monoid, while there are |M|+1 values among the powers *a^{0}*, ..., *a^{|M|}*. If we consider the smallest such pair with *m* < *n*, we can take the threshold *t* = *m* and the period *q* = *n* − *m*, giving *a^{t} = a^{t+q}*.
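The pigeonhole argument above translates directly into code. A sketch using the multiplicative monoid mod 12 (our own example, not from the text):

```python
def threshold_period(a, op, identity):
    """Iterate powers of a until one repeats; the first repeated power
    gives the threshold t, and the gap gives the period q."""
    seen = {}                    # power value -> first exponent where it appeared
    x, k = identity, 0
    while x not in seen:
        seen[x] = k
        x, k = op(x, a), k + 1
    t = seen[x]                  # first repeat: a^t = a^k with t < k
    return t, k - t

mul12 = lambda x, y: (x * y) % 12
# Powers of 2 mod 12 run 1, 2, 4, 8, 4, 8, ... so t = 2 and q = 2.
assert threshold_period(2, mul12, 1) == (2, 2)
```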

**Theorem** - Every Monoid has a Threshold and Period

Every finite monoid has a threshold *T* ≥ 0 and period *Q* ≥ 1 such that iterating the operation of the monoid *T+Q* times on any element of the monoid will yield the same result as iterating the operation *T* times on that same element. In logic, ∃ *T* ∃ *Q* ∀ *a* : *a ^{ T } = a ^{ T + Q }*.

**Proof**

Given, from the previous theorem, that each element of the monoid has a threshold and period, we can take *T* = max_{a}(t); that is, the threshold of the monoid is the largest of the individual thresholds of its elements. A value at least this high is necessary, because any lower value would, for some element of the monoid, be too few iterations of the operation to reach the periodic part of that element's powers. A value this high is sufficient because, for any *T* greater than *t*, a^{T} = a^{T+q}: *T* is just *t* plus some integer *w*, so we can rewrite this as *a^{t+w} = a^{t+w+q}* and apply the previous theorem. The minimum possible *Q* is lcm_{a}(q), the least common multiple of the individual periods, since *Q* must be a multiple of every element's period for *a^{T} = a^{T+Q}* to hold for every *a*.
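As a concrete check (the mod-12 multiplicative monoid is again our own example): take T as the maximum of the element thresholds and Q as the lcm of the element periods, and verify a^T = a^(T+Q) for every element:

```python
from math import lcm

def element_tp(a):
    """Threshold and period of a in the monoid ({0..11}, * mod 12)."""
    seen, x, k = {}, 1, 0
    while x not in seen:
        seen[x] = k
        x, k = (x * a) % 12, k + 1
    return seen[x], k - seen[x]

pairs = [element_tp(a) for a in range(12)]
T = max(t for t, q in pairs)        # monoid threshold: max of element thresholds
Q = lcm(*(q for t, q in pairs))     # monoid period: lcm of element periods

def power(a, n):
    r = 1
    for _ in range(n):
        r = (r * a) % 12
    return r

# a^T = a^(T+Q) holds for every element of the monoid.
assert all(power(a, T) == power(a, T + Q) for a in range(12))
```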

**Definition** - Aperiodic Monoid

A monoid with a period *Q* of 1 is called aperiodic. Iterating the operation of the monoid on any element of the monoid will eventually reach a point where further iterations will not change the element at all, and so no notable periodic behavior occurs.

**Theorem** - Groups in Terms of Monoid Threshold

A monoid is a group (that is, it has inverses) iff its threshold *T* is 0.

**Proof**

If a monoid has a threshold *T* of 0, then ∃ *Q* ∀ *a* : *a^{0} = a^{0+Q}*, or, equivalently, *e* = *a^{Q}*. Then *a* · *a^{Q−1}* = *a^{Q}* = *e*, so *a^{Q−1}* is an inverse of *a*; every element has an inverse, and the monoid is a group.

If a finite monoid is a group, each element, and thus the monoid as a whole, has a threshold of 0. Since the monoid is finite, there must be two equal powers *a^{i}* = *a^{j}* with *i* < *j*; composing both sides with (*a^{−1}*)^{i} gives *e* = *a^{j−i}*, so *a^{0} = a^{0+(j−i)}* and every element has threshold 0.

-- DstubbS - 2012-05-07

**Definition:**

A **groupoid** is a set with a closed binary operation; unlike a monoid, the operation need not be associative and there need not be an identity.

**Theorem:** A language is context-free if and only if it is recognized by a finite groupoid.

**Proof:**

- The language of any finite groupoid is context-free.

We define a homomorphic mapping f from A* to G*.

Let the set F be our accepting set, a subset of G.

Let Y be the set of strings in A* that can be derived from an accepting element of G.

We construct a context free grammar D from our finite groupoid.

D = (V, T, R, S0)

Let the terminal set T of D equal the alphabet set A of G.

Let the variable set V of D equal the set of elements of G, with a start variable S0 added.

We simulate the mapping f from A* to G* by adding rules to R of the form Si --> Ti where Si is an element of V and Ti is the element of T that maps to Si.

We simulate the multiplication over G by adding rules to R of the form Si --> SjSk where Sj x Sk = Si in G (x denotes the binary operation of G).

To account for S0 we add the set of rules S0 --> Si for every Si that exists in F.

Finally we add the rule S0 --> epsilon if epsilon is in Y.

- Every context-free language is the language of a finite groupoid.

Let D = (V, T, R, S0) be a context-free grammar for the language, with:

Variable set V = (S0, ..., Sk)

Terminal set T = (T1, ..., Tn)

Rule set R = (R1, ...)

Start variable S0.

Additionally, let D be an invertible grammar, meaning that if S1 --> SiSj and S2 --> SiSj then S1 = S2.

Let T = the elements of our alphabet A.

Let V = the elements of our groupoid G.

Let rules in R of the form Si --> Ti define the mapping f from A* to G*.

Let rules in R of the form Si --> SjSk give the multiplication table for our groupoid G: for the rule Si --> SjSk we add to the table that Sj x Sk = Si.

S0 is simply an element of G.

The accepting subset F of G is the set of variables in D that the start variable produces.

The language of G, Y, is determined just as in the first part of the proof.
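The notion of recognition in this section can be sketched with a CYK-style evaluator: a string is accepted when some parenthesization of its image multiplies out, via the (possibly non-associative) groupoid table, to an element of F. The two-element groupoid below, its table, and the identity mapping from letters to elements are our own toy choices, not from the text:

```python
from functools import lru_cache

# A toy groupoid G = {a, b} with accepting set F; each letter maps to itself.
table = {("a", "a"): "b", ("a", "b"): "a", ("b", "a"): "a", ("b", "b"): "b"}
F = {"b"}

def accepts(word):
    @lru_cache(maxsize=None)
    def vals(lo, hi):
        """All groupoid values of word[lo:hi] over every parenthesization."""
        if hi - lo == 1:
            return frozenset({word[lo]})
        out = set()
        for mid in range(lo + 1, hi):       # split point, as in CYK
            for x in vals(lo, mid):
                for y in vals(mid, hi):
                    out.add(table[(x, y)])
        return frozenset(out)
    return bool(word) and bool(vals(0, len(word)) & F)

assert accepts("aa")        # a x a = b, which is in F
assert not accepts("a")     # a alone is not accepting
```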

-- CcpoweR - 2012-04-09

Need definitions of abelian group, commutator of two elements, commutator subgroup, solvable group.

Need definition of symmetric group S_n (all permutation of n elements).

Examples of S_1 and S_2 which are abelian, S_3 and S_4 which are solvable but not abelian, and S_5 which is not solvable. (I'll do this in class on 2 April.)

To describe specific elements of S_n, it's useful to have cycle notation. For example, "(1 3 5)(4 2)" is the element of S_5 that takes 1 to 3, 3 to 5, 5 to 1, 4 to 2, and 2 to 4. We can multiply (compose) permutations more easily in cycle notation, by tracing the action of each element through the series of cycles.

If two permutations have the same cycle structure, they are called "conjugate". For example, (1 3 5) (4 2) is conjugate to (3 4 1) (2 5). Permutation s is conjugate to permutation t if and only if there exists a permutation z such that s = ztz^{-1}. (PROVE THIS.) Conjugate permutations come up in the proof of Barrington's Theorem.

-- BarrinG - 2012-04-17

Copyright © 2008-2017 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.
