Why Abstract Algebra?

I have always liked formalisms. They usually take a complex system and describe it through a few simple rules that generate the complexity through their interactions. It might seem that in Mathematics the causality is reversed: we describe the rules first (axioms) and then explore the system. This is misleading, because that’s usually just how the final product is presented. In reality, there are many different formalisms for related systems, and with time each formalism gets polished. Eventually the “best” formalism wins out and becomes the standard formalism for that system.

Set Theory in some ways is the simplest formalism. It formalises the ability to group objects and to describe these groups and their relationships to other groups of objects. After reading up on set theory and related formalisms for a bit, I decided to study other formalized structures. The study of such structures is Abstract Algebra.

After some research, I decided to pick up Abstract Algebra by Judson. The book is open sourced and available online here. Print copies are also available for purchase here.

I recently finished reading the first 6 chapters of the book and, while there are many more chapters on groups, I think these chapters provide a good basic introduction to the topic. So I have decided to summarize what I learned. I am going to try to use more plain English instead of full math notation (at the cost of brevity and precision).



The first abstract structure the book introduces is a group. A group is a set of objects together with a binary operation over that set. The binary operation can take any two objects from the set, and the result must always be in the set. This is also described as the binary operation being closed over the set.

A binary operation can be simply represented as $f(a, b)$ since it is a function that takes two arguments. However, it is more commonly represented using infix notation: $ab$ or $ a \cdot b $ where $\cdot$ represents the operation more explicitly.

Formally, here are the requirements of the binary operation:

  • Associative: $(a \cdot b) \cdot c = a \cdot (b \cdot c)$. It is worth noting how nontrivial this requirement looks in its prefix notation: $f(f(a,b), c) = f(a, f(b,c))$
  • Identity: This means there is an object $I$ in the set such that $x \cdot I = I \cdot x = x$ for all $x$.
  • Inverse: For every $x$ there exists an $x^{-1}$ such that $x \cdot x^{-1} = x^{-1} \cdot x = I$

Note that being commutative (i.e. $a \cdot b = b \cdot a$) is not a requirement. If the binary operation is commutative, the resulting group is called abelian.
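These axioms can be checked mechanically for any finite set. Here is a rough sketch in Python (the helper name `is_group` is my own, not from the book), using two tiny examples: $\{1, -1\}$ under multiplication, and $\{0, 1\}$ under multiplication, which fails because 0 has no inverse.

```python
def is_group(elements, op):
    """Brute-force check of closure, associativity, identity, and inverses."""
    elements = list(elements)
    # Closure: op must map back into the set.
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Associativity: (a.b).c == a.(b.c) for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # Identity: some e with e.x == x.e == x for every x.
    identity = next((e for e in elements
                     if all(op(e, x) == x == op(x, e) for x in elements)), None)
    if identity is None:
        return False
    # Inverses: every x has some y with x.y == y.x == identity.
    return all(any(op(x, y) == identity == op(y, x) for y in elements)
               for x in elements)

print(is_group([1, -1], lambda a, b: a * b))  # True
print(is_group([0, 1], lambda a, b: a * b))   # False: 0 has no inverse
```

This only works for finite sets, of course, but it makes the axioms concrete.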



The simplest examples of groups are the numbers we are familiar with. The set of integers, commonly referred to as $\mathbb{Z}$, is a group when the binary operation is chosen to be addition: addition is associative, the identity is 0, and the inverse of a number is its negative. Similarly, the set of all rational numbers ($\mathbb{Q}$) and the set of all real numbers ($\mathbb{R}$) form groups under addition.


What if we tried to use multiplication as our binary operation over the set of integers? The identity element becomes 1. However, for any integer $n$ other than $\pm 1$, the multiplicative inverse $1/n$ is not an integer. So the set of integers and multiplication do not form a group.

What about the set of all real numbers and multiplication? Now almost all numbers have an inverse, except for 0. There is no number such that $0x = 1$. So this is not a group.

What if we remove the number zero, then can we still form a group? If $x \neq 0$ and $y \neq 0$, then $xy \neq 0$, so multiplication maps elements of this set only to other elements of this set: $\mathbb{R}_{\neq 0}$. Since every remaining number has an inverse, this set with multiplication does form a group.


Modular arithmetic allows numbers to “wrap around” instead of increasing indefinitely. So, for example, $3 + 4 \equiv 2 \pmod 5$. If the numbers “wrap around” at $n$, we can refer to this finite set of numbers as $\mathbb{Z}_n$.

Is $\mathbb{Z}_n$ a group under addition? Addition is closed on the set since any addition operation will always result in a number that is in the set. Addition is associative. 0 is the identity and is in the set. Since $-x \equiv n - x \pmod n$, the inverse for each element in the set also exists in the set. So $\mathbb{Z}_n$ and addition do form a group. One way to visualize this group is by creating a table:
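For the mod-5 example above, such a table can be generated with a short Python sketch (my own code, not from the book):

```python
# Build the Cayley table for Z_5 under addition mod 5.
n = 5
table = [[(r + c) % n for c in range(n)] for r in range(n)]

# Print it with row and column labels.
print("+ | " + " ".join(str(c) for c in range(n)))
for r, row in enumerate(table):
    print(f"{r} | " + " ".join(str(x) for x in row))
```

Each row is the previous row shifted by one, which is the “wrap around” behaviour made visible.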


This table is called a Cayley table. It lets us visualize relationships between all the members of the set. For example, we can easily see that each element has an inverse (0 shows up in each row/column). We can see that this group is commutative/abelian because all rows match their respective columns.

Is $\mathbb{Z}_n$ a group under multiplication? We have the immediate problem of 0 not having an inverse. If we remove 0 from the set, do we have a group? The answer is still no, because not all numbers have a multiplicative inverse in modular arithmetic: it is not guaranteed that there is a number $y$ such that $xy \equiv 1 \pmod n$. Example 3.11 from the book shows that if we instead keep only the elements that do have a multiplicative inverse, the resulting set is a group!
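The idea behind that example can be sketched numerically: the elements of $\mathbb{Z}_n$ with multiplicative inverses are exactly those relatively prime to $n$, and their products stay within that set. A rough Python check, with my own helper name `units`:

```python
from math import gcd

def units(n):
    """Elements of Z_n that have a multiplicative inverse mod n."""
    return [x for x in range(1, n) if gcd(x, n) == 1]

u8 = units(8)
print(u8)  # [1, 3, 5, 7]
# Closure: the product of two units mod 8 is again a unit.
print(all((a * b) % 8 in u8 for a in u8 for b in u8))  # True
```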

Matrix Multiplication

The power of this formalism is that it is generic. The elements of the set do not have to be numbers. The only requirement is that the binary operation can work with those elements.

So if we look at the set of all $2 \times 2$ matrices and use matrix multiplication as our binary operation, do we have a group? The product of two square matrices is another matrix of the same dimensions, so matrix multiplication is closed over this set. The identity matrix becomes the identity element. However, not all matrices are invertible, so not all matrices have an inverse: any matrix with a determinant of 0 has no inverse. So the set of all $2 \times 2$ matrices is not a group.

What about the set of all $2 \times 2$ invertible matrices? Since the product of two invertible matrices is invertible, multiplication is closed over this new set. Since the identity matrix is invertible, it is a member of this set. All the matrices are invertible by definition, so this set does form a group! This group is called the general linear group, written $GL_2(\mathbb{R})$.
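One way to see the closure claim concretely is via determinants: $\det(AB) = \det(A)\det(B)$, so the product of two matrices with nonzero determinants also has a nonzero determinant. A small numeric sketch with hand-rolled $2 \times 2$ helpers (my own names):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]   # det = -1, invertible
B = [[2, 1], [1, 1]]   # det =  1, invertible
AB = matmul2(A, B)
print(det2(AB), det2(A) * det2(B))  # -1 -1: det is multiplicative
```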


How do we know if something is a group or not? The definition of a group gives us the requirements, so we can check each of them to show whether something is a group. This isn’t always easy though.

For example, suppose we are given this Cayley table:


Does this set and operation represent a group? Is the operation associative? We would likely need to do some brute-force computations to verify that, so let’s skip it for now. Does an identity element exist? To find the identity element, we need to find an element whose row and column map each element to itself. In this case, $a$ fits, so it must be the identity element.

Does each element have an inverse? To show that each element has an inverse, we need to show that each row and each column has a cell containing $a$ (the identity element). Each row and column does contain $a$ somewhere, so each element does have an inverse!

That leaves us with just verifying associativity, which is tedious to do directly. Instead, it is better to apply a simpler rule of thumb: in a group, each row and column of the table must be a permutation of the set. This means every element of the set must appear in each column and in each row. (This condition is necessary but does not by itself prove associativity, so it can only rule tables out.)

Since multiple rows and columns in the example above are not permutations of the set, it is not a group! We can verify this more explicitly by seeing that

$$ d \circ b = c = d \circ d $$ $$ b = d^{-1} \circ c = d $$ $$ b = d$$

Since $b = d$, they would not be distinct elements, and we would need $x \circ b = x \circ d$ for all $x$. The table above contradicts this, so the binary operation is not well formed, as shown by this contradiction.
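The permutation rule of thumb is easy to check mechanically. A sketch in Python (the helper name `is_latin_square` is my own; tables with this property are called Latin squares):

```python
def is_latin_square(elements, table):
    """Every row and every column must be a permutation of the element set."""
    target = sorted(elements)
    rows_ok = all(sorted(row) == target for row in table)
    cols_ok = all(sorted(col) == target for col in zip(*table))
    return rows_ok and cols_ok

# Z_4 under addition mod 4 passes...
z4 = [[(r + c) % 4 for c in range(4)] for r in range(4)]
print(is_latin_square(range(4), z4))  # True
# ...but a table with a repeated entry in a row fails.
bad = [[0, 1], [1, 1]]
print(is_latin_square(range(2), bad))  # False
```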


Some other basic properties of groups worth noting:

  • There is exactly one identity element. Our rule of thumb from above reinforces this.
  • Each element has exactly one inverse. Our rule of thumb from above reinforces this.
  • $(ab)^{-1} = b^{-1}a^{-1}$.
  • $|G|$ for a group $G$ is called the order of $G$ and represents the number of elements in the group.


A subgroup is a subset of a group that is itself a group under the same binary operation.

So, for example, $\mathbb{Z}_n$ is not a subgroup of $\mathbb{Z}$ because they don’t use the same binary operation: addition and modular addition are different operations. However, the set of all multiples of 3, $\{\ldots, -6, -3, 0, 3, 6, \ldots\}$, is a subgroup of $\mathbb{Z}$. It is closed under ordinary addition, the addition is associative, the identity element is in the set, and each element’s inverse is also in the set.

In general, to show that a subset $H$ of a group $G$ is a subgroup, it is sufficient to show:

  • The identity element of $G$ is in $H$
  • The binary operation of $G$ is closed on $H$: for any $h_1$ and $h_2$ from $H$, $h_1 h_2 \in H$
  • The inverse of any element in $H$ is in $H$.
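For a finite group, these three checks can be brute-forced. A sketch using $\mathbb{Z}_6$ under addition and the subset $\{0, 2, 4\}$ (my own example, not from the book; the helper name `is_subgroup` is mine):

```python
def is_subgroup(h, op, identity):
    """Brute-force subgroup test for a finite subset h."""
    if identity not in h:
        return False
    # Closure of the operation on H.
    if any(op(a, b) not in h for a in h for b in h):
        return False
    # Every element's inverse must also lie in H.
    return all(any(op(a, b) == identity for b in h) for a in h)

z6_add = lambda a, b: (a + b) % 6
print(is_subgroup({0, 2, 4}, z6_add, 0))  # True
print(is_subgroup({0, 1, 2}, z6_add, 0))  # False: 1 + 2 = 3 is not in H
```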

Cyclic Groups

For element $a$ in group $G$, what happens if the binary operation is applied repeatedly to $a$?

$$a \cdot a = a^{2}$$ $$a \cdot a^2 = a^{3}$$ $$…$$

Note that if $\cdot$ represents addition, $a^n$ is more often written as $n a$.

Let $\langle a\rangle$ be the set of all elements of the form $a^n$ for all $n \in \mathbb{Z}$. $\langle a \rangle$ is a subgroup of $G$ since:

  • $a^{0}$ is the identity element
  • $a^ja^k = a^{j+k}$ is in $\langle a \rangle$
  • For any $a^n$, $a^{-n}$ is the inverse element and it exists in $\langle a \rangle$

$\langle a \rangle$ is called a cyclic subgroup and the element $a$ is known as the generator of $\langle a \rangle$. Additionally, $\langle a \rangle$ is the smallest subgroup containing $a$. If for some $a$, $\langle a \rangle = G$, then $G$ is called a cyclic group. If $G$ is a cyclic group, then all of its subgroups are also cyclic.

If the number of elements in $\langle a \rangle$ is finite, then repeated application of $a$ to an element will “cycle” through the finite elements. $|a|$ is called the order of $a$ and is equal to the smallest $n$ such that $a^n = I$. Note that this is consistent with our previous definition of order, since $|a| = |\langle a \rangle|$: the length of the cycle is the same as the number of elements in the set. For infinite sets, the order is infinite.

For example, for $\mathbb{Z}$, 1 is a generator and $\langle 1 \rangle = \mathbb{Z}$.

$$ … $$ $$ -3 + 1 = -2 $$ $$ -2 + 1 = -1 $$ $$ -1 + 1 = 0 $$ $$ 0 + 1 = 1 $$ $$ 1 + 1 = 2 $$ $$ … $$

The order of $1$ is infinite, as is the size of the set.

A group can have multiple generators. For example, for any $\mathbb{Z}_n$, all numbers less than $n$ that are relatively prime to $n$ are generators of $\mathbb{Z}_n$. Both 1 and 3 are relatively prime to 4, and thus $ \mathbb{Z}_4 = \langle 1 \rangle = \langle 3 \rangle $.
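This is easy to experiment with in code. A sketch computing $\langle a \rangle$ in $\mathbb{Z}_n$ under addition (the helper name `generated` is mine):

```python
def generated(a, n):
    """<a> in Z_n under addition: keep adding a until we cycle back to 0."""
    seen, x = [], 0
    while x not in seen:
        seen.append(x)
        x = (x + a) % n
    return set(seen)

print(generated(1, 4))  # {0, 1, 2, 3}
print(generated(3, 4))  # {0, 1, 2, 3}: 3 is also a generator
print(generated(2, 4))  # {0, 2}: 2 only generates a proper subgroup
```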

Conversely, not all groups have a generator. When a group doesn’t have a generator, it is not a cyclic group. Every cyclic group is abelian, so every non-abelian group is not cyclic. The group of invertible matrices, for example, has no generator since it is non-abelian.

Cosets and Lagrange’s Theorem

For a group $G$, subgroup $H$ and for each element $g \in G$, a coset is defined as:

$$gH = \{ gh : h \in H \}$$

This means that a coset is the set of all elements resulting from multiplying $g$ with each element of $H$. It is easy to imagine that each $g$ will result in a very different coset. However, it turns out that many different elements $g$ generate the same coset.

As a matter of fact, if two cosets share any element, then they are the same coset! This means that every element $x \in G$ lies in exactly one coset of $H$.

Since $gh_1 = gh_2$ only when $h_1 = h_2$, we know that $gH$ has no duplicates. This means the size of the coset $gH$ is the same as that of the subgroup $H$: $|gH| = |H|$. Since every element of $G$ is in exactly one coset, the cosets partition $G$ into pieces of equal size $|H|$, so the total number of cosets must be ${|G|}/{|H|}$. The number of cosets of $H$ is also called the index of $H$ and written as $[G:H]$. So the previous statement can be written as:

$$ |G| = [G:H]|H|$$ This relationship is known as Lagrange’s theorem.
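A quick numeric illustration (my own example): the cosets of $H = \{0, 3\}$ in $\mathbb{Z}_6$ under addition partition the group into $[G:H] = 3$ cosets of size $|H| = 2$.

```python
# Cosets of H = {0, 3} in Z_6 under addition mod 6.
G = set(range(6))
H = {0, 3}
cosets = {frozenset((g + h) % 6 for h in H) for g in G}

print(sorted(sorted(c) for c in cosets))  # [[0, 3], [1, 4], [2, 5]]
print(len(G) == len(cosets) * len(H))     # True: 6 = 3 * 2, as Lagrange says
```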

Left/Right Coset

$gH$ technically is called the left coset and $Hg = \{hg:h \in H\}$ is the right coset. If $G$ is not abelian/commutative, then the left cosets and right cosets can be different. However, all the statements in the previous section apply equally to the set of left cosets and the set of right cosets.

Application of Lagrange’s Theorem

Lagrange’s theorem allows us to relate the size of a group, subgroup and the number of cosets of the subgroup. How is that useful?

Suppose we have a group with a prime number of elements, so $|G| = p$. Since $p$ only has factors $p$ and $1$, we know that $|H|$ can only be $p$ or $1$ for any subgroup $H$. When $|H|$ is 1, the subgroup contains only the identity element. When $|H|$ is $p$, it contains all the elements of $G$. In particular, for any non-identity element $a$, the cyclic subgroup $\langle a \rangle$ has more than one element, so it must be all of $G$. This shows that $G$ is cyclic!
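A small sketch of this for $\mathbb{Z}_5$ (addition mod 5, $p = 5$ prime): every non-identity element generates the whole group.

```python
# In Z_5, repeatedly adding any non-identity element a reaches every
# element, i.e. <a> = G for all a != 0.
n = 5  # a prime
subgroups = {a: sorted({(a * k) % n for k in range(n)}) for a in range(1, n)}
for a, s in subgroups.items():
    print(a, s)  # each line prints the full set [0, 1, 2, 3, 4]
```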

So using Lagrange’s theorem we were able to deduce the number of subgroups and a core property of the group just from knowing the number of elements in the group.

Section 6.3 of the book does something even cooler. It applies Lagrange’s theorem to a specific group called the “group of units”. Through this it derives Fermat’s Little Theorem, which seemingly has nothing to do with groups: $a^{p-1} \equiv 1 \pmod p$ where $p$ is some prime and $a$ is some integer that is not divisible by $p$.
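Fermat’s Little Theorem is easy to spot-check numerically (my own sanity check, not the book’s derivation):

```python
# Check a^(p-1) = 1 (mod p) for every a in 1..p-1, with p = 7 prime.
# pow(a, e, m) is Python's built-in modular exponentiation.
p = 7
print(all(pow(a, p - 1, p) == 1 for a in range(1, p)))  # True
```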


I think Lagrange’s theorem was a good midpoint for this post. Clearly all the previous chapters had been building up to get to this point. I thought it was a very satisfying journey, particularly how Fermat’s Little Theorem was derived from these ideas.

There are still 10 more chapters about groups in this book, so I know there are a lot more basics for me to learn. I am excited to see what other structures can be derived from the relatively simple requirements that groups have. Maybe I will do another write-up when I finish the remaining chapters about groups.