42
$\begingroup$

What are some applications of abstract algebra in computer science an undergraduate could begin exploring after a first course?

Gallian's text goes into Hamming distance, coding theory, etc., and I vaguely recall seeing discussions of abstract algebra in the theory of computation / automata theory. What else is out there? I'm not familiar with any applications beyond these.

Bonus: What are some textbooks / resources from which one could learn about these applications?

$\endgroup$
5
  • 1
    $\begingroup$ For a related question see Uses of algebraic structures in theoretical computer science on TCS. $\endgroup$
    – J.-E. Pin
    Commented Dec 31, 2016 at 13:13
  • 1
    $\begingroup$ Though well intentioned, this question is hopelessly broad. $\endgroup$
    – djechlin
    Commented Dec 31, 2016 at 22:04
  • $\begingroup$ Both in the c.s. and algebra sides. Are graphs and trees algebraic structures? Does every application of linear algebra count as an application? Does every algorithm applied in algebra count as computer science? What if we prove it is NP hard? Are number theoretic results algebra? What about counting? When I use the binomial theorem to analyze Bernoulli random variables when studying $\mathbf{RP}$ and $\mathbf{BPP}$ is that abstract algebra enough? $\endgroup$
    – djechlin
    Commented Dec 31, 2016 at 23:02
  • $\begingroup$ FWIW, in my fields of machine learning and natural language processing, a solid grasp of linear algebra is a must (along with dexterity in vector calculus) because we inherently deal with multidimensional spaces. Here is a common procedure for dimensionality reduction, rife with linear algebra. $\endgroup$
    – erip
    Commented Jan 1, 2017 at 14:44
  • $\begingroup$ See also this. $\endgroup$ Commented Sep 23, 2018 at 11:12

11 Answers

27
$\begingroup$

Algebra is incredibly useful in computer science. I'll preface my answer with my opinion: I view a good portion of computer science as a branch of mathematics. So my answer will be quite broad. Note that my opinion is one that not everyone shares.

Algebraic Combinatorics and Graph Theory: Recall Cayley's Theorem from group theory, which states that every group is isomorphic to a subgroup of some symmetric group; that is, every group is a permutation group. This immediately motivates the study of algebraic combinatorics. Group actions are used to study symmetries, or automorphisms, of mathematical objects. Informally, a group action is a dynamical process on an object, which partitions its members into sets we call orbits. The study of the structure and number of these orbits yields important combinatorial results. You have likely seen the dihedral group, which is the automorphism group of the cycle graph. In general, finding the full automorphism group of an object (such as a graph) is a non-trivial exercise.

There are several applications that come to mind. The first is Polya Enumeration Theory. Given a mathematical object (such as a graph), we color the vertices (this is not necessarily a proper coloring, in the sense that two adjacent vertices may receive the same color). Then given some subgroup $H$ of automorphisms (such as rotations of a necklace), how many distinct colorings exist? That is, two colorings are equivalent if they belong to the same orbit under the action of $H$. Polya enumeration theory studies these types of problems. If your combinatorics is weak (and even if it isn't), Loehr's Bijective Combinatorics text is an excellent resource.
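As a small, concrete illustration of the counting core of Polya theory, here is Burnside's lemma applied to necklaces under rotation (the function name `necklace_colorings` is my own; the formula $\frac{1}{n}\sum_{r=0}^{n-1} k^{\gcd(n,r)}$ is the standard one for the cyclic group acting on $k$-colorings):

```python
from math import gcd

def necklace_colorings(n, k):
    """Count k-colorings of an n-bead necklace up to rotation, via
    Burnside's lemma: average, over all n rotations, the number of
    colorings each rotation fixes; rotation by r fixes k**gcd(n, r)."""
    return sum(k ** gcd(n, r) for r in range(n)) // n

# 6-bead necklaces with 2 colors, counted up to rotation: 14
print(necklace_colorings(6, 2))
```

Allowing the full dihedral group (rotations and reflections) instead of just rotations would change the count; that is exactly the dependence on the subgroup $H$ described above.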

In algebraic graph theory, it is common to examine the Cayley graph of a group. Given a group $G$ and a subset $S \subseteq G$, the Cayley graph $X(G, S)$ has vertex set $G$, and two elements $g, h$ are connected by an edge if $hg^{-1} \in S$. Intuitively, the Cayley graph tells us whether an element of $S$ can be used to take us from $g$ to $h$. We may examine a related graph of transpositions from $\text{Sym}(n)$, and leverage algorithms from graph theory to ascertain whether the transpositions generate $\text{Sym}(n)$. I cover some of this in my lecture notes for Theory of Computation (http://people.math.sc.edu/mlevet/Lecture_Notes.pdf). Godsil and Royle's text on Algebraic Graph Theory is a more comprehensive resource.
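The generation test mentioned above can be sketched directly: a set of transpositions generates $\text{Sym}(n)$ if and only if the graph on $\{0, \dots, n-1\}$ whose edges are the transpositions is connected, so a depth-first search settles the question (the function name is my own):

```python
def transpositions_generate(n, transpositions):
    """A set of transpositions generates Sym(n) iff the graph on
    {0,...,n-1} whose edges are the transpositions is connected."""
    adj = {v: set() for v in range(n)}
    for i, j in transpositions:
        adj[i].add(j)
        adj[j].add(i)
    seen, stack = {0}, [0]          # depth-first search from vertex 0
    while stack:
        v = stack.pop()
        for w in adj[v] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

print(transpositions_generate(4, [(0, 1), (1, 2), (2, 3)]))  # True
print(transpositions_generate(4, [(0, 1), (2, 3)]))          # False
```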

A third thought that comes to mind is the study of graph homomorphisms. In particular, a proper coloring of a graph is a graph homomorphism. If the graph $X$ has chromatic number $c$, then there exists a graph homomorphism $\phi : X \to K_{c}$. So there is a direct connection to complexity theory here, as graph coloring is NP-Hard. Constraint satisfaction problems are generalizations of this, where we study homomorphisms of relational structures of finite rank. Constraint satisfaction problems arise frequently in artificial intelligence. Algebraic techniques (granted, not senior-level algebra techniques) arise in the study of CSPs. The Dichotomy Conjecture states that every CSP is either in $P$ or NP-Complete. Dmitriy Zhuk recently presented his results at a two-week conference at Vanderbilt in September, settling the conjecture in the affirmative. While he has not put forth a paper, the folks there were relatively convinced of his results (including the logician at my school, who was a Tarski student).
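The coloring-as-homomorphism observation is easy to check by hand: a map into $K_c$ is a homomorphism exactly when adjacent vertices get distinct colors, since distinct vertices of $K_c$ are always adjacent. A minimal sketch (function name and graph encoding are my own choices):

```python
def is_homomorphism_to_Kc(edges, coloring, c):
    """A proper c-coloring of a graph X is exactly a graph homomorphism
    X -> K_c: every edge must map to an edge of K_c, i.e. adjacent
    vertices must receive distinct colors from {0, ..., c-1}."""
    return all(coloring[u] != coloring[v] and
               coloring[u] in range(c) and coloring[v] in range(c)
               for u, v in edges)

# C_5 has chromatic number 3; this coloring is a homomorphism C_5 -> K_3.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_homomorphism_to_Kc(c5, {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}, 3))  # True
```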

Automata Theory: The set of regular languages forms a Kleene semiring under the operations of set union (addition) and concatenation (multiplication). A notion of derivative also comes up; see the Brzozowski derivative. We can leverage this fact to design a linear-algebraic algorithm (see the Brzozowski algebraic method) to convert a FSM to a regular expression. (I also cover this in my notes.)
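The Brzozowski derivative even gives a tiny regular-expression matcher: the derivative of $L$ with respect to $a$ is $\{w : aw \in L\}$, so a word is accepted iff deriving letter by letter ends at a language containing the empty string. A minimal sketch, with regexes encoded as nested tuples (the encoding is my own; the derivative rules are the standard ones):

```python
# Regexes as tuples: ('eps',) empty string, ('nul',) empty language,
# ('chr', a) a single symbol, ('cat', r, s), ('alt', r, s), ('star', r).

def nullable(r):
    """Does r accept the empty string?"""
    tag = r[0]
    if tag in ('eps', 'star'): return True
    if tag in ('nul', 'chr'):  return False
    if tag == 'cat': return nullable(r[1]) and nullable(r[2])
    if tag == 'alt': return nullable(r[1]) or nullable(r[2])

def deriv(r, a):
    """Brzozowski derivative: the language of words w with aw in L(r)."""
    tag = r[0]
    if tag in ('eps', 'nul'): return ('nul',)
    if tag == 'chr': return ('eps',) if r[1] == a else ('nul',)
    if tag == 'alt': return ('alt', deriv(r[1], a), deriv(r[2], a))
    if tag == 'star': return ('cat', deriv(r[1], a), r)
    if tag == 'cat':
        d = ('cat', deriv(r[1], a), r[2])
        return ('alt', d, deriv(r[2], a)) if nullable(r[1]) else d

def matches(r, word):
    for a in word:
        r = deriv(r, a)
    return nullable(r)

# (ab)* matches "abab" but not "aba"
r = ('star', ('cat', ('chr', 'a'), ('chr', 'b')))
print(matches(r, "abab"), matches(r, "aba"))  # True False
```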

Complexity Theory: Babai's recent result on the Graph Isomorphism problem is (or should be) the poster child for algebra in CS. He uses a lot of techniques such as group actions and representation theory to study problems like Graph Isomorphism and Group Isomorphism. Regarding the Group Isomorphism problem, it is undecidable in the general case (for finitely presented groups). For finite groups, the best bound is something like $n^{\log n}$ (which is easy to obtain, simply by showing that every finite group has a generating set with at most $\log_{2}(n)$ generators). For abelian groups, Group Isomorphism has an $O(n)$ time solution. For solvable groups, the $n^{\log n}$ bound has been relatively untouched. Babai and Qiao presented a polynomial time isomorphism test for groups with abelian Sylow towers. To the best of my knowledge (which is limited), this is the largest class of solvable groups to date with a polynomial time isomorphism test.

A lot of the techniques Babai uses are quite heavy-handed. If these types of problems catch your interest, I might spend some time on representation theory. I think a solid handle on group theory (comfort with the Sylow Theorems) and abstract linear algebra, along with motivation, is sufficient to pick up a book in the area. There are several books on representation theory from a combinatorial perspective; Sagan's text on the symmetric group is one that comes to mind. Babai also teaches classes on Algorithms in Finite Groups, which you can find on his home page. "Computational group theory" is the key phrase, if you decide to look into this area more.

Cryptology, Coding Theory, and Computational Number Theory: These are the obvious ones, and have been covered quite well.

$\endgroup$
5
  • $\begingroup$ Thanks: this is an informative answer. Could you please clarify the statement that "group isomorphism is undecidable in the general [infinite] case"? How do you define the decision problem with infinite inputs? Is it something like "take as input ZFC-descriptions of the two groups"? (In that case, undecidability makes sense, as you could trivially embed the halting problem into the group operation.) Or is this something else? $\endgroup$
    – wchargin
    Commented Dec 31, 2016 at 8:23
  • $\begingroup$ We deal with finitely representable groups. So the input string for the Turing Machine is finite. The fact that the group Isomorphism problem is undecidable follows from the undecidability of the free group problem (we reduce the free group problem to group Isomorphism). $\endgroup$
    – ml0105
    Commented Dec 31, 2016 at 8:28
  • $\begingroup$ Ah, finitely representable is the key. Thanks. $\endgroup$
    – wchargin
    Commented Dec 31, 2016 at 8:38
  • $\begingroup$ I share your opinion. Except that I'm even more radical, because I am unable to think of any branch of CS that is not simply a type of mathematics. (Hint: If you work with proofs rather than experiments, you are dealing with math, not science.) $\endgroup$
    – Wildcard
    Commented Jan 1, 2017 at 7:21
  • $\begingroup$ I hesitate to say all of CS, based on some areas of HCI which are really more psychology. But yes- I share your general sentiments. :-) $\endgroup$
    – ml0105
    Commented Jan 1, 2017 at 18:54
9
$\begingroup$

Here is a link to a seminar course given at KTH, Stockholm, in 2014, the topic of the semester being "Algebraic Gems in TCS". There are several links to other, similar courses on the bottom of the page, as well as links and references to some relevant literature a bit higher up:

http://www.csc.kth.se/utbildning/kth/kurser/DD2442/semteo14/

As regards automata theory and algebra, there are interesting connections between automata and what are known in algebra as semigroups (a semigroup differs from a group in that it need not have an identity element or inverses). One of the main links between the subjects is the syntactic monoid (see https://en.wikipedia.org/wiki/Syntactic_monoid for more information). You can also try looking for resources covering algebraic automata theory, apparently also known as Krohn–Rhodes theory:

https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes_theory
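To make the automata–semigroup link concrete, one can compute the transition monoid of a DFA: each letter acts as a function on the state set, and closing these functions under composition gives a finite monoid (for a minimal DFA this is the syntactic monoid of its language). A small sketch, with my own function names and DFA encoding:

```python
def transition_monoid(states, alphabet, delta):
    """Transition monoid of a DFA: each letter acts on the states as a
    function (stored as a tuple); close the set of these functions
    under composition, starting from the identity (the empty word)."""
    gens = {a: tuple(delta[(q, a)] for q in states) for a in alphabet}
    monoid = {tuple(states)}                 # identity transformation
    frontier = set(gens.values()) - monoid
    monoid |= frontier
    while frontier:
        new = set()
        for f in frontier:
            for g in gens.values():
                h = tuple(g[f[q]] for q in states)   # apply f, then g
                if h not in monoid:
                    new.add(h)
        monoid |= new
        frontier = new
    return monoid

# DFA over {a} tracking word length mod 3: its monoid is the cyclic group Z_3.
states = (0, 1, 2)
delta = {(q, 'a'): (q + 1) % 3 for q in states}
print(len(transition_monoid(states, ('a',), delta)))  # 3
```

Whether this monoid is a group, is aperiodic, etc. is exactly the kind of structural information Krohn–Rhodes theory works with.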

Depending on one's viewpoint, one may consider calling category theory a branch of algebra, and categories have found quite a lot of application, especially in connection with functional programming. First courses in algebra typically don't cover categories, but if you'd like a reference to catch up, Awodey's book is quite pleasant to read and presents some connections to computer science. You could probably also google "category theory for computer science" and end up finding really good notes for free :)

And just like Chas Brown states, cryptography is another part of CS where algebra has found applications. You will probably find a lot of group and field theory applied there.

And then there is the entire subject of coding theory, but judging from the post you already knew about this.

And then I guess there is at least some more that I can't think of right now…but hopefully this should keep you busy :)

$\endgroup$
8
$\begingroup$

The theory of Gröbner bases gives a way to solve systems of multivariate polynomial equations. The key component underlying this is Buchberger's algorithm, which, given an ideal $I$ of a polynomial ring $R = k[x_1,\dots,x_n]$, computes a Gröbner basis: an especially nice generating set for the ideal that makes it MUCH easier to answer certain questions about it.

The algorithm is interesting in that naively it's not clear that it terminates. Fortunately, by working with something called a monomial order we can guarantee that at each iteration of the algorithm we get "closer" to a solution, so it must terminate.
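Here is a toy Buchberger implementation over $\mathbb{Q}$ in two variables with lex order, purely to illustrate the S-polynomial/reduction loop (all names and the dict-of-exponent-tuples encoding are my own; this is nowhere near a production implementation):

```python
from fractions import Fraction

def lm(p):
    """Leading monomial under lex order (exponent tuples compare directly)."""
    return max(p)

def mul(p, coef, mono):
    """Return coef * x^mono * p."""
    return {tuple(a + b for a, b in zip(m, mono)): coef * c
            for m, c in p.items()}

def sub(p, q, coef, mono):
    """Return p - coef * x^mono * q, dropping zero coefficients."""
    r = dict(p)
    for m, c in q.items():
        m2 = tuple(a + b for a, b in zip(m, mono))
        r[m2] = r.get(m2, Fraction(0)) - coef * c
        if r[m2] == 0:
            del r[m2]
    return r

def top_reduce(p, basis):
    """Repeatedly cancel the leading term of p against the basis."""
    changed = True
    while p and changed:
        changed = False
        for g in basis:
            m, c, gm = lm(p), p[lm(p)], lm(g)
            if all(a >= b for a, b in zip(m, gm)):   # lm(g) divides lm(p)
                p = sub(p, g, c / g[gm], tuple(a - b for a, b in zip(m, gm)))
                changed = True
                break
    return p

def buchberger(polys):
    """Buchberger's algorithm: add reduced S-polynomials until all
    S-pairs reduce to zero."""
    G = [p for p in polys if p]
    pairs = [(i, j) for i in range(len(G)) for j in range(i)]
    while pairs:
        i, j = pairs.pop()
        f, g = G[i], G[j]
        mf, mg = lm(f), lm(g)
        l = tuple(max(a, b) for a, b in zip(mf, mg))  # lcm of lead monomials
        # S-polynomial: scale f and g so their leading terms cancel.
        s = sub(mul(f, Fraction(1) / f[mf], tuple(a - b for a, b in zip(l, mf))),
                g, Fraction(1) / g[mg], tuple(a - b for a, b in zip(l, mg)))
        r = top_reduce(s, G)
        if r:
            pairs.extend((len(G), k) for k in range(len(G)))
            G.append(r)
    return G

# Solve x - y = 0, x^2 + y^2 - 1 = 0 (lex, x > y); monomials are (deg_x, deg_y).
G = buchberger([{(1, 0): 1, (0, 1): -1},
                {(2, 0): 1, (0, 2): 1, (0, 0): -1}])
print(sorted(lm(g) for g in G))   # basis now contains a poly in y alone
```

On this example the basis acquires the element $2y^2 - 1$, i.e. the elimination step has reduced the system to a univariate equation, which is exactly the "much easier questions" payoff mentioned above.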

$\endgroup$
6
$\begingroup$

Some functional programming languages, such as Haskell, use monads.
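A minimal sketch of the idea in Python (class and function names are my own): a Maybe-style monad with `unit` and `bind`, where a failed computation short-circuits the rest of the chain.

```python
class Maybe:
    """A minimal Maybe monad: 'unit' wraps a value; 'bind' sequences
    computations that may fail, propagating failure automatically."""
    def __init__(self, value, ok=True):
        self.value, self.ok = value, ok

    @staticmethod
    def unit(x):
        return Maybe(x)

    def bind(self, f):
        # Only apply f if the previous step succeeded.
        return f(self.value) if self.ok else self

NOTHING = Maybe(None, ok=False)

def safe_div(x, y):
    return Maybe.unit(x / y) if y != 0 else NOTHING

# 12 / 3 / 2 = 2.0; a division by zero short-circuits to NOTHING.
r1 = Maybe.unit(12).bind(lambda a: safe_div(a, 3)).bind(lambda a: safe_div(a, 2))
r2 = Maybe.unit(12).bind(lambda a: safe_div(a, 0)).bind(lambda a: safe_div(a, 2))
print(r1.ok, r1.value)  # True 2.0
print(r2.ok)            # False
```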

$\endgroup$
4
  • 6
    $\begingroup$ Is this really an answer? It seems like just a comment. $\endgroup$
    – Wildcard
    Commented Jan 1, 2017 at 7:23
  • $\begingroup$ Also, are you sure the "monads" used in functional programming languages (e.g. Haskell) actually fit the same definition of "monads" as in Category Theory? (The wikipedia pages are separate, for a starter.) $\endgroup$
    – Wildcard
    Commented Jan 1, 2017 at 7:24
$\begingroup$ @Wildcard well, it's an application of abstract algebra in computer science. I could go more in depth into how it works, but there are way better explanations online. $\endgroup$
    – Asinomás
    Commented Jan 1, 2017 at 7:25
$\begingroup$ @Wildcard yes they are, although not that much category theory is needed to apply them at a rudimentary level. $\endgroup$
    – Asinomás
    Commented Jan 1, 2017 at 7:26
5
$\begingroup$

Cryptography (including elliptic curve cryptography) comes to mind.

I suppose one could also argue that, for performance reasons, a lot of problems are simplified by some sort of group action on the data set under consideration, reducing the problem set to something more manageable; so it certainly doesn't hurt to have a good grounding in the basics.

$\endgroup$
5
$\begingroup$

The hidden subgroup problem lends itself well to quantum algorithms. Solving it for Abelian groups yields polynomial time prime factorization (Shor's algorithm); some other groups would provide similarly powerful results (e.g. the hidden subgroup problem on the symmetric group is equivalent to the graph isomorphism problem). That makes algebra (specifically representation theory) an important tool in quantum computing.

$\endgroup$
4
$\begingroup$

Can we call the entire field of symbolic computing an application? Or is that too meta?

$\endgroup$
4
$\begingroup$

Algebra (and in particular, universal algebra) is also used in semantics of programming languages. A simple example of this is process algebra, of which there exist many different ones for different application areas. Broadly speaking, a universal algebra gives you rules for generating syntactically correct strings in whatever language you are considering.

A key insight due to Turi and Plotkin is that while the algebraic side of things describe what strings can be formed in a language, the coalgebraic side of things describe the semantics of the language.

$\endgroup$
2
$\begingroup$

Error-correcting codes are another thing that comes up in EECS. You want to send $k$ symbols without error, so you add some redundant symbols and are then able to correct some number of errors. Examples include Reed–Solomon codes, Hamming codes, etc.
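The Hamming(7,4) code makes the idea concrete: 4 data bits plus 3 parity bits, and the parity checks (linear algebra over $\mathrm{GF}(2)$) produce a syndrome that directly names the position of a single flipped bit. A sketch with my own function names:

```python
# Hamming(7,4): positions 1..7 hold p1 p2 d1 p3 d2 d3 d4; parity bit at
# position 2^i covers every position whose index has bit i set.

def hamming_encode(d):
    """d = [d1, d2, d3, d4] -> 7-bit codeword."""
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_decode(c):
    """Recompute parities; the syndrome is the 1-indexed error position
    (0 means no error). Corrects any single bit flip."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
c = hamming_encode(word)
c[4] ^= 1                            # corrupt one bit in transit
print(hamming_decode(c) == word)  # True
```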

$\endgroup$
2
$\begingroup$

An additional note on Cayley graphs: There has been a great deal of research on using Cayley graphs to design communication architectures for parallel & distributed computation, in which each vertex represents a separate processor and each edge represents a communication link between two processors. Typical research topics include maximizing the number of processors given a fixed number of links per processor, finding effective transmission schemes that minimize communication overhead (i.e., traffic congestion), and designing algorithms for important computation tasks which map efficiently onto different network topologies.
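The classic example is the hypercube network, which is the Cayley graph of $(\mathbb{Z}/2)^n$ with the standard basis as generating set: $2^n$ processors, each with $n$ links, and any two processors at most $n$ hops apart. A quick sketch (function names are my own):

```python
from collections import deque

def hypercube_cayley(n):
    """Cayley graph of (Z/2)^n with the standard basis as generating
    set: vertices are n-bit strings; edges flip exactly one bit."""
    return {v: [v ^ (1 << i) for i in range(n)] for v in range(2 ** n)}

def diameter(graph):
    """Largest BFS eccentricity over all start vertices."""
    best = 0
    for s in graph:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best

g = hypercube_cayley(4)
print(len(g), len(g[0]), diameter(g))  # 16 processors, degree 4, diameter 4
```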

$\endgroup$
2
  • $\begingroup$ Is graph theory a branch of abstact algebra? I doubt it. $\endgroup$
    – Alex M.
    Commented Dec 31, 2016 at 21:39
  • $\begingroup$ @AlexM. On the other hand, Cayley graphs are generated by groups, and so their study is naturally filled to the brim with representation theory. Furthermore, Cayley graphs are natural candidates to consider for finding expander families, and in fact many of the explicit expander families we know of today are families of Cayley graphs (Margulis, who was the first to construct an explicit family, even used Cayley graphs specifically in his construction). Since expanders have seen heavy use in TCS and in the study of distributed programming, I think this answer is relevant. $\endgroup$
    – MonadBoy
    Commented Jan 1, 2017 at 0:03
2
$\begingroup$
  • The Walsh–Hadamard code's robustness relies on the subset-sum principle, which is most easily seen as a consequence of the second fundamental theorem of linear algebra after viewing inner products as linear maps. This code and its robustness properties are used in the proof of the PCP theorem, which is used to prove hardness of approximation of MAX3SAT and to establish hardness results elsewhere in the study of approximation algorithms.

  • Computation of the permanent, which is a function analogous to the determinant, is a complete problem for $\#\mathbf{P}$ (Valiant's theorem).

  • The complexity-theoretic classes $\mathbf P$ and $\mathbf{NP}$ are defined via algorithms that run in deterministic (resp. nondeterministic) polynomial time. Robustness of these classes relies on closure properties of polynomials; for instance, the property that the sum of any two polynomials is again a polynomial.

  • Infinite families of hash functions are constructed by considering finite fields, a familiar object from abstract algebra. They have many applications including a parallelizable derandomization of the randomized algorithm for MAXCUT and the proof that $\mathbf{BPP} \subset \mathbf{PH}$.

  • The proof that $\mathbf{IP} = \mathbf{PSPACE}$ relies on arithmetization of the operations $\wedge$, $\vee$, $\neg$, $\forall$ and $\exists$ to convert quantified Boolean formulae into polynomials, which are well-studied objects in algebra. Important properties of polynomials, such as the bound on the number of roots by the degree, are used in this proof.
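Arithmetization in miniature: replace $x \wedge y$ by $xy$, $\neg x$ by $1 - x$, and $x \vee y$ by $1 - (1-x)(1-y)$; on $\{0,1\}$ inputs the polynomial agrees with the Boolean formula, and over a larger field it becomes a low-degree polynomial the verifier can probe at random points. A sketch with my own (hypothetical) formula $\varphi = (x \vee y) \wedge \neg z$:

```python
from itertools import product

def phi(x, y, z):
    """The Boolean formula (x OR y) AND (NOT z)."""
    return (x or y) and not z

def phi_poly(x, y, z):
    """Its arithmetization: OR via De Morgan, AND as product, NOT as 1-x."""
    return (1 - (1 - x) * (1 - y)) * (1 - z)

# The polynomial agrees with the formula on every Boolean assignment.
print(all(int(phi(x, y, z)) == phi_poly(x, y, z)
          for x, y, z in product((0, 1), repeat=3)))  # True
```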

$\endgroup$