rschwieb has already given you the high-powered answer. Here, let me give you the low-powered version of what he wrote.
Consider the collection of $2\times 2$ matrices with real entries. We can write each matrix as
$$ \begin{pmatrix} A & B \\ C & D \end{pmatrix} $$
and if we re-organize the presentation, it can be identified with an element of $\mathbb{R}^4$
$$ \begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix} $$
Writing it as a matrix, on the other hand, lets you "multiply" two such objects, namely by matrix multiplication.
Now, we can write
$$ \begin{pmatrix} A & B \\ C & D \end{pmatrix} = A \begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix} + B \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix} + C \begin{pmatrix} 0 & 0 \\ 1 & 0\end{pmatrix} + D \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} $$
which, if you know a bit of linear algebra, is just expressing a $2\times 2$ matrix in a basis.
As it turns out, what you've done is basically just choosing a different basis for the $2\times 2$ matrices. You chose
$$ \mathbf{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \quad \mathbf{i} = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix} $$
and
$$ \mathbf{j} = \begin{pmatrix} 0 & -1 \\ -1 & 0\end{pmatrix} \quad \mathbf{k} = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} $$
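(As an aside, this is where the "multiplication" remark above pays off: since $\mathbf{1}, \mathbf{i}, \mathbf{j}, \mathbf{k}$ are honest matrices, you can multiply them. Just multiplying the matrices out directly, for instance,
$$ \mathbf{i}^2 = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix} = -\mathbf{1} \quad\text{and}\quad \mathbf{i}\,\mathbf{j} = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}\begin{pmatrix} 0 & -1 \\ -1 & 0\end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} = \mathbf{k}. $$
These two products are only meant as an illustration; nothing below depends on them.)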
We can solve for the "standard" basis $\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}$ etc. in terms of this new basis: for example, $\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix} = \tfrac12(\mathbf{1} + \mathbf{k})$ and $\begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix} = -\tfrac12(\mathbf{i} + \mathbf{j})$. Plugging these back into the expression above, we get
$$ \begin{pmatrix} A & B \\ C & D\end{pmatrix} = \frac{A}{2} (\mathbf{1} + \mathbf{k}) + \frac{B}{2} (-\mathbf{i} - \mathbf{j}) + \frac{C}{2} (\mathbf{i} - \mathbf{j}) + \frac{D}{2} (\mathbf{1} - \mathbf{k}) $$
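As a quick sanity check on those coefficients: the $(1,2)$ entries of $\mathbf{1}+\mathbf{k}$, $-\mathbf{i}-\mathbf{j}$, $\mathbf{i}-\mathbf{j}$, and $\mathbf{1}-\mathbf{k}$ are $0$, $2$, $0$, and $0$ respectively, so the $(1,2)$ entry of the right-hand side is $\tfrac{B}{2}\cdot 2 = B$, as it should be.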
This identification can be reversed (exercise for you!). In any case, your identification of $a\mathbf{1} + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ with the $\mathbb{R}^4$ vector $(a,b,c,d)$ then corresponds to identifying the matrix $\begin{pmatrix} A & B \\ C & D\end{pmatrix}$ with the element
$$\begin{pmatrix} \frac12 (A + D) \\ \frac12 (C - B) \\ -\frac12 (B+C) \\ \frac12 (A-D) \end{pmatrix}$$
which is precisely the linear transformation of $\mathbb{R}^4$ given by the matrix multiplication
$$ \begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix} \mapsto \begin{pmatrix} \tfrac12 & 0 & 0 &\tfrac12 \\ 0 & -\tfrac12 & \tfrac12 & 0 \\ 0 & -\tfrac12 & -\tfrac12 & 0 \\ \tfrac12 & 0 & 0 & -\tfrac12 \end{pmatrix}\begin{pmatrix} A \\ B \\ C \\ D\end{pmatrix}$$
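If you would rather have a computer check this bookkeeping than do it by hand, here is a small numerical sketch (Python/NumPy; the variable names and the helper matrix `T`, which is just the $4\times 4$ matrix above, are my own):

```python
import numpy as np

# The chosen basis for the 2x2 real matrices.
one = np.array([[1., 0.], [0., 1.]])
i   = np.array([[0., -1.], [1., 0.]])
j   = np.array([[0., -1.], [-1., 0.]])
k   = np.array([[1., 0.], [0., -1.]])

# The 4x4 matrix above, sending (A, B, C, D) to the coefficients (a, b, c, d).
T = 0.5 * np.array([[ 1.,  0.,  0.,  1.],
                    [ 0., -1.,  1.,  0.],
                    [ 0., -1., -1.,  0.],
                    [ 1.,  0.,  0., -1.]])

M = np.random.rand(2, 2)          # an arbitrary 2x2 matrix
A, B, C, D = M.flatten()          # read off its entries row by row
a, b, c, d = T @ np.array([A, B, C, D])

# a*1 + b*i + c*j + d*k should reconstruct the original matrix.
print(np.allclose(a * one + b * i + c * j + d * k, M))  # prints True
```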
What is the lesson behind all this? Given any four real numbers, you can of course identify them with an element of $\mathbb{R}^4$. The real question starts when you ask "how is this identification meaningful?" The first thing you can do is to try a little bit of linear algebra, as I outlined above. But things get really exciting when you start connecting the algebra to geometry, and that is where the power of the Clifford algebra that rschwieb mentioned really shines.
For the time being, if you cannot completely absorb the abstract nonsense in the definitions of Clifford algebras, it may be worthwhile to set your goal a tiny bit lower and think only about geometric algebra. (Unfortunately the Wikipedia link is not the best way to learn about this; read this first, and if you are interested, perhaps follow a textbook such as this.)