Yes and no. It depends on the constraints you want to meet and the preconditions needed to run your algorithm.
Ideally, an algorithm is an abstract recipe that defines, step by step, how to do something. Algorithms were defined this way with the goal of reproducibility and, later, automation. The formal notion of an algorithm has its roots in the lambda calculus, so you can see why they were framed that way. This step-by-step definition is the usual one, but modern algorithms can be non-sequential (concurrent algorithms, or logic algorithms based on unification), non-deterministic (stochastic algorithms), or just plainly weird (quantum algorithms), but I'll set those aside.
Thus, ideally, an algorithm should be as abstract as possible, without accounting for any hardware.
But, as with any formal system, you must define some axioms, not only to get a coherent system but also to save time. For instance, most algorithms assume, at least implicitly, that they are defined for a von Neumann machine. If that were not the case, they would need to explicitly define every part of the system they are meant to run on (since this is required to reproduce the recipe, it is a kind of precondition). Also, algorithms often rely on common primitives such as write() without defining them fully.
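To make this concrete, here is a minimal sketch (in Python, my own illustration, not from any particular source) of how even textbook binary search silently assumes the machine model:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1.

    The textbook O(log n) bound silently assumes items[mid] is an
    O(1) random access -- a property of the von Neumann / RAM model,
    not of the recipe itself. On a tape or a linked list, the same
    recipe degrades to O(n) per probe.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 11], 7))  # prints 3
```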
Another reason algorithms are not fully abstracted from the hardware architecture is the need to meet certain constraints.
Let's say you are working on embedded systems: you probably can't rely on the same amount of resources you have on workstations. One of the most constrained resources is usually memory. However, most algorithms tend to optimize time complexity (speed of execution on the CPU), not memory complexity (the amount of memory needed to work on the data). For these systems, memory-optimized algorithms have been devised where ordinary algorithms would fail outright or run a lot slower. In fact, embedded systems are not the sole target of memory-efficient algorithms: for example, there are cache-oblivious algorithms that adapt their processing to use the CPU cache efficiently. Another example: some machine learning algorithms for big data are tailored for incremental learning or out-of-core computing, to process amounts of data far larger than the memory available on any computer (see the sketch below).
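As one illustration of that incremental style, here is a sketch of Welford's online algorithm (a well-known one-pass method; my example, not tied to any specific system above). It computes the mean and variance of a stream of any size in constant memory, so the data never needs to fit in RAM:

```python
import random

def streaming_mean_variance(stream):
    """Welford's online algorithm: one pass, O(1) memory.

    Works on data far larger than RAM because it never stores the
    stream -- only three running scalars.
    """
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    variance = m2 / (n - 1) if n > 1 else 0.0
    return mean, variance

# The generator could just as well read chunks from a file or a socket.
data = (random.random() for _ in range(1_000_000))
print(streaming_mean_variance(data))  # roughly (0.5, 0.0833)
```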
There are also algorithms that optimize not for a specific part of the computer, but for a standard that is itself dependent on hardware. For example, numerical data that need precision are stored in float or double types, which are by nature limited by the hardware. The problem is that computations can introduce rounding, and the more calculations you perform on rounded numbers, the further you drift from the true result; in the worst case, subtracting two nearly equal rounded values wipes out most of the significant digits, which is called catastrophic cancellation. Some applications need critical precision, even at the cost of somewhat worse complexity. For this type of application, algorithms have been devised that organize and compensate their calculations to reduce or avoid these errors (compensated summation is a classic example, sketched below).
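Here is a minimal sketch of Kahan (compensated) summation, the textbook instance of this trade-off: a few extra floating-point operations per element buy dramatically less accumulated rounding error.

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: track the rounding error of
    each addition in a separate variable and feed it back in."""
    total = 0.0
    compensation = 0.0  # running rounding error
    for x in values:
        y = x - compensation
        t = total + y                    # low-order digits of y are lost here...
        compensation = (t - total) - y   # ...and recovered here
        total = t
    return total

values = [0.1] * 10_000_000
print(sum(values))        # naive sum drifts: ~999999.9998
print(kahan_sum(values))  # compensated: ~1000000.0
```

Note that the hardware dependence is exactly the point: this algorithm only makes sense because float and double round at all.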
Thus, designing an algorithm can also be a trade-off between abstraction and constraints.
In the end, we can say that an algorithm is only as abstract as its target and its (architectural) preconditions allow: the more specific the target your algorithm aims at, the more it will probably rely on the hardware architecture.
Some related keywords that may interest you: