In my opinion, it is not a matter of ethics but a matter of peer review and how research progresses.
As pointed out in the comments to your question, there is not enough time. It is not reasonable to expect every researcher in a quantitative field to understand the details of every paper they cite; otherwise research would not progress. It takes time to understand others' ideas, and even more time to develop, structure, and write up your own. Therefore, you should focus only on the key aspects that must be understood for your own research work.
That being said, this does not mean you should carelessly take risks, because we have two powerful tools for avoiding that: peer review and citations. With peer-reviewed papers, there is some guarantee about the content (in most cases; of course there are exceptions due to bad review policies at certain conferences and journals). If the paper is not peer reviewed and is just a pre-print, such as the one you link, then another good indicator of the quality of its content is the number of citations and the context in which they appear, which in your case is 52 for now.
If a paper does not meet any of the above criteria, I'd be cautious and review it myself. But this is rarely the case, especially at the level of university assignments; it can happen when doing an M.Sc. or Ph.D. thesis.
Of course, there might be exceptions to this. In my own field there was one popular paper with an algorithm available in the most relevant software packages, and 10 years later another paper appeared showing that the algorithm was mathematically incorrect (in a detail, but one that impacted results). But such cases are rare and occur at the research level (note that another paper was needed to explain the issue; that is, an ordinary university student would not be expected to spot it).