You're talking about sites like YouTube, so I'll focus on that.
That quote, which you bolded, was over-simplified. The actual law is
17 USC 512(c), "Information Residing on Systems or Networks at Direction of Users":
(1) In general. A service provider shall not be liable for monetary relief, or, except as provided in subsection (j), for injunctive or other equitable relief, for infringement of copyright by reason of the storage at the direction of a user of material that resides on a system or network controlled or operated by or for the service provider, if the service provider
(A)
-- (i) does not have actual knowledge that the material or an activity using the material on the system or network is infringing;
-- (ii) in the absence of such actual knowledge, is not aware of facts or circumstances from which infringing activity is apparent; or
-- (iii) upon obtaining such knowledge or awareness, acts expeditiously to remove, or disable access to, the material;
(B) does not receive a financial benefit directly attributable to the infringing activity, in a case in which the service provider has the right and ability to control such activity; and
(C) (paraphrased: upon proper notification of claimed infringement, expeditiously removes or disables access to the material; in other words, complies with DMCA takedown notices).
What you're talking about is membership fees (e.g. YouTube Premium), or general advertisements that would be placed on any video without awareness that the particular video is infringing.
That does not violate Safe Harbor per se. At least not on a legitimate site which is dominated by legitimate content and makes an honest, credible effort to keep it that way.
The important clause in (B) is "in a case in which the service provider has the right and ability to control such activity". Under (B), YouTube lacks the ability to control each individual activity (upload or view).
YouTube's inability comes from scale: it receives 500 hours (30,000 minutes) of uploaded video every minute, so real-time curation would require 30,000 simultaneous reviewer seats. Staffing those seats 24x7 on 40-hour work weeks takes 4-5 people per seat, which works out to roughly 150,000-200,000 staff; all of Google is around 50,000 right now.
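The back-of-the-envelope staffing math can be made explicit. This is purely illustrative arithmetic under the stated assumptions (real-time review at 1x speed, 40-hour work weeks, no overhead):

```python
# Illustrative staffing estimate for reviewing all YouTube uploads.
# Assumptions: 500 hours uploaded per minute, one reviewer watches
# one stream at 1x speed, 40-hour work weeks cover a 168-hour week.
upload_hours_per_minute = 500
minutes_uploaded_per_minute = upload_hours_per_minute * 60  # 30,000

seats = minutes_uploaded_per_minute     # simultaneous reviewer seats needed
shifts_per_seat = 168 / 40              # ~4.2 people to keep one seat filled 24x7
staff = seats * shifts_per_seat

print(minutes_uploaded_per_minute)      # 30000
print(round(staff))                     # 126000, before management and overhead
```

The raw figure of ~126,000 reviewers lands in the same ballpark as the estimate above once management, training, and the fact that careful review is slower than 1x playback are factored in.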
Even if a small site were able to moderate all content, they might still have a Safe Harbor defense if they could credibly say that they did not know the material was infringing. If someone created a "Juan Brown" username and uploaded blancolirio's videos from YouTube, they could say "we did not know that was not the real person". But if the video started with an HBO splash screen and tones, then heck no.
But non-moderation is not an airtight defense. When sites are neglectful about removal, they soon develop a reputation as a haven for infringing content, which such sites tend to embrace, since it brings in many customers! This was the undoing of several music-sharing sites in the '00s: their awareness of their own reputation, plus a lack of diligent removal, failed them on all three prongs of (A) above.
Remember that a competently run website that relies on user submissions is well aware of the DMCA and its case law, and has tailored its rules and enforcement to make copyright claims easy to defend against. For music, for instance, YouTube uses some human intervention but largely automated means to do one of three things:
- take it down and give the uploader a copyright "strike", leading to a ban. (This alienates their biggest contributors, especially when a popular YouTuber like blancolirio winds up flagged over a distant car stereo in the background noise; remember, detection is by "bot" and no human ever sanity-checks it.)
- de-monetize the suspect video (the uploader gets nothing, but neither does YouTube).
- monetize it, but give the revenue stream to the rights holder under an agreement with them.
The last one is YouTube's preference with regard to music, as it is vastly easier, more practical, and better for the community all around, allowing whole classes of content to exist that would be prima facie illegal otherwise. And it's content people are already creating that YouTube can't stop anyway, so it solves a big policing problem too.