It's absolutely possible to have people in the loop. And before you leap to your feet: I mean in the loop for the things the system isn't sure about.
YouTube and Facebook are sitting on mountains of data that would let them triangulate, in an automated fashion, the generators of unacceptable content (beyond just copyright or tits). I don't mean an AI that looks at an individual piece of content, but something more holistic that takes a whole timeline into account: clicks, origin IP addresses, duplicate accounts, likes and complaints, the whole thing.
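Concretely, I'm picturing something like this toy sketch (Python; every signal name and weight below is invented for illustration, not anyone's real system):

    # Score an account from aggregate timeline signals rather than
    # judging each post in isolation. All fields and weights here are
    # hypothetical; a real system would learn them from data.
    from dataclasses import dataclass

    @dataclass
    class AccountSignals:
        posts_flagged: int = 0        # complaints/reports against the timeline
        posts_total: int = 1
        distinct_origin_ips: int = 1  # many origin IPs can hint at sockpuppet farms
        linked_duplicate_accounts: int = 0
        like_velocity: float = 0.0    # likes per hour; spikes can mean brigading

    def risk_score(s: AccountSignals) -> float:
        """Combine timeline-level signals into a single score in [0, 1]."""
        flag_rate = s.posts_flagged / max(s.posts_total, 1)
        score = (
            0.5 * flag_rate
            + 0.2 * min(s.linked_duplicate_accounts / 5, 1.0)
            + 0.2 * min(s.distinct_origin_ips / 20, 1.0)
            + 0.1 * min(s.like_velocity / 100, 1.0)
        )
        return min(score, 1.0)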
Then the big problematic or uncertain cases get pushed upwards for manual review. That way Dr David Duke can't dance around puny per-post rules; his account soon gets identified for what it is, and gets the wooden spoon.
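The escalation step is just a confidence band: act automatically at the clear extremes, queue the murky middle for a human. Again a toy sketch with made-up thresholds:

    # Triage on the account-level score: automated action only when the
    # system is sure, human review for everything it isn't sure about.
    # The 0.3 / 0.9 thresholds are hypothetical.
    from enum import Enum

    class Decision(Enum):
        ALLOW = "allow"
        HUMAN_REVIEW = "human_review"
        SUSPEND = "suspend"

    def triage(score: float, low: float = 0.3, high: float = 0.9) -> Decision:
        """Anything between `low` and `high` is 'not sure' and goes to people."""
        if score >= high:
            return Decision.SUSPEND       # clear-cut: automated action
        if score >= low:
            return Decision.HUMAN_REVIEW  # uncertain: a person decides
        return Decision.ALLOW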
Google spent millions fighting click fraud on its golden-goose advertising system; how about they start spending money on figuring out the "social" part of social media? YouTube can't even reliably classify content for adults vs young children. Facebook can't recognize a live-streamed suicide, or that a redirect URL doesn't actually lead to the different, honorable news site it claims to be. And so on and so forth. Saying it's too hard is a failure. We built a system we can't manage in real time? Is that something to be proud of?
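As an aside, the redirect case isn't even hard: follow the redirects and compare where the link actually lands with where it claims to go. A minimal sketch using the requests library (the claimed-domain check is a deliberate oversimplification):

    # Resolve a link's redirect chain and check the final host against
    # the domain the link claims to point at. Toy logic; a real check
    # would also handle meta refreshes, shorteners, subdomain tricks, etc.
    from urllib.parse import urlparse
    import requests

    def lands_where_it_claims(url: str, claimed_domain: str) -> bool:
        """Follow redirects; compare the final hostname to the claimed one."""
        resp = requests.get(url, allow_redirects=True, timeout=5)
        final_host = urlparse(resp.url).hostname or ""
        return final_host == claimed_domain or final_host.endswith("." + claimed_domain)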