A nice feature of most places on the internet is that people can easily create a new identity (you might have to solve a captcha, but that's about it). This wouldn't work as well in real life as it does on the internet - in real life, if someone commits a crime they need to be held accountable for it, so it's important that we each have a real-life identity that we can't just replace. Similarly, social safety-net programs need to ensure that any given person does not collect more money than they are entitled to, so they also need to use real-life identities.
Social media websites should not need to know people's real-life identities. But if identities can be discarded and replaced, how can we deal with the online equivalent of crimes (i.e. spam, abuse and malware)? I think the answer is just to ignore them with extreme prejudice. To decide if some message (whether it's an email, an RSS feed item, a link from an aggregator or whatever) is worth reading, we should ideally be able to look at the reputation of its originator. Completely new identities, not vouched for by any identity with actual positive reputation, would have no reputation, and the messages they send should be consigned to the deepest levels of the spam filter.
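As a rough illustration of that filtering rule (not any existing filter's API), a message handler might simply route anything from a sender below some reputation threshold straight into the spam bucket. The `reputation_of` mapping and the threshold value here are hypothetical placeholders:

```python
# Rough sketch: a message lands in the inbox only when the sender's
# reputation (according to *me*) clears a threshold. The reputation
# lookup and the cutoff are placeholders, not any real service's API.

SPAM_THRESHOLD = 0.01  # hypothetical cutoff

def route_message(message, reputation_of):
    """Return which bucket a message belongs in.

    `message` is a dict with a "sender" key; `reputation_of` maps a
    sender identity to my subjective score for them. Brand-new
    identities default to zero reputation and land in spam.
    """
    sender = message["sender"]
    score = reputation_of.get(sender, 0.0)
    return "inbox" if score >= SPAM_THRESHOLD else "spam"

# Example usage with made-up data:
my_scores = {"trusted-friend": 0.4}
print(route_message({"sender": "trusted-friend", "body": "hi"}, my_scores))   # inbox
print(route_message({"sender": "brand-new-id", "body": "buy now"}, my_scores))  # spam
```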
Unlike in real life, there's no point in an internet identity having a negative reputation overall - if one did, the owner of that identity would have nothing to lose by abandoning it and spinning up a new clean one. Blacklists won't work; we'll have to use whitelists.
If you somehow grew up under a rock or something and were new to the internet, how could you build up reputation if nobody's reading your messages? Well, presumably you know some people in real life, and some of those may have some internet reputation. Contact them (offline if need be) and get them to vouch for you until you have enough reputation that strangers can read your messages.
Reputation scores should be subjective, not a single global score, so user A may have a different idea than user B about what user C's reputation is. A centralized global score would cause problems, and could be gamed (earning reputation from people who give it away easily and spending it where it's more lucrative). My value of a reputation score for user C should be influenced by whether I have liked their posts, and by the reputation scores for user C according to other people who have high reputation scores according to me. It's sort of like Google's PageRank algorithm, but for users instead of websites.
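A minimal sketch of how such a subjective score might be computed, assuming each user publishes a list of endorsements (posts they've liked, identities they vouch for). This is essentially personalized PageRank seeded from my own identity - the "teleport" mass always returns to me, which is what keeps the score subjective rather than global. All the names and numbers here are hypothetical:

```python
# Sketch of a subjective, personalized-PageRank-style reputation score.
# `endorsements` maps each user to the users they vouch for or whose
# posts they have liked; everything here is illustrative only.

def subjective_reputation(me, endorsements, iterations=20, damping=0.85):
    users = set(endorsements) | {u for vs in endorsements.values() for u in vs}
    # Start by trusting only myself; everyone else begins at zero.
    scores = {u: 0.0 for u in users}
    scores[me] = 1.0
    for _ in range(iterations):
        new_scores = {u: 0.0 for u in users}
        for user, endorsed in endorsements.items():
            if not endorsed:
                continue
            # Each user passes a damped share of their score to
            # everyone they endorse.
            share = scores[user] / len(endorsed)
            for target in endorsed:
                new_scores[target] += damping * share
        # The remaining (1 - damping) mass always returns to me,
        # which makes the score subjective rather than global.
        new_scores[me] += 1.0 - damping
        scores = new_scores
    return scores

# Example: I (alice) like bob's posts, bob vouches for carol, and a
# pair of spammers who only vouch for each other end up with ~0 score
# from my point of view.
endorsements = {
    "alice": ["bob"],
    "bob": ["carol"],
    "spammer": ["spammer2"],
    "spammer2": ["spammer"],
}
print(subjective_reputation("alice", endorsements))
```

The key design choice is the restart vector: because stray probability mass only ever teleports back to me, identities with no endorsement path from people I trust can never accumulate reputation in my view, no matter how much they endorse each other.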
Abusive messages are mostly sent anonymously, so their senders would generally have an extremely low reputation score; an abuser who did have reputation would stand to quickly lose whatever they had. So abuse is solved the same way as spam (and ends up in the same bucket).
Credit reporting agencies like Experian and Equifax keep reputation scores on our real-life identities for non-crime purposes, like determining if it would be wise to lend us money. I sometimes think it would be a good idea if those companies were not allowed to use our real-life identities, so that "bad credit" could be escaped just by creating a new "credit identity". Then nobody would ever lend someone more money than that person had spent building up their credit reputation. The current system allows "no credit" young people to build up huge unsecured debts which they are then shackled with for an extremely long time. Student loan debts in the US cannot be discharged in bankruptcy, on the theory that the benefits obtained by attending college can't be given up, but this system can have devastating consequences for those who end up paying more for their degrees than those degrees were worth.