By Bryan Jun

the Facebook problem

I recently wrote about my thoughts on the various forms of mainstream social media and noted that I have been off Facebook for quite some time now - not due to any privacy concerns or a feeling that the algorithm is brainwashing me into being violent or racist, but simply because I felt I was addicted to the platform and it was providing me no value. Facebook's reputation (already tainted for some time) has seriously gone down since then, with a double punch coming from the whistleblower (who is at a congressional hearing as I type) and the longest platform-wide outage in its history happening yesterday. The TLDR here is that Facebook has been allowing, if not encouraging, harmful content through its "algorithm," and much of its business model relies on users clicking on pages and ads that do well because of their virality and controversy.


While there's clearly a fundamental ethical issue with delivering adult-themed ads to kids (promoting drugs, anorexia, and the use of dating apps are just some examples) and allowing content that has been proven to incite ethnic violence, I'm not entirely sure why the reaction of the public (at least Congress) is one of surprise rather than affirmation. Is it a surprise that an industry built upon and dependent on virality as a means of revenue generation, with a duty to its shareholders to continue to scale and increase market share, heavily invests in and promotes the very content that produces such results? It doesn't take a rocket scientist with a PhD to recognize that controversy, pictures and ads that draw attention and anxiety, and content that provokes conflict are the easiest and, as horrible as this sounds, best ways to support this business model.

We've all been part of this digital and social media-induced age for some time now - we know very well what encourages us to click on something and get "excited" by it, regardless of how many times we've been clickbaited or discouraged by whatever followed. I'm definitely not an exception here - as digitally literate as I can claim to be and as involved as I am in "content creation," there's something inherent in human nature that gets us to click on something even when we know it's not true or is proven to make us feel bad afterwards. Whether that's a form of instant gratification or some perversion of FOMO doesn't matter. What does matter is the reality that we can't avoid it and it's not a "phase" - we see our parents and grandparents do it, as well as our friends and the kids younger than us. As with most things, there's a subconscious belief that things will get better as time passes, but the psychological issues stemming from harmful content are here to stay.


Unpopular opinion time - with everything I've said so far, we can all agree (hopefully) that knowing something is harmful and continuing to do it is a horrible thing. There are a lot of big brains at Facebook (the whistleblower herself has a Harvard data science background) - I'm certain there are many well-equipped individuals who know, objectively and from a data standpoint, that the behavior they encourage and the actions that bring in the most revenue ultimately harm the public rather than benefit them. However, given that everything they're doing is legal (at least for now, as Congress still seems stuck at "how does Facebook make money when it's free"), I'm not entirely sure there's an issue beyond ethics.


To reiterate what was noted before, Facebook's main obligation to "society" is bringing money to its shareholders and continuing to grow. As much as they can say they're about fostering a community and making the world a better place, and change their logo to some social justice theme every other month, at the end of the day Facebook is a corporation whose sole obligation is to bring in revenue and show growth on its 10-Qs every quarter. If they've figured out, through their giant employee base of big brains, that a certain type of content brings in the most user engagement - which then converts to dollar figures - there's nothing that prohibits or discourages them from continuing to exploit this understanding. In fact, one could argue (please do not misquote me on this - I'm not saying this is what you SHOULD do as an ethical and moral human being, rather taking a "logical" stance for the sake of argument) that this is the best path forward, and perhaps the right path forward, given their core mandate of making stock owners happy. Time spent on the webs is now a relatively good inverse measurement of one's wealth - people with more money are choosing to spend time in the real physical world because they can afford to do so, whereas the poor are turning to "free activities" in the "metaverse," where they indirectly pay with their data and time. So my other assumption is that the people who will benefit most from Facebook growing probably don't even use the platform to the extent that their minds are being morphed into those of negative, violent individuals.


I don't believe there's some secret Illuminati society using Facebook and other social media to control the masses or start wars on their behalf through status updates and keyboard warriors - human beings aren't smart enough to pull that off at a macro scale, at least not yet. I do think that the current incentive structure in place - whether it stems directly from capitalism or from this new gig economy we're rapidly learning to be a part of, I'm not sure - makes perfect sense with what Facebook has become and what I'm sure all other corporations will become. It wouldn't be surprising to find out that Amazon provokes unhealthy spending habits and FOMO about not owning something, that YouTube encourages everyone to be a content creator and feel bad about not having subscribers while all but forcing old people to keep watching highly polarized political content, or that Salesforce deliberately makes sales processes inefficient to deepen dependency on its platform (this one's a low-blow joke, as I've been having a lot of SF issues lately at work - sorry). Bottom line: I don't think anyone should be surprised that companies are knowingly doing "unethical" things. The surprise comes from a wrong belief that human beings will behave ethically and responsibly when given authority and power - as hard a pill as this may be to swallow, you need to accept that we all suck and are selfish at the end of the day.


Never in history have individuals and entities been able or allowed to wield so much influence at their fingertips. Napoleon and George Washington could command thousands, if not millions, of people if they wanted, but neither came close to the instantaneous reach we now have as individuals through the world wide webs. None of this should be news to people living in 2021 - I just think it's important to remind ourselves that we're only human at the end of the day, and such headlines shouldn't come as a surprise. Time to go check my Instagram stories - have a great day everyone.

