Ban Social Media Platforms From Censorship and Feed Algorithms - Communications Act

Granting companies the ability to create algorithms that serve their own ideals and agendas is a form of censorship. Consider the two recent updates to the X platform: hiding user posts that are blocked by ‘verified users’ (pay-to-speak), and muting replies and tags directed at ‘verified users’ unless the user was tagged initially (speak when spoken to). While paid ‘verification’ is acceptable, muting or demoting non-paying users’ content is not; it is a violation of our innate rights and subversive of our Constitution.

Collaboration in defining what restrictions companies like Meta, X and other social media platforms/video sharing sites would have to follow in order to end free speech infringement online would be most appreciated.

Here are some great points from, and collaborations with, @MWJones

Communications Act:

An act directing the FCC to declare that social media platforms, forums, video sharing sites and other internet media companies are hereby designated as “Common Carriers”, along with any website that allows users to share content of their own production and:

  • Exceeds a [to be determined] number of daily visitors.
  • Exceeds [to be determined] registered users.
  • Generates revenue in excess of $[to be determined].

Enacting that websites or other companies designated as a “Common Carrier”:

  1. Shall not deny service to any user.
  2. Shall not, by human intervention or algorithmically, demote or promote any user or their content based on viewpoint.
  3. Shall not remove any user’s content unless that content is clearly in violation of US Federal Law (so no foreign countries dictating removal of content produced by US Citizens) or under order of a Federal Court.
  • 3a - All content removals must have the final decision made by human reviewers located in the United States (so “bots” can flag content for human review but cannot arbitrarily make the decisions).
  4. Users whose content was removed shall be told exactly what violated Federal Law, and must have a clear path to appeal.
  • 4a - All appeals must be handled by human reviewers located in the United States, and all appeals must be resolved within 48 hours of the appeal being made (this includes restoring removed content on a successful appeal).
  • 4b - The companies shall provide compensation equivalent to 300% of the typical revenue of users who are financially impacted by content removals, if their appeal is successful (for example, if they receive advertising revenue and the company’s actions “demonetize” their content).
  • 4c - The companies shall, at least once per quarter, publish a public report listing the number of actions taken on content by legal reason, the number of appeals, and the number of actions overturned on appeal. The report shall not include any Personally Identifiable Information.
  5. Users’ content shall be stored on servers in the United States.
  6. User history shall not be disclosed to third parties absent a Search Warrant. Likewise, the history of US Citizens shall not be disclosed to any party outside the United States.

Who do you propose do the “banning”?

The First Amendment right to free speech generally protects private speech from governmental restrictions. It doesn’t protect against speech restrictions imposed by private entities.

Social media sites are generally owned and operated by private companies. As a result, they’re not bound by the First Amendment. Accordingly, any regulations they may impose on speech are not subject to First Amendment protections.

The First Amendment (Amendment I) to the United States Constitution prevents Congress from making laws respecting an establishment of religion; prohibiting the free exercise of religion; or abridging the freedom of speech, the freedom of the press, the freedom of assembly, or the right to petition the government for redress of grievances.

I suggest if you don’t like what Meta or X are doing, try Truth Social or Rumble. At this point they supposedly don’t censor.


Then maybe this thread should shift toward an amendment aimed at extending rights protections to social communities and holding private entities accountable for them. It’s detrimental to have a system where governments are barred from violating rights, but private legal structures aren’t.

Then any social platform in violation would get the same treatment TikTok is getting right now: an update block and pseudo-ban.

Since 1934, the Federal Government has had the authority, under the Communications Act, to establish “Common Carrier” services (this act also created the Federal Communications Commission). The Courts have long established that the Government has the authority to regulate private operators under this “Common Carrier” rule.

Under the “common carrier” rules, a carrier cannot deny services to a customer in its service area, provided that customer remains in “good standing” by paying for the services provided.

More recently, Obama, Biden and the Democrats have been pushing to designate the Internet Service Providers as “Common Carriers”, and thus be able to impose Net Neutrality. In this case, the courts struck it down, since it was not done by Congressional action.

The scenario proposed here, which I have considered the right move for over a decade, is to flip Net Neutrality on its ear. The Internet Service Providers are not the ones playing games with access; it is the Social Media companies.

Congress should pass an updated Communications Act that requires the FCC to declare that Alphabet (Google Search and YouTube), Meta (Facebook and Instagram), X Corp (Twitter/X), TikTok and all the other social media companies are hereby designated as “Common Carriers” and:

  1. Shall not deny service to any user.
  2. Shall not, by human intervention or algorithmically, demote or promote any user or their content based on viewpoint.
  3. Shall not remove any user’s content unless that content is clearly in violation of US Federal Law (so no foreign countries dictating removal of content produced by US Citizens) or under order of a Federal Court.
  4. Users whose content was removed shall be told exactly what violated Federal Law, and must have a clear path to appeal.
  5. Users’ content shall be stored on servers in the United States.
  6. User history shall not be disclosed to third parties absent a Search Warrant. Likewise, the history of US Citizens shall not be disclosed to any party outside the United States.

I know this is just a start, but those six bullet points would get the ball rolling. Since Copyright Law and Child Safety Laws are both Federal, the companies can still deal with those issues like they do today, but the messaging to the user must be clear and unambiguous. (In July 2024 I had a private playlist removed by YouTube for violating “Child Safety Policy” - yet I couldn’t get an answer as to which video - of the over 1,000 in that playlist - was in violation. I was not the creator of the videos, so the entire playlist was wiped, and I never saw a person’s name attached to any appeals, just a generic response denying the appeal.)

We can use this as a framework toward a detailed contribution.


Yes, thanks for the post! Mind if I use this above and credit you in the creation? Or do you have a thread including these points already?

You are more than welcome to run with it (I’m an engineer by day; I can come up with plans, but making the plans look good on paper tends to be my challenge).

After drafting my above comments, my engineering mind continued to come up with additional bullet points:

  • 3a - All content removals must have the final decision made by human reviewers located in the United States (so “bots” can flag the content for human review but cannot arbitrarily make the decisions).

  • 4a - All appeals must be handled by human reviewers located in the United States, and all appeals must be resolved within 48 hours of the appeal being made (this includes restoring removed content on successful appeal).

  • 4b - The companies shall provide compensation equivalent to 300% of the typical revenue of users who are financially impacted by content removals, if their appeal is successful (for example, if they receive advertising revenue and the company’s actions “demonetize” their content).

  • 4c - The companies shall, at least once per quarter, publish a public report listing the number of actions taken on content by legal reason, the number of appeals, and the number of actions overturned on appeal. The report shall not include any Personally Identifiable Information.

Of course I’ll probably think of more later, but we can certainly continue to collaborate on it.


Let me know if you have any notes, or additions. Thanks a bunch! I didn’t expect this much, this quickly.

Truth Social does in fact censor, even ban for life.

Regardless of who is doing what, any platform must be included under said regulations.

I agree, to make this pass legal muster, the enabling legislation must not call out the companies by name - it must target a specific parameter and any company that surpasses that parameter faces the regulatory bonanza.

In the case of the “TikTok ban”, it was targeting ownership by foreign adversaries, and placed China, Russia, Iran and North Korea into that foreign adversary category.

For this one, it could be “any website that allows users to share content of their own production”, and (pick one of the following):

  1. Exceeds a [to be determined] number of daily visitors.
  2. Exceeds [to be determined] registered users.
  3. Generates revenue in excess of $[to be determined].

This would allow startups to get going without being burdened out of the gate by Government regulation; then, once they’re successful, they face the common carrier designation and all the ramifications thereof.


I’ll make sure to include this, good insight.

Perfect way forward IMO. Slide them underneath existing regulations in order to force their compliance and limit their censorship. Building out from there, and closing loopholes, could immediately follow. I also love the idea of restricting U.S. user data and content review to in-country, keeping in line with ‘America First’ and within the scope of Constitutional protections.


I didn’t read all the comments so maybe this is mentioned.

In addition to being stored on servers in the US, the data should be owned by a US citizen, company, etc., and not foreign-owned.

I’d also like to see that user information on any website may not be shared, distributed, sold, etc. If they are going to do that then the user should get the profit since it is their information.

I think the rules should apply to anyone not just users who exceed visitors or users etc.


Would you want to ban Rumble? Rumble (the YouTube alternative) is actually based out of Toronto, Ontario, Canada, although their stock is traded on the NASDAQ stock exchange.

The already-passed Foreign Adversaries ban (aka the TikTok ban) deals with those countries that should not be influencing our youth.

If a company in Canada, EU or elsewhere wants to create the next big thing, fine, just know that they have to play by our rules once they hit a certain size.

See Item 6 in the above proposal - “User history shall not be disclosed to 3rd parties absent a Search Warrant. Likewise, history of US Citizens shall not be disclosed to any party outside the United States.”

I think that pretty much sums up this concern - they can use that information for their algorithms to make recommendations, but cannot disclose it to anyone outside their business without a search warrant (so the data cannot be sold, period).

While it would be great to slap these rules on everybody from day one, if you’re the next Mark Zuckerberg sitting in a dorm room creating the next big Social Media platform, do you really want to worry about complying with a bunch of regulations when you’ve only got 30 users and one cloud server?

That’s the idea of the threshold: let the business succeed or fail on its own merits without Government involvement, but if it is successful and gets to a big enough size, then know that it’s going to have to guarantee it plays nice with everybody.

My suggestions above were really meant to be “choose one of these”, not all or a combination of them - I was merely throwing out options.


I agree with these points. @DRSE @MWJones In fact I think we should clarify that private user data is not up for sale. Regardless of platform transparency, private user data shouldn’t be a viable ‘product’ (as technically stated in your 6th point).

Advertising should be done via research instead of espionage.

I do have a question for the both of you though:

Should public user data - like general posts, videos, bios, etc.; anything that a user willingly displays publicly on their account, profile, page, etc. - be viable for companies to share as statistical data rather than individualized personal user data?

Meaning, if Facebook or X (a ‘Common Carrier’) could detect an influx of people posting about, idk, Instapots (whatever), they’d still have the option to sell that data to a company like Amazon in order to indirectly advertise, based on something like ‘68% of users from the US make posts mentioning an Instapot’. Meaning that there’s still the functionality of online advertising, but it wouldn’t include any user data from private messaging, or private conversations caught by passive microphones (home assistants, phones, etc.).

Non-user-specific, public statistical data sales for online advertising. Companies caught selling private data or using espionage to collect data should feel the full effect of the law.

Otherwise, I am completely okay with a full ban on user data sales, private and public. Regardless, I think we should include wording that encompasses espionage data collection tactics (like passive microphones), and the strict ban on data sale, taking our wording a step further than the 6th point.

Sorry, I agree with Michael. Upcoming platforms and businesses need the opportunity to grow to the size of already popular media sharing platforms. This is why the final point of what defines a ‘Common Carrier’ is important when conjoined with the prior two.

I don’t think so. They should have to pay the individual for that data if they want it, just like a focus group. If someone wants to opt in to providing data, that’s their choice. But I think data should be private by default and therefore paid for to the individual. Just like at a store: they don’t follow you around, then follow you outside the store to get data. They don’t follow you home to see who you talk to and whether you mentioned any of the products you saw (or at least they should not be). Companies should be getting data from their sales.

Advertising should be random. That’s how people get exposed to new items, places, ideas, etc. If they target all the advertising based on social media, they are deciding what people are exposed to all the time. It’s a narrow view.


Yes. I saw his comment after I made the post. I agree in the ability to grow.

I also think that data is private for everyone. That’s the key point I am making.


Unfortunately, yes regarding even Rumble.

Love Canadians. Have friends.

But I do not appreciate the way Canada has been treating its own citizens the past several years. So I would have to say yes.

Personally I don’t believe in saving all this data in the first place. Before the internet and social media people said and did things all the time. They were able to move on to a new day. Make mistakes, say things they didn’t always mean, grow, learn, etc without it being engraved forever in some record. I don’t think data should be collected and saved forever.

You obviously have more knowledge on the business side of social media. But on the user side, I don’t want posts shadow-banned from friends and family. I don’t want my information collected and shared. And so on. I don’t want to have to have a certain number of users or visitors to have protections or to have my posts seen.

Completely agree.