Summary: Section 230 should be reformed so that a social media company must essentially be either a platform or a publisher. Social media companies must be protected from liability for user content, or they will cease to exist, which would be detrimental to our freedom of speech. But they must be held accountable for what they promote or throttle, whether manually or algorithmically. They must be held accountable if they attempt to enforce hidden policies on their users. They must also stop data-mining their users. For this to happen, Section 230 needs to be updated:
Any platform that polices its user content in a manner inconsistent with, or outside of, its clearly stated user policies should lose Section 230 protections. This includes algorithmic policing of content.
Any temporary or permanent ban, any throttling ("shadow ban"), any removal of user content, and any refusal to post user content that the platform pre-reviews must be disclosed to the user. The platform should be required by law to state clearly why the action was taken and which user policy or policies were violated; failure to comply should remove its protections under Section 230. Users should be permitted to revise the content or to appeal, and should have a path to suing the platform if they believe these requirements have been violated.
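To make the disclosure requirement concrete, here is a minimal sketch of the kind of notice record a platform could be required to produce for every moderation action. The ModerationNotice structure and all of its field names are hypothetical illustrations, not drawn from the statute or from any existing platform's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of a moderation action, illustrating the disclosure
# requirement: every ban, shadow ban, or removal must name the action taken,
# the specific stated policy violated, and the user's revise/appeal options.
@dataclass
class ModerationNotice:
    user_id: str
    content_id: str
    action: str               # e.g. "temporary_ban", "throttle", "removal"
    policy_violated: str      # the clearly stated user policy that was broken
    explanation: str          # plain-language reason for the action
    can_revise: bool = True   # user may edit the content and resubmit
    can_appeal: bool = True   # user may contest the decision
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

notice = ModerationNotice(
    user_id="u123",
    content_id="post456",
    action="throttle",
    policy_violated="Policy 4.2: Spam and repetitive posting",
    explanation="This post was shown to fewer followers because it repeats "
                "identical text posted 14 times within one hour.",
)
```

The point of the sketch is that the action, the specific policy violated, and the revise/appeal options are all explicit and delivered to the user, rather than applied silently.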
If a platform's algorithms influence which kinds of content are promoted on its site and/or which content is throttled, the platform should be held liable for that content and lose its protections. The algorithm inherently becomes part of the creation process by promoting, rewarding, or discouraging the creation of certain kinds of content. This makes the platform in part responsible for the content created, and it should therefore lose its protections under Section 230.
Better still, all algorithms for content suggestion, promotion, throttling, approval, or removal should be open-sourced in order for Section 230 protections to apply. Users should always have the option to bypass algorithmic suggestion in favor of a chronological or similarly non-algorithmic timeline. Algorithmic content suggestion and promotion should be opt-in only.
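As a rough illustration of the opt-in requirement, here is a minimal sketch of a feed selector in which the chronological timeline is the default and the ranked feed runs only for users who explicitly opted in. The function names, the preference key, and the placeholder ranking are all hypothetical.

```python
# Hypothetical sketch: chronological by default, algorithmic only on opt-in.
def build_feed(posts, user_prefs):
    if user_prefs.get("algorithmic_feed_opt_in", False):
        # Opt-in path: a ranking model may be applied; under this proposal
        # its source code would also have to be published.
        return rank_by_declared_interests(posts, user_prefs)
    # Default path: newest first, with no algorithmic influence.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def rank_by_declared_interests(posts, user_prefs):
    # Placeholder ranking that uses only interests the user entered
    # themselves, consistent with the data-collection rule further below.
    liked = set(user_prefs.get("declared_interests", []))
    return sorted(posts,
                  key=lambda p: (p.get("topic") in liked, p["created_at"]),
                  reverse=True)
```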
Every content algorithm that has ever gone live on the platform should remain cataloged by the platform for a set period of time, available for review if legally subpoenaed.
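One way a platform might satisfy this, sketched here with hypothetical field names: keep one catalog entry per algorithm version that ever served live traffic, pointing at a frozen snapshot of its source.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical catalog entry: one record per algorithm version that ever
# served live traffic, retained for the legally required period so it can
# be produced under subpoena.
@dataclass
class AlgorithmCatalogEntry:
    name: str                  # e.g. "feed_ranker"
    version: str               # e.g. the commit hash of the deployed code
    deployed_on: str           # ISO date the version went live
    retired_on: Optional[str]  # ISO date it stopped serving; None if current
    source_snapshot: str       # location of the archived source for review
```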
Platforms should be protected from being sued by advertisers based on what content their advertisements appear next to. Platforms should be required to apply advertisement algorithms on an opt-in basis only. However, platforms should still take the surrounding user content into consideration when placing advertisements intended for children, where appropriate.
Any algorithm that promotes or suggests content or advertisements to a user should be based on information voluntarily and actively given to the platform by the user in an opt-in manner, in the form of user preferences, and not on data collected automatically. This includes data collected by parsing the user's own content created for use on the platform. If content or advertisements are suggested or shown to users algorithmically using any data automatically collected by the platform, or if the platform knowingly uses an advertiser doing the same, the platform should lose its protections.
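A minimal sketch of how that separation could be enforced in code, assuming a hypothetical user record that keeps actively entered preferences in one bucket ("declared_prefs") and automatically gathered data in another ("collected"):

```python
# Hypothetical: only fields the user actively entered may feed targeting.
ALLOWED_DECLARED_FIELDS = {"declared_interests", "language", "age_bracket"}

def build_targeting_profile(user_record):
    """Build the profile passed to any suggestion or ad-placement algorithm.

    Only preferences the user voluntarily entered are included; the
    automatically collected bucket (browsing history, parsed post text,
    location, and so on) is deliberately never read.
    """
    declared = user_record.get("declared_prefs", {})
    return {k: v for k, v in declared.items() if k in ALLOWED_DECLARED_FIELDS}
```

Under the proposal, feeding anything from the automatically collected bucket into this profile is exactly what would cost the platform its protections.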
All of the above should include carve-outs for platforms that are not operated for profit and have a sufficiently small number of users, so as not to overburden smaller or hobby sites.