While I agree with the proposal in spirit, I cannot agree to this policy as stated, as it transfers undue responsibility for things I cannot control.
If a downstream user consumes my software to violate this policy, I cannot do anything about that, as any mechanism to prevent such uses of my software would make it rightly non-free. But I might be hit with a punitive action as a result of this policy. That's not acceptable, and it is anti-democratic.
I also fear that this will lead to bad faith actors having a mechanism to use for concern trolling: software foo is being used on some bad instance, so we must ban the developers. The fact that "primarily used for" is included in this language does not assuage these concerns.
Please revise this policy to explicitly narrow the scope to things developers can actually control. Thanks.
Some questions to consider when revising this policy statement:
- What criteria are used to determine whether a piece of software is designed primarily to violate the policy statement?
- What criteria are used to determine whether a piece of software is "primarily used" for violating the policy statement?
- How is "primarily used" defined? Who defines it?
- What are the definitions of all terms in the policy statement bullet points for the purpose of enforcing the policy statement?
- What is the definition of "tolerate such behaviors within their communities"? What is the definition of "good faith effort"?
So far, of the software people are actually using in production, the only developer who has agreed to this is the Pixelfed author. If we are going to be held accountable as vendors for the actions of people downloading software from our websites and installing it, then we may as well not participate in Socialhub as vendors and restart Litepub instead.
I really cannot reiterate this enough. The whole value of SocialCG is that it is a mechanism for production implementations to work with each other. Those implementations do not need SocialCG, but SocialCG gains legitimacy from their participation here. If you impose conditions that are an undue burden on us, we will ultimately collaborate elsewhere, as there is no point in working in a space where our ability to do work might be interrupted due to issues outside our control.
And, to be clear, I agree with the spirit of what you all are trying to accomplish, but developers have to approach technical community participation from a risk analysis perspective.
As presently worded, if an instance deployed using the software is not aligned with the proposed policy statement, then the project in question is required to denounce that use ("have not made a good faith effort to discourage such behaviours"). What happens when a project is unaware of such an instance?
What happens when a project does not denounce that instance, because they do not care about things they cannot control anyway? Why should a project be required to take a position on "bad" instances?
I want to spend my time writing code and doing technical work which actually benefits society, not waste my time releasing PR statements about instances deemed in violation of this policy by a concern troll.
So, until this is substantially rewritten to clarify precisely what responsibilities I have as a developer of an AP project, I cannot agree to this proposal, and must assume the most draconian possible interpretation, in the same way one must assume the most draconian possible interpretation of any other policy statement.