Opinion-based questions very often involve the respondent's own inferences because the question doesn't describe enough
Last year, or the year before, we had a huge meta discussion about the relative merits of a now-retired close reason known as "Idea-generation". The result of that discussion was the identification of question constraints as an essential component of an answerable question. If a question is insufficiently constrained, it can't be answered in the time and format available on WB. (Conversely, if it's overconstrained, it probably isn't a good fit for WB either, since it's too specific.)
While I didn't cast a VTC for this question, I probably would have, for the following reasons:
- The courier's ethical code wasn't identified.
- The question gives no hint about how the couriers might go about identifying good and evil. This alone would push me to VTC, because this is a question philosophers and ethicists have been asking for thousands of years, and there is no universal answer.
- There's a bit of a distraction in the question about how to conceal messages from hostile parties (but this is a minor point).
To answer this question, a respondent would have to infer the courier's ethical code (and the priorities inherent in all ethical codes), then invent a set of rules for matching messages against that ethical code. Since a huge portion of the answer depends on whatever the respondent pulls out of thin air, there will be far too much variation in the answers.
I also note that the question doesn't indicate anything about identifying second-order effects of a successful delivery method. How do the couriers deal with messages that don't explicitly order the death of innocents but indirectly cause their deaths anyway?
Edit
Without specifics about the ethical code, the situation, or the message to deliver, I don't think anything useful can be said about the decision function that determines whether or not to deliver a given message.
Tl;dr
Extract details from ethical system $a$, combine them with information extracted from courier situation $s$ and message content $m$, then define a mapping from all that information to "good" or "evil". Which information to pick from $a$, $s$ and $m$, and what to do with that information, is the algorithm contained in $D$. Some information will be deemed unimportant and discarded; conversely, other information will be deemed important and retained.
Since $a$ may specify conflicting outcomes, $D$ will need to be robust enough to resolve those conflicts, because $D$ must always resolve to "good" XOR "evil". Perhaps $a$ will be internally consistent, in which case $D$ doesn't need to include a conflict-resolution mechanism for conflicts within $a$.
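A minimal sketch of that pipeline in Python, assuming one invented encoding of an ethical system (the feature names, the `important`/`mapping` fields and the sample system are all mine, not the OP's):

```python
def decide(a, s, m):
    """Tl;dr as code: keep the features of s and m that a deems important,
    discard the rest, then map what remains to "good" or "evil"."""
    facts = {**s, **m}                                   # combine situation and message
    kept = {k: v for k, v in facts.items() if k in a["important"]}
    return a["mapping"](kept)                            # a's rule set does the judging

# One invented ethical system: couriers must never carry kill orders.
a = {
    "important": {"kill_order"},                         # everything else is discarded
    "mapping": lambda f: "evil" if f.get("kill_order") else "good",
}
print(decide(a, {"in_enemy_territory": True}, {"kill_order": True}))  # -> evil
```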
Long Answer
Given message $m$, courier situation $s$, ethical code $a$ and decision function $D$, resolve $$D(a, s, m) \in \{\text{good}, \text{evil}\}$$ Observe that $a \in A$, where $A$ is the set of all possible ethical systems. The OP hopes for a description of $D$. That's fine. $s$ and $m$ are free variables, dependent on the storyteller, and can be anything. Since $a$ can also be anything, including an internally inconsistent ethical system, how do we resolve those conflicts to a simple "good" or "evil"? Ideally, we want $D$ to resolve exclusively to "good" or "evil", never both and never neither.
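For concreteness, one hypothetical instantiation (the code, situation and message below are all invented; the OP fixes none of them): $$a = \text{"never aid a killing"}, \quad s = \text{"courier travels between warring states"}, \quad m = \text{"assassination order"} \implies D(a, s, m) = \text{evil}$$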
$A$ represents all possible ethical systems. It cannot be anything less, since the OP placed no constraints on $A$. We know that any ethical system will fall somewhere on the continuum between being complete but conflicted and being incomplete but consistent. No non-trivial system can be both complete and conflict-free.
Systems of the first kind, $a \in A_{complete}$ [complete but conflicted], make the job of $D$ harder, since $D$ must now resolve the extra conflicts within $a_{complete}$. Systems of the second kind, $a \in A_{incomplete}$ [incomplete but consistent], are amenable to our task of resolving $\{\text{good}, \text{evil}\}$, since they always provide consistent answers even if they can't handle all situations described by $s$ and $m$.
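The dichotomy is easier to see in code. Both toy systems below are invented illustrations (the feature names `kill_order` and `aids_enemy` are mine), not anything drawn from the question:

```python
from typing import Optional

Facts = dict  # e.g. {"kill_order": bool, "aids_enemy": bool} -- invented toy features

def incomplete_but_consistent(f: Facts) -> Optional[str]:
    """One clear rule; silent on every case it doesn't cover."""
    if f.get("kill_order"):
        return "evil"
    return None  # no verdict at all: the system simply doesn't address this case

def complete_but_conflicted(f: Facts) -> set:
    """Always answers, but its clauses can disagree."""
    loyalty = "evil" if f.get("aids_enemy") else "good"   # loyalty-to-sender clause
    pacifism = "evil" if f.get("kill_order") else "good"  # pacifist clause
    return {loyalty, pacifism}                            # both verdicts when they collide

print(incomplete_but_consistent({"aids_enemy": True}))    # -> None (no coverage)
print(complete_but_conflicted({"kill_order": True}))      # -> {'good', 'evil'} (conflict)
```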
The resolution of conflicts inherent in $a_{complete}$ must either happen in $D$ or be prevented entirely by requiring $a \in A_{incomplete}$. As mere data, $s$ and $m$ carry no interpretive power of their own to make assertions about their goodness or evilness; it is $a$ that makes assertions about the important characteristics of $s$ and $m$. $D$ combines the rule set embedded in $a$ with information from $s$ and $m$, then returns "good" XOR "evil"; or, more specifically, "will deliver message" XOR "will not deliver message". $D$ must be robust enough to handle instances of $a$ whose rule sets are inherently contradictory.
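One way to sketch such a $D$: give every clause of $a$ a priority and let the highest-priority firing clause win. The rule encoding, the priority scheme and the default verdict here are all assumptions of mine; nothing in the question dictates them:

```python
def D(rules, s, m):
    """Return exactly one of "good"/"evil", even when the rule set contradicts itself.
    `rules` encodes the clauses of a single ethical system a as
    (predicate, verdict, priority) triples."""
    facts = {**s, **m}
    fired = [(verdict, priority) for predicate, verdict, priority in rules
             if predicate(facts)]
    if not fired:
        return "good"  # a is incomplete here; a real D must still commit to one verdict
    # Conflict resolution: the highest-priority clause wins, so the result is
    # always "good" XOR "evil" -- never both, never neither.
    return max(fired, key=lambda pair: pair[1])[0]

# A deliberately contradictory a: a pacifist clause vs. a mercenary clause.
rules = [
    (lambda f: f.get("kill_order", False), "evil", 2),    # pacifism outranks payment
    (lambda f: f.get("paid_in_full", False), "good", 1),
]
print(D(rules, {"paid_in_full": True}, {"kill_order": True}))  # -> evil
```

Priority ordering is only one possible mechanism; lexicographic rule order, specificity, or explicit meta-rules would serve the same purpose.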