111

I would like your suggestion on a delicate matter involving a paper that I am reviewing. I will avoid giving too many details to preserve anonymity.

In this article the authors describe an algorithm for the autonomous control of an armed military drone. The work was done entirely in a simulator, but the authors show clear pictures of the type of real-world military drone they reproduced, explicitly stating that the real drone can be armed with different types of bombs. The algorithm's performance is scored by the number of times the simulated drone autonomously hit the target. The authors never discuss possible ethical concerns of this technology, nor do they explicitly say whether it will be used on real drones to strike human targets, with the possible involvement of civilians (e.g., in counter-terrorism).

My personal position is against the development of lethal autonomous weapons. Even though I have several years of experience in reviewing papers, this is the first time I have had to deal with such a problem, and I am not sure how to proceed. I think that these kinds of ethical problems are common in medicine-related fields but rather uncommon in engineering, and that they are catching our community off guard.

Note that, from a formal point of view, the situation is fuzzy. Even though there has been an international petition against the development of autonomous weapons, its formalization into an international ban (like the one for chemical weapons) seems far away. Moreover, the ethical guidelines of the journal apply only to the use of animals in research and to experiments on human subjects. This leads me to conclude that, from a formal standpoint, the paper is legitimate, since it does not break any rule.

I have considered writing to the editor to express my concerns, or pointing out these concerns directly in the review so that the authors can respond to my queries. As a last resort I am also considering withdrawing as a reviewer, but I would prefer to set up a dialogue with the authors and the editor, since this is more constructive for everybody. However, I think it is also important to consider the possibility that my ethical position may bias my review (even though I am trying to be as neutral as possible), which would be unfair to the evaluation of the technical quality of the proposed method.

My question is: how should I proceed in the review process? There are a few options: withdraw over ethical concerns, just review the article, or review it and express my ethical concerns to the editor (and maybe to the authors).


Update: 5 March 2020

Thank you everyone for your answers; they really helped me to better understand the situation. I decided to withdraw as a reviewer, and I sent an official letter to the editor. In the letter I expressed my ethical concerns without making any comment on the technical quality of the paper. The paper did not break any formal rule; therefore, recommending an official rejection would have been like imposing my personal ethics on the review process. In the letter I also reported those sentences that clearly framed the drone as a lethal autonomous weapon.

The assistant editor wrote to me saying that the editor liked my academic attitude and wanted me for a second review. The editorial team has contacted the authors, who said they did not want to create a misunderstanding and that they are willing to modify the expressions. This seems to imply that the editors forwarded to the authors the list of sentences I had confidentially reported, and that the authors want to adjust those sentences in the revised version.

My personal opinion is that merely rephrasing the paper without changing the substance would not be enough. You can replace "target" with "goal" and "attack" with "reach", but it remains an autonomous armed drone.

17
  • 3
    Is target selection autonomous, or just the attack itself after a human operator selected the target? I'd say the first one is dangerous, the second is not. There have been guided missiles, cruise missiles, laser/GPS guided bombs, etc. for a long time, and it's better to use those and not carpet bomb the whole area WWII-style.
    – Nyos
    Commented Mar 4, 2020 at 15:41
  • 2
    @Nyos the paper did not discuss this particular detail. The target is given, but there is no description of how it was acquired. However, note that the distinction between a Lethal Autonomous Weapon (LAW) and a Guided Bomb Unit (GBU) is not based on the way the target is provided, but on the degree of autonomy of the weapon itself. LAWs have a higher level of autonomy and the ability to make autonomous decisions about navigation and attack. The drone described in the paper is clearly a LAW. I invite you to take a look at the Wikipedia page on LAWs for further details. Commented Mar 4, 2020 at 16:28
  • 2
    @Nyos even that can easily be seen as ethically problematic. Sure, a precision drone strike that kills just the person you want to kill, and maybe a few collaterals (objectifying humans here^^) right next to that person (clearly their fault^^), may be better than carpet bombing the whole village. But then again, not killing anyone is better still. I.e., by providing such precision weapons you may make it more likely that such killings happen at all. So there can be reasons for concern no matter what type of drone/weapon is being considered. Commented Mar 4, 2020 at 17:54
  • 1
    @FrankHopkins but haven't you abstracted it too far in the other direction? Should one refuse to review any research that could potentially be used in any kind of weapon? For instance, would you advocate for refusing to review research on aerodynamics, because drones use wings to fly?
    – dwizum
    Commented Mar 4, 2020 at 21:12
  • 2
    @GenericJam "I'm not going to stop this instance of A because some other instance of A will 'surely' not be stopped (by another person, because they will cite me as their excuse^^)" is a pretty weak excuse not to stop the instance of A you can stop. That being said, the fact that technology can be used in different ways is precisely why ethics is never fully out of the picture. And it's not black and white either. If I could choose not to invent a deadly virus able to kill humanity, and by doing so I would only delay its invention by 3 years, I'd still consider that an ethically good decision. Commented Mar 5, 2020 at 16:27

11 Answers

102

I suggest you withdraw and send a formal letter to the editor expressing your concerns. Cite a larger moral universe than the narrow one covered by the journal's policies. Suggest that the research itself is unethical and recommend, without detailed review, that the paper be rejected.

I've been in a similar but less fraught situation. I was the shepherd of a paper that crossed a less dangerous boundary. I had repeated contact with the authors and tried, over several communications, to convince them that the actions they were proposing were wrong. I failed to convince them. The paper was rejected. I was actually given an award for my efforts to convince them, even though I failed.

My experience suggests that the people who do this sort of thing are not going to be convinced by a dialog of any kind. They are too invested in it to bother to listen.

Don't wait for "the powers that be" to decide this stuff is immoral and leads to a bad end. If people like reviewers at the bottom of the pyramid don't complain about it, this sort of research will continue and will get implemented. There are too many historical examples of this to ignore the consequences.

1
  • Comments are not for extended discussion; this conversation has been moved to chat.
    – cag51
    Commented Mar 3, 2020 at 21:48
39

The OP is undoubtedly aware that this is a difficult question with some grey areas. It may also be difficult to provide a single objective answer, so I will first highlight some courses of action and then make a personal suggestion.

(1) Reject on the grounds that the work is unethical. A reviewer has the moral right to do this. Of course, if there is no supporting legislation or precedent, this can become quite subjective, and the reviewer should be prepared for a rebuttal.

(2) Reject on academic grounds. The approach would be to point out that the authors have not done a rigorous job, since they have not considered the possible consequences of such technology, and this is an important part of technology development.

(3) Review the paper on purely academic merits, but in the review, convey explicitly to the editor that this work may have unintended negative consequences and that your review does not consider these.

(4) Scrutinise the journal and see if it publishes articles that are (in the reviewer's opinion) problematic. If so, withdraw from the review AND raise awareness about this journal publicly through media, academic networks and so on. (Write to the journal as well, since that is an expected courtesy.) Without the second action, the first is sterile and comes close to evading responsibility.

Yes, none of the options are easy, and they involve either inviting backlash or carrying guilt. One must take a decision based on one's fortitude and appetite for criticism. I personally cannot recommend taking the softer way out, so my recommendation would be either (3) or (4) depending on how strongly one feels about the issue.

EDIT:

Following a suggestion in the comments, it may be worth differentiating the background of (3) and (4).

In (3), where one conveys one's apprehensions through a private communication to the editor, the moral burden is placed squarely on the editor/journal. The reviewer has done a purely academic duty and is satisfied with communicating apprehensions privately. There could be some good reasons to adopt this stance: trusting the ethical/moral core of the editor/journal, or believing that this grey area should be dealt with within professional confines and not publicly.

In (4), one does not absolve oneself of any moral responsibility, and chooses to tackle ethical concerns personally. This means accepting responsibility not only for backlash, but also for the successful dissemination of the message. It is inherently the more difficult, but more responsible and possibly more satisfying, approach.

7
  • 1
    This is a good answer, but it would be improved by elaborating on the difference between conveying warning about the morality of the technology in the review itself vs in private correspondence with the chair. Commented Mar 2, 2020 at 16:08
  • I agree with @StellaBiderman, this looks like a good and balanced answer. Regarding point (4) I did not find any similar article published in the journal, so I discard this solution. At the moment I opted for option (3), writing a paragraph about my ethical concerns and a detailed technical review. Commented Mar 2, 2020 at 20:07
  • 16
    You can also just recuse yourself from the review process on personal ethical grounds. Rejecting the paper on ethical grounds is up to the journal to decide - it's not correct to submit a rejection on that basis if the editor is otherwise OK with publishing the content. You don't have to agree to review it, however.
    – J...
    Commented Mar 2, 2020 at 20:54
  • 3
    While I like your presentation of the courses of action, I disapprove of your recommendation. (3) will not do.
    – einpoklum
    Commented Mar 2, 2020 at 21:11
  • 2
    @WoJ in computer science it is typical for authors to be able to write a response to the reviewers, presenting a counterargument for any concerns raised and clarifying points of misunderstanding. Then the reviewers update their reviews based on the response and a final decision is reached. This response is referred to as a “rebuttal.” In common English, a rebuttal is an argument that counters someone else’s argument. Commented Mar 3, 2020 at 15:50
21

If it really troubles you, withdraw from reviewing the paper and tell the editor about the ethical concerns.

If you reject the paper for anything other than academic reasons, you're implicitly imposing your ethics on the authors. Ethical statements that make perfect sense to you might not make sense at all to someone else; the very fact that the authors did the research indicates they don't find autonomous armed drones ethically abhorrent. You could claim that your ethics are "better" than theirs, but that is not certain, and a reasonable person could take the reverse position. If you want nothing to do with the paper for ethical reasons, decline to review it, and let someone who is less troubled by the ethics review it.

That said, you can point out the issue to the journal and make sure they are OK with the ethics. It's possible the journal will agree with you that autonomous armed drones are unethical and will reject the paper without review. If they publish anyway, you can distance yourself from the journal by not reviewing for them and/or publishing with them in the future.

2
  • 4
    the very fact that the authors did the research indicates they don't find autonomous armed drones ethically abhorrent, or they silenced their inner conscience for the benefit of subject interest, bank statement, social status, peer pressure, or a combination of other such reasons. It wouldn't be the first time.
    – gerrit
    Commented Mar 3, 2020 at 14:09
  • 3
    True about any number of things that significant numbers of people find abhorrent. From eating pork to paying income tax.
    – puppetsock
    Commented Mar 3, 2020 at 17:58
18

You may be overthinking this. A reviewer does not accept or reject a paper. A reviewer recommends that the editor accept or reject a paper.

If you believe the journal should not want to be involved with weapons research, recommend rejection based on moral grounds. If the editor thinks moral grounds should not be considered, or believes weapons research is ethical, they may ignore your recommendation.

The difficult decision here lies with the editors¹, not with the reviewers.


¹Coming up with more extreme examples of research that has not directly harmed anyone in its methodology but still has strongly unethical implications is left as an exercise to the reader.

5
  • 1
    This would give grounds for religious reviewers to recommend rejection of papers that go against their theology. I don't think a reviewer should ever make a recommendation on anything other than the scientific content of a paper. Commented Mar 4, 2020 at 16:13
  • 6
    @Persistence If a reviewer holds the belief that a journal should not want to be involved with abortion research, then it's their right to recommend so. The editors will very likely disagree and probably not ask this person as a reviewer again. No harm done. I do think that "is this research ethical?" is a relevant question for reviewers to consider, the footnote in my answer contains a link as to in what direction I'm thinking when taken to its extreme.
    – gerrit
    Commented Mar 4, 2020 at 16:30
  • Agree with gerrit. There is a multi-dimensional continuum of morals. This example is definitely in a grey area where reasonable people may disagree, as the answers show. But for an individual reviewer, black or white can be easy choices. The editors are the ones faced with the difficult task of reconciling multiple inputs.
    – MSalters
    Commented Mar 5, 2020 at 15:17
  • 1
    @ScottishTapWater I don't see the issue with a reviewer recommending rejection because a paper goes against their theology, so long as they explain that's why they recommend rejection. The editors are perfectly capable of handling such a recommendation appropriately.
    – cjs
    Commented Jan 2 at 20:20
    Well, they could, but they shouldn't have agreed to review anything they can't be objective about; they've wasted everyone's time. Commented Jan 3 at 17:39
18

TL;DR: You should simply set aside your moral misgivings for the purpose of reviewing this paper.

For disclosure: I work in a defense related field.

In summary:

  1. The questioning you may be proposing is probably much more complex, and involves more hard-to-draw lines, than you may think.

  2. Any political stance held by the journal would be highly questionable, and in any case it is not up to you to decide on it.

  3. It is out of scope to have an ethical discussion in every such technical paper, even though codes of ethics should be known and adhered to by all professionals in every field.

  4. As far as established ethical guidelines on this kind of publication exist, none have been breached according to the present question (as you have pointed out yourself).

  5. You, as a person and a human being, have the option to refuse to review the paper. But don't expect others to do the same, or bully them into it.

In detail/ranting mode:

Despite this being an engineering paper, you really should think about the framework of science in general. "Science" and "ethics" are different things, which intersect within "actions". The paper does not claim to kill people for the purpose of testing; it only talks about simulations. Hence, due to the lack of actions, the science part is distinct from the ethics part here, so in my book there is absolutely no ethical concern whatsoever that a reviewer should care about while reviewing the paper (other than faked simulated results, dishonest reporting, and other issues applicable even to pure mathematics).

I see a lot of technical challenges within the fields of autonomous weapons/drones/cars and so on, and those are fields in which people actively work and build their careers, for good and for ill.

If you are against the development of autonomous weapons, that is a political stance of yours which (IMHO) should not be endorsed by any scientific journal as an entity. You could check whether the journal that requested your review has some related policy, but I doubt this is the case. Hence, it is not part of your review task to make political assertions about the papers you review.

Where exactly do you draw the line? Autonomous lethal weapons are off-limits to you, but what about face-recognition AI? There are also concerns about the unregulated development of AI applications, and even proposed temporary bans on the use of this technology. Do you expect every AI journal to cease publishing until further notice? Maybe you would like all weapon-performance publications to cease as well, because weapons in general are bad? I get thousands of results from searching "missile efficiency" on Google Scholar, all of the first pages talking about engineering with no ethical discussion. Should all those papers have been rejected as well?

Would it ease your conscience if the "theme" of the paper were a bit more disguised? Maybe you would have nothing against a purely mathematical essay on Monte Carlo methods, but guess what: they trace back to the atomic bomb project. Back when I was an undergraduate student, I worked on a project where a simulated drone would take pictures of a target. Oddly, the camera would always break after the first picture was taken. It took me a few minutes to notice it was meant to simulate a bomb. That was the only time I have seen this topic treated in a way that avoided shaking up the unaware; apparently it was an act of empathy on the part of the teacher.

Bluntly, if my (technical/theoretical-only) paper were rejected due to a reviewer thinking that "this technology should be banned", I would see this as no different from him/her saying "my religion disagrees with what you are saying". Which did happen in the past: stories tell that Napoleon once questioned Laplace about not dedicating a word to God in one of his treatises. Of course, this gets blurred in humanities fields, where the contents of a poor publication may translate into explicitly forbidden things like hate speech, fighting words, and so on. There is also a problem in medical/biological sciences, where any decent publication needs empirical results (i.e., while I can simulate a drone strike, I cannot simulate a poison acting on the human body). But this questioning is not about the scientific merit of the paper.

Also, don't ask the authors to include an ethical discussion in their paper. It is simply out of scope, and frankly that is a humanities field, not an engineering one. On the one hand, I am fully equipped to write and read the scientific literature on weapon engineering (in the fields I work with); I did take a course on ethics during undergraduate school, and I am aware of and adhere to the codes of ethics applicable to my profession. But I am a complete novice when it comes to research-level ethical/political and legislative sciences. I cannot, and should not, be expected to jointly review and discuss the latest literature on the ethics of autonomous weapons and their technical inner workings every time I write about algorithms for weapons. I just do the latter. An exception could be made for adding some standard boilerplate text like "Disclaimer: This paper does not endorse murder". But only lawyers care about boilerplate text, and I am guessing you are not a lawyer.

Look, I work in a defense-related field and know people who work on products that may eventually (deliberately) kill people. Really, the ethical discussion gets old. You may spend a few days deciding whether you are okay with it, but once you are done, the discussion is over: move on and either do the job or refuse it. Revisit your values when you feel like it, but not every day (or on every paper, for the current example), and don't go around questioning your colleagues about this every chance you get. I am not minimizing it; companies that develop such products should, and generally do, have psychologists to support people who grow uncomfortable with the situation. And they do ask whether a potential employee is okay with working on this kind of product, because it deserves some thought. But these questionings reach conclusions, and insisting on the discussion is just bullying others into adhering to your world view.

While some people have objections to working with weapons (particularly consumer-available guns), and do "vote with their feet" by refusing to work in this field, the practicalities of life mean that there is a very thriving "defense" industry, along with armed police officers, armed forces and so on. There are simply far more votes in favor of weapons existing and being developed than against, like it or not, agree with it or not. And all the more demand for further weapons production and development.

12
  • 14
    Academics need to focus on actions and outcomes for this kind of thing. Definitely not a clear-cut thing
    – safetyduck
    Commented Mar 3, 2020 at 21:52
  • 11
    "...it was an empathy move..." I find such thinly veiled euphemisms funny. It makes you wonder whether the teacher was insulting the intelligence of the students who may have had ethical concerns about the project. Something along the line of 'either you are a true patriot or you're dumb'. Another one of these is the 'defense' category of field, company, etc. which implies that the weapons are intended solely for the defense of the nation, and thus ethically acceptable, when in reality they are far more likely to be used for assaulting other sovereign nations. Commented Mar 4, 2020 at 7:52
  • 40
    I have voted this answer down, because I feel it argues that academics shouldn't concern themselves with ethics, a view with which I disagree strongly.
    – j4nd3r53n
    Commented Mar 4, 2020 at 10:08
  • 14
    Ethics should be the base of any human activity. Pragmatism and "objectivism" in science are a threat to humankind.
    – Juan
    Commented Mar 4, 2020 at 12:55
  • 27
    I find it controversial that someone directly benefiting from the arms industry advises an academic to ignore their moral misgivings and carry on reviewing a paper. Note that an academic is not financially compensated for this review, which at least gives them full freedom to follow their moral compass. Commented Mar 5, 2020 at 8:10
13

tl;dr: Show backbone and reject the paper.

@Applied academic presented four courses of action in their answer; I'm referring to those, and you could go read that first (but you don't have to).

My recommendation is:

  • Start with (4): Check whether the journal is a problematic venue. You might have made a mistake if you've associated yourself with a journal which occasionally legitimizes assassinations and militarized lethal policing using drones, or other similar pursuits.
  • If it is a problematic journal, just follow through on (4).
  • If the journal rarely, or never, publishes such work: Do (1). That is, reject the paper (or rather, recommend that it be rejected) on ethical grounds. The fact that there are ethical guidelines and accepted bans means you have enough ground to stand on in claiming that such research should not be condoned, legitimized, and disseminated via the journal. Try to make your argument both on general principle and, if you have credible information about the specific research you are reviewing, on examples of abuse of drone technology by the authors, their institute, their funders or their government. Unfortunately, such examples abound in our world today.

I'm against option (2), because complaining about lack of an "ethics section" suggests implicitly that the paper overall is close to being legitimate. I'm also against option (3), because that means you've "passed the buck" - you've done your part in getting that paper published, and you're now hoping the editor has more backbone than you have. It's doubtful they will, after seeing you've gone along with it.

Note: @DanRomik emphasizes in a comment that a reviewer does not actually reject or accept a paper, but rather recommends acceptance or rejection. That's well worth remembering even if we (and this answer) use the terms "reject" or "accept".

8
  • 15
    Rejecting a paper on the basis of ethical considerations of this kind is, itself, not ethical.
    – puppetsock
    Commented Mar 2, 2020 at 22:18
  • 11
    @puppetsock: 1. By what ethical standard? 2. That journal already supposedly applies ethical considerations, according to OP.
    – einpoklum
    Commented Mar 2, 2020 at 22:25
  • 5
    It makes sense to clarify that reviewers don’t reject papers, they only recommend rejection. And if the recommendation is based on a reason the editor finds irrelevant to the task they asked the reviewer to perform, they will simply send the paper to another reviewer. So unless you are suggesting for OP to lie and reject the paper based on spurious, made-up technical grounds, this symbolic protest rejection is no different than simply withdrawing from the review.
    – Dan Romik
    Commented Mar 2, 2020 at 23:08
  • 9
    I do take issue with “show backbone”. This is an aggressive and offensive thing to say, since you are implying that if OP chooses an alternative mode of action then they lack backbone (and so does anybody else reading your answer who disagrees that this is the best thing to do). I suggest that you also “show some backbone” and edit your answer... ;-)
    – Dan Romik
    Commented Mar 2, 2020 at 23:29
  • 10
    @DanRomik: The implication is there for OP, but not for other readers. OP has indicated he has ethical issues with the paper, but is hesitant to act upon them. So yes, I think that if he lets what I believe he believes are excuses keep him from acting, he will be exhibiting lack of backbone. I'm hoping OP will appreciate this kind of emphatic stance.
    – einpoklum
    Commented Mar 2, 2020 at 23:47
9

You have a number of viable choices, obviously. My personal bent is that if you choose to recommend turning the paper down based upon the ethics of weaponized drones, you should do exactly that: state that opinion in the review and get off the stage, refraining from reviewing the scientific matters.

This way your objection is noted, you haven't made yourself part of the bad ethical situation, and you give the editor opportunity to follow the recommendation or seek further review.

The other extreme is to recommend turning the paper down, not saying it's because of ethical reasons, and searching for scientific justification for your recommendation. I would personally consider this scientific sabotage, and thus recommend against it.

The middle ground is to state your conviction, but offer a fair scientific review. I believe this is hard to do, and it brings your review into question. I would refrain from doing this.

Personally, if I had this particular moral conviction, I would simply contact the editor and decline to review the paper, stating why.

Defense industries have driven a whole bunch of technology. There are those who further this effort. I can understand refusing to be a part of it, but I draw my personal line at sabotaging the field with disingenuous reviews. There are ways to have your convictions come through well short of this: turn down the review, write a letter to the editor for publication, work on the leadership of professional organizations to end the policy of publishing such material, recommend that your librarian discontinue subscriptions, etc.

7

Summary: ethical concerns come in degrees.

  1. Some are sufficiently standardized worldwide, in academia (or even among the general public), that violating these standards means the paper should be rejected on ethical grounds.

  2. Other ethical concerns are more personal, in the sense that a reviewer may hold them personally but they are not so widely agreed upon/standardized that the authors should be held responsible to them.
    OP should, IMHO, tolerate that others may have a different point of view on these concerns (2.).

    Still, general personal ethical concerns (2.) often include ethical concerns that are widely accepted (1.) but narrower in scope. These, again, are on topic for reviewing the paper.


I'd suggest analysing the (un)ethical points a bit further.

I've never had anything to do with arms research, but in my field animal experiments are relevant, so I'll use them as an example for which I've considered my personal ethics, and for which I have also encountered other people's ethical considerations.

Here's my point of view:

  • Is the unethical point something that is "universally"* agreed upon as unethical by academia or even the general public? In other words, is there a current mainstream consensus that says it's unethical, and possibly even standardized formal ethical requirements that are violated? If so, reject on ethical grounds.

    Examples would be plagiarism, presenting falsified results or observations or a study design that is considered unethical by the rules ethics committees work on (if there's relevant local variation, I'd judge by the local rules where the authors are).

    * I'm after a level of agreement that is stronger/wider than "my professional friends agree with me", and also stronger than "my local community/society" or "my religion says": e.g. my country has stricter laws (which I take as a surrogate for ethical agreements) on experiments on humans than some other countries. But these differences do not mean that the other society thinks causing harm to humans is ethically fine; the differences result from nuances in where exactly the line is drawn, how much suffering a particular procedure is judged to cause, how much suffering is thought bearable, and how much suffering is considered worthwhile. So there may be some cases where the judgment differs, but in many cases the ethical judgment will agree. And that is what I'm after here: a "current mainstream consensus" (thanks @ObscureOwl) across continents, countries, religions, ....

    (I don't think one can claim that every single person agrees on any given ethical point - but asking for such an absolute agreement is pointless for the practical purpose here since it could not be used to determine whether someone's acts are unethical: as soon as they'd disagree, the required agreement could not be reached.)

  • But I may have ethical objections that result from my (OP's) world-view/philosophy/religion/..., where I'd say that, while it is perfectly fine to object personally, I (OP) have to tolerate that other people hold other points of view. Here,

    • a rejection as above is not possible. It would be an abuse of power for a reviewer to force their personal opinion onto the authors, and therefore unethical.

    • I may say "I won't be associated in any way with any such research" and refuse to act as a reviewer.
      Unfortunately, this won't do anything to diminish the unethical research.

    • Sometimes, digging a bit deeper into why exactly you object to this research can help you arrive at a more constructive solution: I do the review, and I make my concerns heard in a fair and relevant manner.

    General personal concerns, when expressed on a more detailed level, may translate into ethical concerns that are universally agreed upon (on a narrower scope than the more general concern), or sometimes also into scientifically relevant points.


Example from medical research

I once got a paper for review in which more than 100 rodents had been used and killed for research. The paper reported proper ethical approval and proper handling (so far, the universally agreed-upon ethical requirements were fulfilled). The authors had studied a whole lot of different conditions.
The many different conditions meant that the subgroups in the experiment contained only very few animals each. While at first glance it may seem good not to make more animals suffer, it really meant that the whole study was worthless, because no reliable conclusions could be drawn. This could have been known in advance.

Thus, we have

  • the scientific concern of no proper sample-size planning, leading to
  • the study results being too uncertain to be of any scientific value, which in turn leads to
  • the universally agreed-upon ethical concern that needless suffering was caused to the animals.

In other words, valid reasons to recommend rejecting the study.
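The "could have been known in advance" part is worth stressing: a back-of-the-envelope a-priori sample-size calculation flags such designs before any animal is used. Here is a minimal sketch (my own illustration, not from that paper), using the normal approximation for a two-group mean comparison; the effect size, power, and significance level are chosen purely for illustration:

```python
from math import ceil
from statistics import NormalDist

def samples_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-group mean
    comparison (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power=0.80
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized effect (d = 0.5) already needs ~63 animals
# per subgroup -- far more than a study spreading 100 animals over
# many conditions can provide.
print(samples_per_group(0.5))  # -> 63
```

Even this crude estimate would have shown the subgroups to be far too small to support any reliable conclusion.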


Application to the paper in question

One obvious difference between this example and the automated arms is that the concerns with animal research are about what has already been done, whereas I guess your concerns about the automated arms are more about future real-life consequences.

Consider, as a thought experiment, that it is acceptable for you to think about more specialized concerns, such as the proposed technology posing a danger to civilians.

While, as you say, a ban on automated arms is not (yet) universally agreed upon, AFAIK it is universally agreed upon that civilians should not be put into needless or unnecessarily high danger.

Starting from this narrower but universal ethical concern, I do suspect from your description that there may be several related scientific and engineering concerns with the paper, which are serious because of the ethical concerns behind them.

The performance of the algorithm is scored based on the number of times the simulated drone autonomously hit the target.

Any such figure of merit can at most be part of the verification and validation of the algorithm.

Validation must include estimating and reporting the risk to civilians: if we don't know whether the algorithm leads to unnecessary danger to civilians, we cannot possibly consider it fit for purpose.

Personally, I'd suspect that moreover there is a trade-off here in that hitting more targets may very well come at the price of also hitting more civilians. If that is possible, a scoring function that does not penalize hitting civilians may well be considered inherently unsuitable even for development purposes.
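To make that trade-off concrete, here is a hypothetical sketch (all names, numbers, and weights are my own illustration, not from the manuscript) of how a figure of merit that only counts target hits ranks policies differently from one that also penalizes civilian casualties:

```python
def hit_count_score(hits, civilian_casualties):
    # The kind of figure of merit described in the question:
    # civilian casualties are simply ignored.
    return hits

def penalized_score(hits, civilian_casualties, penalty=10.0):
    # Hypothetical alternative: each civilian casualty costs far more
    # than a target hit gains (the weight 10.0 is an arbitrary choice).
    return hits - penalty * civilian_casualties

# Two simulated policies: "aggressive" hits more targets but also
# endangers more civilians; "cautious" holds fire in ambiguous cases.
aggressive = {"hits": 9, "civilian_casualties": 2}
cautious = {"hits": 7, "civilian_casualties": 0}

print(hit_count_score(**aggressive), hit_count_score(**cautious))  # 9 7
print(penalized_score(**aggressive), penalized_score(**cautious))  # -11.0 7.0
```

Under the hit-count-only score the aggressive policy wins; under the penalized score the ranking reverses. An optimizer trained against the first score will systematically prefer exactly the behaviour the second score is meant to rule out, which is why such a figure of merit may be unsuitable even for development purposes.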

I frequently find flawed validation procedures in papers on far less dangerous topics. If the topic is not too important, and the experimental effort to perform a proper validation is out of proportion, I'm often OK with it if the resulting limitations are honestly discussed.

The manuscript in question doesn't even seem to fulfill this much weaker requirement:

The authors never discuss possible ethical concerns of this technology,

(And with such a critical topic I'd tend to insist on proper validation.)


neither they explicitly say if this technology will be used on real drones

... this they may not know.

  • Universally agreed upon ethics? Was there ever such a thing?
    – Bex
    Commented Mar 5, 2020 at 8:00
  • I think "universally agreed upon" is a poor choice of words. Maybe something like "current mainstream consensus"? But I think this answer makes a good point: some ethics are sufficiently "standardized" and formally captured in a policy that you can reject a paper based on them. In this case, the academic consensus appears to be headed in the direction of prohibiting LAWs, but it hasn't happened (yet).
    – ObscureOwl
    Commented Mar 6, 2020 at 9:26
  • So you can still refuse to review the paper on your own personal ethical grounds. The journal can then seek a different reviewer if they like. But by taking such a stand you do move the overall ethical discussion a bit in the direction of no longer supporting LAWs research.
    – ObscureOwl
    Commented Mar 6, 2020 at 9:27
  • @Bex: I added a clarification of what I mean with "universal".
    – cbeleites
    Commented Mar 6, 2020 at 13:50
  • @ObscureOwl: thanks for the suggestion - I like it (though I mainly kept "universal" as that is shorter). I also added a longer clarification of what I mean, which I think is still needed even with "mainstream", to make sure that the mainstream is not only the local mainstream of OP's closer peer group.
    – cbeleites
    Commented Mar 6, 2020 at 15:39

In the context of reviewing an article for publication, it is not your job to Save The World. Setting up a dialog with the authors would be to place yourself as arbiter of the actions of a group of people. The editor of the journal should, quite rightly, reject such a plan as part of the review of an article. Though it is just possible such a program might be acceptable outside the context of a review of an article, the journal is by no means obligated to follow such a plan.

You basically have two choices. You can withdraw from review. Or you can give an objective review of the paper based on the content and the publication policies of the journal.

If the journal in question routinely publishes such information, and you cannot accept such items as moral or ethical, then you should inform them, immediately, that you should not be considered for reviewing such articles.

The cure for speech you disagree with is more speech. If you think the work in this article should not have been done, then you should publish your work explaining why. There will be outlets for your positions and arguments. If they are sound arguments based on sound reasoning and valid data, then they will be accepted.

  • I did not downvote your answer. However, I think some of the reasoning underlying your response remains hidden behind the word "objective." Many will agree that the article should be evaluated objectively, but few can agree on what that really means. Your answer implies that it is obvious that ethics and morality are subjective, non-scientific, sociological considerations, and ought not be a part of the review process. Can you elaborate on what, in your view, are purely objective criteria? Are novelty, significance, clarity, or broad interest? Commented Mar 2, 2020 at 16:00
  • "The cure for speech you disagree with is more speech." Do you genuinely think that speech is the issue here? The reason I might object to someone going around teaching people how to build weapons has nothing to do with free speech and everything to do with the fact that people can take that knowledge and build weapons with it. Here the connection is even more direct: the research in question was very likely funded with the specific intent of developing and deploying such weaponry. Also, like AA's comments, your arguments can be easily adapted to ones that oppose IRBs. Do you oppose IRBs? Commented Mar 2, 2020 at 16:06
  • @StellaBiderman "Speech" is what reviewing a journal article is about.
    – puppetsock
    Commented Mar 2, 2020 at 17:20
  • Read literally, there's nothing in this answer I disagree with. But implicitly, it strongly suggests that "an objective review of the paper based on the content and the publication policies of the journal" is incompatible with the idea of raising ethical concerns in the review, which is very arguably a false dichotomy, since most fields with significant ethical impact consider those ethics an important part of their scientific integrity. If the answerer didn't intend this implication, I suggest they clarify that; if they did intend it, I suggest they justify it explicitly.
    – PLL
    Commented Mar 2, 2020 at 20:37
  • I never wanted to "Save The World", I just want to do my part of the job. Closing our eyes to a problem is not the solution. The fact that a journal does not have a policy for this kind of paper does not mean that the reviewers should not point out their own concerns. Policies are not immutable; they can be adapted when new problems or new points of view emerge. Having said that, at the moment I am leaning toward writing an impartial technical review of the paper, while presenting my ethical concerns to both editor and authors. Commented Mar 2, 2020 at 20:59

To me, the answer is quite simple.

The matter at hand is not considered unethical by a clear majority of the scientific community (if it were, there would be rules against it). Consequently, whether or not you consider it to be unethical is of little relevance; your views should not impact the ability of the scientific machine to make progress.

So your choices are to withdraw and make no recommendation as to what happens to the paper, or to review the paper on its merits and leave the discussion of ethics to its proper forum, which is not the review process.

  • Thank you for your opinion. Your assumption contains a logical fallacy: the fact that there are no rules against a specific piece of technology does not imply that the majority of the scientific community considers it ethical. I partially agree with you regarding the second part of the answer. Commented Mar 4, 2020 at 16:36
  • To be clear, I didn't say it's considered ethical by the majority of the scientific community; I said it's not considered unethical by a clear majority... There's a subtle difference there. I'd imagine most people (myself included) are currently on the fence about the issue. The community is certainly not united against the research, is my point. Commented Mar 4, 2020 at 18:06
  • Thank you for the clarification; however, the fallacy in the assumption still remains even with the rephrasing. One possible way to know the position of the research community would be a large-scale survey; it cannot be inferred from the fact that there are no rules against something. I agree with you that ethical positions should not influence the rejection/acceptance of a paper (obviously only in the case that those concerns are not part of official guidelines). Commented Mar 4, 2020 at 18:50
  • @Persistence There is no rule against loudly starting to sing rap songs when meeting for a board-game evening, yet in almost all cases I can imagine most participants would not be particularly cool with that. The only thing you can deduce from the non-existence of a rule is that people haven't made the rule yet. From the existence of a rule you can derive that there was enough support to introduce it (which doesn't necessarily mean a majority either, depending on context). But all in all this is a logical one-way street. Commented Mar 5, 2020 at 1:51
  • @Persistence btw. something can very well be unethical even when it is supported by the majority. Genocides are in general unethical, yet the majority of the country in which they occur is often in favour of or neutral toward them. In general, academia is in many respects a relatively unregulated sub-culture, with relatively few rules and many ad-hoc decisions. The higher the trust in personal ethics, the more reason to be clear about them. Commented Mar 5, 2020 at 2:00

I would review the paper and point out that the very practical limitations of the work are not clearly stated.

The ultimate goal of this research is clearly to construct an autonomous weapon system which can function on the battlefield. Now, all warfare is subject to law. This includes the principle of proportionality. The principle of proportionality allows for collateral damage and civilian casualties as long as such losses are not excessive compared with the direct military advantage resulting from the attack.

It is clear that the authors have not considered the principle of proportionality, because their scoring algorithm does not consider the value of the military target and has no concept of collateral damage or the loss of civilian life. It follows that any autonomous weapons system controlled using their algorithm will have a high potential for violating the laws of war. Therefore, their research has limited practical value and as scholars they are obligated to state such limitations clearly.

If you step aside, then there is no guarantee that these very practical limitations will be mentioned. More importantly, the readers will not be reminded of the laws of war.

There is a good summary of the legal principles of warfare in this article. The main focus is on the principle of proportionality.
