Even if we could solve these problems, there may be another one we'd then have to worry about. Let's say we were able to create a robot that targets only combatants and leaves no collateral damage--an armed robot with a perfectly accurate targeting system. Oddly enough, this may violate a rule by the International Committee of the Red Cross (ICRC), which bans weapons that cause more than 25% field mortality and 5% hospital mortality. The ICRC is the only institution named as a controlling authority in IHL, so we comply with its rules. A robot that kills nearly everything it aims at could have a mortality rate approaching 100%, well over the ICRC's 25% threshold. And this may be possible given the superhuman accuracy of machines, again assuming we can eventually solve the distinction problem. Such a robot would be so fearsome, inhumane, and devastating that it would threaten the implicit value of a fair fight, even in war. For instance, poison is also banned for being inhumane and too effective. This notion of a fair fight comes from just-war theory, which is the basis for IHL. Further, this kind of robot would force questions about the ethics of creating machines that kill people on their own.
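To make the threshold comparison concrete, here is a minimal sketch in Python of the mortality test described above. The function name and the sample figures are illustrative assumptions, not part of any ICRC instrument; only the 25% and 5% limits come from the text.

```python
# Hedged sketch of the ICRC mortality criteria cited above: a weapon is
# flagged if field mortality exceeds 25% or hospital mortality exceeds 5%.
# The sample rates below are hypothetical, not empirical data.

FIELD_MORTALITY_LIMIT = 0.25     # more than 25% killed in the field
HOSPITAL_MORTALITY_LIMIT = 0.05  # more than 5% dying after reaching care

def exceeds_mortality_limits(field_mortality: float,
                             hospital_mortality: float) -> bool:
    """Return True if either mortality rate exceeds its limit."""
    return (field_mortality > FIELD_MORTALITY_LIMIT
            or hospital_mortality > HOSPITAL_MORTALITY_LIMIT)

# A conventional weapon might sit under both limits...
print(exceeds_mortality_limits(field_mortality=0.20,
                               hospital_mortality=0.04))  # False

# ...while a perfectly accurate robot that kills nearly everything it
# targets approaches 100% field mortality, far past the 25% line.
print(exceeds_mortality_limits(field_mortality=0.98,
                               hospital_mortality=0.0))   # True
```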
Some will object that anyone who thinks war is controlled by bans, whether on expanding bullets, robots, chemical warfare, or salting the earth, is a relic who misunderstands modern conflict. Modern war, they argue, is one in which 10 to 100 civilians die for every soldier killed. And since the US government fields only paid soldiers (mercenaries, on this view), not conscripts, the objection concludes that their moral value as casualties is zero.