In his paper "How could we know when a robot was a moral patient?", Henry Shevlin argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and at least some animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the behavioral equivalence strategy that I have defended in previous work, but he argues that it is flawed in crucial respects. Unfortunately (and I guess this is hardly surprising), I cannot bring myself to agree that the cognitive equivalence strategy is the superior one. In this article, I try to explain why in three steps. First, I clarify the nature of the question that I take both myself and Shevlin to be answering. Second, I clear up some potential confusions about the behavioral equivalence strategy, addressing some other recent criticisms of it. Third, I explain why I still favor the behavioral equivalence strategy over the cognitive equivalence one.