At some point in the future, we may have to decide whether an “artificial intelligence” program is conscious and should be treated as a morally important person. The question matters to sociologists, psychologists, computer programmers, politicians, legal theorists, and others. Yet none of them can answer it without philosophy, because no amount of data about hardware, software, biology, or legal history provides the resources to decide. Practitioners of these other disciplines are enriched and empowered to the extent that they are aware of historical and contemporary philosophical theories of consciousness, ethics, and personhood, and can use philosophical tools to work on the issue.