And if we can never reach that point, maybe it's exactly because consciousness is relevant?
But, sure, let's suppose it's not relevant in the sense you mean. Wouldn't it still be ethically relevant, in the sense that it is today? After all, it isn't relevant today in the sense you mean: it's not relevant to materialist science, which is why materialist science can never provide a TOE.
In any case, whether a system is "experiencing" or not will always be ethically relevant, whether it's a cricket, a dog, a comatose human, a machine, or a digital system.
But I do want to circle back to @constances and @smcder's comments re: the ethics of AI being conscious.
Spend time on a farm or in the country, or read and watch about predator-prey interactions. Or, god forbid, watch YouTube videos of big cats hunting big game.
Nature is brutal. If these creatures have phenomenal and affective consciousness—and I believe they do—then pain and suffering in high doses are all too frequent features of our world.
I mean, even consider the torture and brutal wartime killing throughout human history. Impaling hundreds of men at a time. Quartering men. The brazen bull. Etc.
I'm not saying we wouldn't be ethically obligated to ensure we aren't creating affectively tormented machines. I'm more just wrapping my mind around how much pain and suffering occurs, and has occurred, within nature since time immemorial, if all organisms are conscious.
Having said that, I recognize that modern man's ability to deal with pain and suffering is quite different from that of beasts and of historical man. But even so: pain and suffering in large doses.