Seems like a terrible blind spot to ignore the centuries of philosophy trying to conceptualize this issue, even without hard neuroscientific data to back it up in any concrete way (if that is ever even possible). Though STEM's aversion to philosophy isn't unusual.
Do they simply look for a purely mechanical account of consciousness, removed from any environment? Do social relations ever figure into the production of (self-)consciousness, identity, or intelligence? How do AI researchers conceptualize AI/intelligence/consciousness/etc., or do they even try, outside of finding the right combination of light switches? I guess I'm also asking: how the fuck do they even know what they are looking for without a concept of what it is?
I'm not in neuroscience or a related field, so I have little idea of what people are writing about this outside of tech-journalism drivel, which is just marketing.
Have you read any of Negarestani's Intelligence and Spirit? He seems to be trying to formulate a way to even begin thinking about what a general (artificial) intelligence could be conceptually, through Hegel, Kant, and what I assume are a bunch of analytic and scientific writers I know little about, tbh.