Is Everyone Conscious in the Same Way? | Simon Roper - eviltoast
  • icosahedron@ttrpg.network · 4 months ago

    i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what it encompasses than of knowing its biological basis. if we knew what to look for, we’d find it. also, anesthesia isn’t really a problem at all. in fact, we know exactly how general anesthesia works:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2908224/

    and Penrose’s Orch OR theory was never meant to explain anesthesia. it’s a more general theory about how consciousness exists in the first place. however, anesthesia does relate to the theory, in that it could play a role in proving it (i think? not a primary source but it’s where i found that info)

    besides that, Orch OR isn’t exactly a great model in the first place, at least not from a neurological standpoint. even among theories of consciousness, Orch OR is particularly controversial and not widely accepted. i’m no expert and i could be misunderstanding, so please correct me if i’m missing something that would indicate Orch OR is considered even remotely plausible compared to other theories of consciousness. this paper certainly had some things to say about it in the context of the validity of theories of consciousness (see V.1 class I).

    other theories seem more promising. global workspace theory in particular has solid support from neuroscience. its criticisms mainly focus on how GWT fails to truly explain the nature of consciousness, but is that an issue any theory can resolve? again, the problem lies in the definition of consciousness.

    then we have integrated information theory. it’s a more mathematical model that aims to quantify conscious experience. but you know what? it’s also controversial and highly debated, to the point that it’s been called pseudoscientific, partly because it implies a degree of panpsychism. it’s clearly not a perfect theory.
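    (side note: the quantity IIT actually tries to compute is a number usually written Φ, for “integrated information”. as a very rough sketch - the precise definitions of the cause-effect structure C and the distance measure D vary between versions of the theory, so treat this as schematic rather than the official formula:

    $$\Phi(S) \;=\; \min_{P}\; D\big(\mathcal{C}(S),\ \mathcal{C}(S_{P})\big)$$

    i.e. take the cause-effect structure of the whole system S, compare it with what’s left after cutting S along its weakest partition P, and Φ is how much is lost. the claim is that any system whose Φ is greater than zero has some degree of experience, which is where the panpsychism worry mentioned above comes from.)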

    point is, you’re right. we don’t really get consciousness. we have some wild guesses out there, and Penrose’s theory is certainly one of them. genius as Penrose is, Orch OR hasn’t really been empirically tested. we don’t know, and maybe can’t know - which is precisely why neuroscience searches elsewhere

    • bunchberry@lemmy.world · 4 months ago

      “i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what it encompasses than of knowing its biological basis.”

      Personally, no offense, but I think this is a contradiction in terms. If we cannot define “consciousness,” then we cannot say we don’t understand it. Don’t understand what? If you have not defined it, then saying we don’t understand it is like saying we don’t understand akokasdo. There is nothing to understand about akokasdo, because the word doesn’t mean anything.

      In my opinion, “consciousness” is largely a buzzword, so there is just nothing to understand about it. When we talk about meaningful things like intelligence, self-awareness, experience, etc., I at least have an idea of what is being discussed. But when people talk about “consciousness,” it becomes entirely unclear what the conversation is even about, and none of those things is an additional substance that needs some sort of special explanation.

      I have never been convinced by panpsychism, IIT, idealism, dualism, or any of these philosophies or models, because they seem to be solutions in search of a problem. They have to convince you there really is a problem in the first place, but they only do so by talking about consciousness so vaguely that you can’t pin it down, which makes people think we need some special theory of consciousness. Yet if you can’t pin down what consciousness is, then we don’t need a theory of it at all, because there is simply nothing of meaning being discussed.

      They cannot justify themselves in a vacuum. Take IIT, for example. In a vacuum, you can say it gives a quantifiable prediction of consciousness, but “consciousness” would just be defined as whatever IIT is quantifying. The issue is that IIT has not given me a reason why I should care about what it is quantifying. There is a reason, of course, but it is implicit: what IIT quantifies is supposed to be the same “special” consciousness that supposedly needs some sort of “special” explanation (i.e., the “hard problem”), but that implicit reason requires you not to treat IIT in a vacuum.