Soupie
Paranormal Adept
UFOs and entities have been witnessed throughout human history. However, the only consistency among reported sightings and experiences is their inconsistency. Where one person looks and sees a dragon, another looks and sees dog-people in trench coats smoking cigarettes under a streetlight.
I don't pretend to have an answer for this, but I found the following research interesting in this context.
The Flaw Lurking In Every Deep Neural Net
Every deep neural network has "blind spots" in the sense that there are inputs that are very close to correctly classified examples that are misclassified.
Since the very start of neural network research it has been assumed that networks had the power to generalize. That is, if you train a network to recognize a cat using a particular set of cat photos the network will, as long as it has been trained properly, have the ability to recognize a cat photo it hasn't seen before.
Within this assumption has been the even more "obvious" assumption that if the network correctly classifies the photo of a cat as a cat then it will correctly classify a slightly perturbed version of the same photo as a cat. To create the slightly perturbed version you would simply modify each pixel value, and as long as the change was small, the cat photo would look exactly the same to a human - and presumably to a neural network.
However, this isn't true. ...
What the researchers did was to invent an optimization algorithm that starts from a correctly classified example and tries to find a small perturbation in the pixel values that drives the output of the network to another classification. Of course, there is no guarantee that such a perturbed incorrect version of the image exists - and if the continuity assumption mentioned earlier applied the search would fail.
However, the search succeeds. ...
This is perhaps the most remarkable part of the result. Right next to every correctly classified example there is an effectively indistinguishable example that is misclassified, no matter what network or training set was used.
So if you have a photo of a cat there is a set of small changes that can be made to it that makes the network classify it as a dog - irrespective of the network or its training.
There is also the philosophical question raised by these blind spots. If a deep neural network is biologically inspired, we can ask the question: does the same result apply to biological networks?
Put more bluntly: "does the human brain have similar built-in errors?" If it doesn't, how is it so different from the neural networks that are trying to mimic it? In short, what is the brain's secret that makes it stable and continuous?
One possible explanation is that this is another manifestation of the curse of dimensionality. As the dimension of a space increases it is well known that the volume of a hypersphere becomes increasingly concentrated at its surface. (The volume that is not near the surface drops exponentially with increasing dimension.) Given that the decision boundaries of a deep neural network are in a very high dimensional space, it seems reasonable that most correctly classified examples are going to be close to the decision boundary - hence the ability to find a misclassified example close to the correct one: you simply have to work out the direction to the closest boundary.
If this is part of the explanation, then it is clear that even the human brain cannot avoid the effect and must somehow cope with it; otherwise cats would morph into dogs with alarming regularity.
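The hypersphere claim near the end of that excerpt is easy to check numerically (this is my own illustration, not from the article). The fraction of a d-dimensional unit ball's volume that sits within a thin shell of thickness eps at the surface is 1 - (1 - eps)^d, which races toward 1 as d grows:

```python
# Fraction of a d-dimensional unit ball's volume lying within a thin shell
# of thickness eps at the surface: 1 - (1 - eps)^d.
eps = 0.01  # shell thickness: 1% of the radius

for d in (2, 10, 100, 1000, 10000):
    shell_fraction = 1 - (1 - eps) ** d
    print(f"d = {d:>5}: {shell_fraction:.4%} of the volume lies within "
          f"{eps:.0%} of the surface")
```

At d = 2 that shell holds about 2% of the volume; by d = 1000 it holds essentially all of it, which is the intuition behind "most points sit near a boundary" in very high-dimensional spaces.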
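For anyone curious what the "small perturbation" search described earlier in the excerpt looks like in code, here is a minimal sketch. It uses the simpler gradient-sign step from later work rather than the paper's own optimization, and an untrained toy network in place of a real image classifier, so it only shows the mechanics, not an actual result.

```python
# Sketch of an adversarial perturbation: nudge the input a tiny amount in the
# direction that most increases the loss for its current label.
# Assumes PyTorch; the model is an untrained toy classifier for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(                      # toy 10-class "image" classifier
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32)                # stand-in for the correctly classified photo
orig_label = model(x).argmax(dim=1)         # whatever the model currently calls it

# Gradient of the loss with respect to the *input*, not the weights.
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), orig_label)
loss.backward()

# One tiny step in the direction that hurts the original label the most.
epsilon = 0.01                              # small enough to be visually negligible
x_perturbed = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

new_label = model(x_perturbed).argmax(dim=1)
print("label before:", orig_label.item(), " label after:", new_label.item())
print("largest pixel change:", (x_perturbed - x).abs().max().item())
```

With an untrained toy the label may or may not flip; on a real trained classifier a step this small is routinely enough to change the prediction while leaving the image looking identical to a human.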
Interestingly, since this research, others have been able to do the opposite: get an object (or image) that looks nothing at all like a penguin to a human to be identified as a penguin by a DNN.
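Here is a minimal sketch of how that reverse trick works, again against a stand-in toy model rather than a real trained network: start from pure noise and repeatedly push the pixels toward whatever raises the score of one chosen class (the "penguin" slot below is just a hypothetical index).

```python
# Sketch of a "fooling image": optimize noise until the classifier assigns it
# to a chosen target class, even though it looks like nothing to a human.
# Assumes PyTorch; the model is an untrained toy classifier for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

target_class = 3                            # hypothetical "penguin" index
noise = torch.rand(1, 3, 32, 32, requires_grad=True)
optimizer = torch.optim.Adam([noise], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    logits = model(noise.clamp(0, 1))
    # Minimize the negative log-probability of the target class,
    # i.e. maximize the model's confidence that the noise is a "penguin".
    loss = -torch.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    optimizer.step()

probs = torch.softmax(model(noise.clamp(0, 1)), dim=1)
print(f"target-class confidence: {probs[0, target_class].item():.2%}")
```

Run against a real trained network, this is the kind of procedure that turns static or abstract patterns into images the network labels as specific objects with very high confidence.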
So this raises the question: are humans susceptible to this same phenomenon? Might we fail to identify or misidentify objects in the same way these DNNs do?
I'm not suggesting that UFOs and other paranormal phenomena are the result of humans misidentifying mundane phenomena (although we know this happens). What I'm asking is whether this misidentification might be happening at a deeper level, as described above, and with non-mundane objects. Perhaps purposefully?
The stimulus in the sky or on the ground is real, but we (our brains) have no hope of correctly identifying it; it's simply too exotic. So our brain says, "I got this! It's a square bit of milk with legs, a moustache, a top hat, and a cane." In reality, the stimulus was nothing of the sort. (And it was neither a misidentified mundane object nor a hallucination.)