Ok, what’s the circumstantial evidence that leads you to think it’s plausible?
The Asimov debate covers some of that, but off the top of my head here are some thoughts on the idea.
1. The first assumption is that such technology is possible. That is based on our increasing ability to model virtual universes with the technology we have now, and on an extrapolation of the rate of progress in supercomputing, which suggests that eventually a system will be developed that is capable of modeling environments down to the molecular level, following algorithms modeled on the way nature behaves.
If this is possible ( and it seems reasonable to me to believe it is ), then it's just a matter of time for us. However, why should we think we'd be the first creatures in an infinite original universe to develop the capacity? Maybe we are, but given the odds, it doesn't seem likely, and therefore if it's already been done, then it's likely been done many times by many races, in which case the likelihood that we're in one of those constructs could in theory be greater than the likelihood of us being in the topmost layer ( whatever that is ).
This is sort of similar to an argument made by Nick Bostrom: Are You Living in a Simulation?
2. A computational construct seems to offer plausible explanations for curious phenomena in physics like the "cosmic speed limit", "spooky action at a distance" and "particle wave duality".
I'm not seeing how this is an argument for anything except "we don't know how the universe works yet."

3. The fundamental forces of nature are associated with particles, but even particles are abstract ideas about whatever actually composes what we think of as material reality, and nobody knows how those particles or strings ( or whatever the case may be ) are imparted with the properties ( forces ) associated with them.
This is causality, and is actually an interesting problem. It's one that the information theory of physics deals with rather neatly. However, this is still a model for us to look at the universe as, not as what the universe is itself.

However, since they are imparted with the forces associated with them, logically something is doing the "imparting". In a computational model, the imparting consists of the rules of the associated algorithms. From the perspective of those within the construct, those algorithms are transparent and seemingly imparted in some mysterious way. The computational construct offers a non-mysterious way to explain that situation, at least for universes within such constructs.
Agreed on this one. However, the multiverse model may also be a good explanation.

4. Anecdotes where people report what seem to be other realities of some sort suggest multiple universes rather than only one. Other universes fit well with the computational construct model.
It is possible. It's just improbable.

I gave what seems to be a good reason to think exactly the opposite ( see the quote below ).
Right. Which brings up the concept of multiverses, which can be seen as independent programs. We might be instance 7 of Universe 3.0.exe.
An aspect of Turing machines is that they can simulate one another; i.e. one Turing machine can emulate any other Turing machine, as long as the simulating one is Turing-complete.
The hitch is that the simulating machine has to be bigger than the one it simulates. So any computer hosting a virtual universe would by necessity need more computational capacity than the universe it was emulating, which means the host would itself be universe-sized. See, the universe itself and all its physical processes can be thought of as computation steps. So no computer that existed in this universe could accurately model this universe.
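To make the emulation point concrete, here's a minimal sketch ( my own toy illustration, not something from the debate ): a few lines of Python that run an arbitrary Turing machine from its rule table. The simulator has to hold the simulated machine's entire rule table and tape on top of its own machinery, which is the intuition behind the simulator needing to be bigger than what it simulates.

```python
# Minimal Turing machine simulator (illustrative toy, not a claim about any real system).
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions: {(state, symbol): (new_state, write_symbol, move)} with move in {-1, 0, +1}."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)), state

# A toy machine that flips every bit it sees, then halts at the first blank.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flipper, "1011"))   # -> ('0100_', 'halt')
```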
Now, it's possible some larger universe could have a computer in it that would simulate this one. But why would it?
You raise an interesting point, however. If the 'outside' universe were infinite, it would be possible to simulate a finite/bounded universe, of which this is one. Our universe is not infinite.
"Freezing out" doesn't explain how the forces of nature came into being or why they'd be mapped onto their associated particles. This is a problem that nobody knows the answer to. For all intent and purpose, particles are forces and not particles that have "forces", but even that doesn't resolve anything.Not really. Many of the fundamental forces 'froze out' of the unified plasma that existed after the big bang. If the universe were virtual, there would actually be no need for those things to exist at all. It's more likely that they would be abstracted and simplistic, because the universe appears to be extremely complex, even when we're not paying attention to it. Why?
Right. But our systems are also in their infancy. What sort of power will they have in a thousand years? I don't know, but I wouldn't bet that modelling a universe would be beyond their capacity.

If you look at what made 3-D games possible, for example, it's things like only modelling what the user is looking at that make the simulation possible. It's a cheat, a hack: only model what the user sees, and fake the rest. One might try to make an argument that this is wave/particle duality in action, except the thing that collapses the wave/particle superposition doesn't have to be conscious.
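As an aside, the 3-D games hack described above can be sketched in a few lines ( my own illustrative example, with made-up names like LazyWorld ): the world is defined by a deterministic rule plus a seed, and a chunk is only ever computed when something observes it.

```python
# Illustrative "only model what the observer looks at" sketch (hypothetical example).
import hashlib

def chunk_contents(seed: int, x: int, y: int) -> int:
    """Deterministic 'terrain' for chunk (x, y); nothing exists until it is asked for."""
    digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).hexdigest()
    return int(digest, 16) % 100

class LazyWorld:
    def __init__(self, seed: int):
        self.seed = seed
        self.materialized = {}             # only observed chunks are ever held in memory

    def observe(self, x: int, y: int) -> int:
        if (x, y) not in self.materialized:
            self.materialized[(x, y)] = chunk_contents(self.seed, x, y)
        return self.materialized[(x, y)]

world = LazyWorld(seed=42)
print(world.observe(0, 0), world.observe(3, 7))
print("chunks actually computed:", len(world.materialized))   # 2, not the whole map
```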
Right. I did say it was circumstantial and suggestive rather than conclusive.

I'm not seeing how this is an argument for anything except "we don't know how the universe works yet."
That depends on which universe we're talking about. The computational model doesn't solve the biggest problem ( the turtles all the way down problem ), but it could solve the immediate one, and that would be a huge step if it were the case.

This is causality, and is actually an interesting problem. It's one that the information theory of physics deals with rather neatly. However, this is still a model for us to look at the universe as, not as what the universe is itself.
The concept of multiverses fits neatly into the computational model ( as indicated above ). Why should a near infinitely powerful system be confined to running only one construct? Running a few in parallel to study certain differences might be more advantageous than running them serially.

Agreed on this one. However, the multiverse model may also be a good explanation.
I gave what seems to be a good reason to think exactly the opposite: To quote:
"Why should we think we'd be the first creatures in an infinite original universe to develop the capacity [ to develop complex computational constructs ] ? Maybe we are, but given the odds, it doesn't seem likely, and therefore if it's already been done, then it's likely been done many times by many races, in which case the likelihood that we're in one of those constructs could in theory be greater than the likelihood of us being in the topmost layer ( whatever that is )."
In other words, to make the assumption that it's not probable requires us to assume that we're one of the first ( or the only ) advanced races in the entire universe to ever conceive of the idea and begin working on the technology to realize it. I don't think that is a reasonable assumption. Even if this is the topmost layer, given the age and size of our universe, the probability that there are beings far more advanced than we are seems astronomically high. So I think it's exactly backward from what you're suggesting.
In other words, it is possible that no other beings in the entire universe have ever conceived of creating technology to model universes, but it doesn't seem likely that we're among the most advanced ( or only advanced ) beings in the entire universe. But maybe I'm missing some sort of reasoning there that you can help clarify by explaining why you think we are?
But let's say you took Jupiter and converted it into a computer so you could simulate a small, simple universe. You would do this by essentially compressing it into a black hole.
- In The Singularity is Near, Ray Kurzweil cites the calculations of Seth Lloyd that a universal-scale computer is capable of 10^90 operations per second. The mass of the universe can be estimated at 3 × 10^52 kilograms. If all matter in the universe was turned into a black hole it would have a lifetime of 2.8 × 10^139 seconds before evaporating due to Hawking radiation. During that lifetime such a universal-scale black hole computer would perform 2.8 × 10^229 operations.[9]
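As a sanity check on the quoted figures ( just multiplying the numbers as given, nothing more ):

```python
# Back-of-envelope check of the quoted Lloyd/Kurzweil figures.
ops_per_second = 1e90         # quoted capacity of a universal-scale computer
lifetime_seconds = 2.8e139    # quoted evaporation time of a universe-mass black hole
print(f"{ops_per_second * lifetime_seconds:.1e} total operations")   # ~2.8e+229, matching the quote
```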
Agreed. The universe is truly weird.

"Freezing out" doesn't explain how the forces of nature came into being or why they'd be mapped onto their associated particles. This is a problem that nobody knows the answer to. For all intents and purposes, particles are forces and not particles that have "forces", but even that doesn't resolve anything.
Also, what we find is that complexity arises out of simplicity through iteration, e.g. fractals. Put the algorithms into action and what happens? You get a universe that seems to begin from nothing ( zero iterations of all algorithms ), then suddenly exists as the result of running the program ( the sudden existence of the universe out of nothing ), and then goes through a period of rapid expansion because the initial processing demands would be low, and so on, seemingly paralleling what we'd expect to see.
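A tiny illustration of that "complexity out of simple iterated rules" point ( my own example, not from the thread ) is the logistic map: a one-line update rule that settles into a dull fixed point for some parameters and becomes effectively unpredictable for others.

```python
# Logistic map: x -> r * x * (1 - x), iterated from a starting value.
def logistic_orbit(r: float, x0: float, steps: int):
    x, orbit = x0, []
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(r=2.5, x0=0.2, steps=8))   # settles to a fixed point near 0.6
print(logistic_orbit(r=3.9, x0=0.2, steps=8))   # same rule, chaotic wandering
```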
Right. But our systems are also in their infancy. What sort of power will they have in a thousand years? I don't know, but I wouldn't bet that modelling a universe would be beyond their capacity.
It feels like we're developing a belief system here, to be honest.

Right. I did say it was circumstantial and suggestive rather than conclusive.
Difficulty is a relative concept. Only a hundred years ago, the high-tech stuff we have today may have been conceived of in some fashion, but it was considered unattainable. 200 years ago it's doubtful most of it was even conceived of, and before that, if it was thought of at all, it was purely speculation and philosophy in the minds of very few. Even today, the workings of a lot of tech is a mystery to the end user. So a thousand years from now? I wouldn't bet against technology being developed that hasn't even been imagined yet. And that's only in the next thousand years. What about two or three thousand?

Because it would be very hard, you'd have a very hard time getting information out of it, and it would take a long time.
Why couldn't our observable universe be a "smaller universe"? We only have a limited range of observation. So for all we know, somewhere out there in the blackness beyond is where the limits of the construct are.

By hard I mean hard. Computation doesn't come for free. You couldn't model our universe even if you converted each atom of the earth into a computation engine - there are fundamental limits to computation. You couldn't model our universe even if you converted every atom in the universe to a computation engine. You'd have to model a smaller universe in the universe.
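For what it's worth, the "fundamental limits" claim can be given a rough number using Lloyd's bound ( roughly 2E/πħ operations per second for a system of energy E ); plugging in the Earth's mass is my own illustrative choice:

```python
# Rough Lloyd-bound estimate for a computer made from the Earth's entire mass (illustration only).
import math

hbar = 1.054571817e-34       # J*s
c = 2.99792458e8             # m/s
earth_mass = 5.972e24        # kg

max_ops_per_second = 2 * earth_mass * c**2 / (math.pi * hbar)
print(f"{max_ops_per_second:.1e} ops/s")   # ~3e75 - enormous, yet still a hard ceiling
```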
Ya. That's up my alley alright.

It's actually a very interesting problem.
Limits of computation - Wikipedia
These calculations were actually referenced in Kurzweil's fun The Singularity Is Near, which I wholeheartedly recommend.
It was first believed that an optical disc couldn't hold enough data to store a VHS movie. Technological limitations have a long history of being overcome by new technology and ways of doing things. Beyond saying that, I'm no computer scientist, so I don't have the answers. I can only perform logical analysis on a surface level.

But let's say you took Jupiter and converted it into a computer so you could simulate a small, simple universe. You would do this by essentially compressing it into a black hole.
The problem now is how you program it, and how you get information out of it. Programming it is maybe possible by collapsing Jupiter in such a way as to create the starting state. Getting information out of a black hole is maybe possible if Hawking is right, but the information transfer would also be bounded. See, it would rely on quantum entanglement in the Hawking radiation given off by the black hole, which also makes it slowly evaporate.
This would not allow for a lot of information to exit the computer. It also means that you might not be able to ask it anything once it's running. You could perhaps get it to answer a question posed at the start, but you couldn't ask it a new one once it's running; the answers might not come at all (due to the halting problem), and even if they did, they would be simple.
This would multiply the problem above, yes?
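For a sense of the timescale involved, here's a back-of-envelope ( my own, using the standard Hawking evaporation formula t = 5120 π G² M³ / (ħ c⁴) ) for a Jupiter-mass black hole computer:

```python
# Hawking evaporation time for a Jupiter-mass black hole (rough illustration).
import math

G = 6.674e-11              # m^3 kg^-1 s^-2
hbar = 1.054571817e-34     # J*s
c = 2.99792458e8           # m/s
jupiter_mass = 1.898e27    # kg

t = 5120 * math.pi * G**2 * jupiter_mass**3 / (hbar * c**4)
print(f"{t:.1e} s (~{t / 3.15e7:.1e} years)")   # ~6e65 s, roughly 2e58 years
```

So the readout channel would dribble out over an absurdly long timescale, which fits with the point about not getting much information out of it.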
There's a difference between developing beliefs about possibilities based on reason, and believing something is actually the way some weakly substantiated theory or another suggests it could be. The computational construct theory is a personal favorite because it seems better than most ancient mythology, and as seen in the Asimov debate, it is being taken seriously by bright modern minds. Cosmologists don't seem to have a better theory either. But I'm not ready to sign up with Matrixism just yet: Matrixism: The path of the One, The Matrix religion

It feels like we're developing a belief system here, to be honest.
Indeed. My two problems with the computational cosmological model are the ideas of memory storage and consciousness. It's one thing for a super-system to be able to run a set of instructions from moment to moment, and another to store the results of everything on some "cloud". At best I'd suggest that only a few choice facets would be practical to store in memory, no matter how powerful the processor itself would be. The other is consciousness. We don't know how that works, so even if a computational model of the human brain can be constructed ( as they're doing as we speak ), it's a leap in logic to assume that it would be conscious. But who knows how that line of inquiry will unfold over the next thousand years? An AI with consciousness? Is it really that far-fetched?

A near infinitely powerful computer would require a nearly infinitely large universe to run it in, and consequently nearly infinite consciousness to construct it - our universe is finite and bounded. It's possible there are universes that aren't.
But even in the multiverse model, those universes are all finite and bounded.
It's an interesting question.
This is an interesting approach, no doubt. I think it's basically saying that if we're in a lattice simulation, the underlying grid should break rotational symmetry in a way that leaves a detectable imprint on the highest-energy cosmic rays.

Constraints on the Universe as a Numerical Simulation: https://www.researchgate.net/profil...on-the-Universe-as-a-Numerical-Simulation.pdf
Notes: This is the paper Zohreh refers to in the Asimov Debate.
So I'm now reading this:

"unimproved Wilson fermion discretization"
With the current developments in HPC and in algorithms it is now possible to simulate Quantum Chromodynamics (QCD), the fundamental force in nature that gives rise to the strong nuclear force among protons and neutrons, and to nuclei and their interactions. These simulations are currently performed in femto-sized universes where the space-time continuum is replaced by a lattice, whose spatial and temporal sizes are of the order of several femto-meters or fermis (1 fm = 10^-15 m), and whose lattice spacings (discretization or pixelation) are fractions of fermis. This endeavor, generically referred to as lattice gauge theory, or more specifically lattice QCD, is currently leading to new insights into the nature of matter. Within the next decade, with the anticipated deployment of exascale computing resources, it is expected that the nuclear forces will be determined from QCD, refining and extending their current determinations from experiment, enabling predictions for processes in extreme environments, or of exotic forms of matter, not accessible to laboratory experiments. Given the significant resources invested in determining the quantum fluctuations of the fundamental fields which permeate our universe, and in calculating nuclei from first principles (for recent works, see Refs. [4–6]), it stands to reason that future simulation efforts will continue to extend to ever-smaller pixelations and ever-larger volumes of space-time, from the femto-scale to the atomic scale, and ultimately to macroscopic scales. If there are sufficient HPC resources available, then future scientists will likely make the effort to perform complete simulations of molecules, cells, humans and even beyond.
Source: Constraints on the Universe as a Numerical Simulation [accessed Sep 21, 2017].
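To get a feel for why extending these lattices is so expensive, here's a rough count of space-time lattice sites ( my own illustrative numbers, assuming the time extent equals the box size ):

```python
# Number of lattice sites ~ (L/a)^4 for box size L and lattice spacing a (time extent taken equal to L).
def lattice_sites(box_size_m: float, spacing_m: float) -> float:
    return (box_size_m / spacing_m) ** 4

fermi = 1e-15
print(f"{lattice_sites(10 * fermi, 0.1 * fermi):.1e}")   # ~1e8 sites: today's femto-sized universe
print(f"{lattice_sites(1e-10, 0.1 * fermi):.1e}")        # ~1e24 sites: a single atom-sized box
```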
There are, of course, many caveats to this extrapolation. Foremost among them is the assumption that an effective Moore's Law will continue into the future, which requires technological and algorithmic developments to continue as they have for the past 40 years. Related to this is the possible existence of the technological singularity [23, 24], which could alter the curve in unpredictable ways.
Source: Constraints on the Universe as a Numerical Simulation [accessed Sep 21, 2017].
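Their "effective Moore's Law" assumption is easy to play with ( the doubling time and starting figure below are my own rough assumptions, purely illustrative ):

```python
# Toy extrapolation of sustained compute under a continued effective Moore's Law.
def extrapolate_flops(today_flops: float, years: float, doubling_years: float = 2.0) -> float:
    return today_flops * 2 ** (years / doubling_years)

today = 1e18   # roughly exascale, about where HPC is now
for horizon in (50, 100, 1000):
    print(f"{horizon:>5} years: {extrapolate_flops(today, horizon):.1e} FLOP/s")
# A thousand years of uninterrupted doubling gives ~3e168 FLOP/s, which is exactly
# where caveats like physical limits and the singularity would bite.
```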
That is, early simulations use the computationally "cheapest" discretizations with no improvement.
The difference between a computational construct and "God made the universe" is that in the event that it turns out that the universe we're in is a computational construct, those who see the architect as God have deified a universe creator, whereas those who simply recognize the existence of the construct see it as another facet of the universe to learn about.

I guess what I'm struggling with is: how is it different from saying God made the universe?
Only if one chooses to deify it. Otherwise it's subject to the same analysis as anything else, and in this example it only boils down to sheer scale and relative power. Are these sufficient reasons to deify something? Maybe for some it is. Not for me.

We're talking about beings that can't exist in this universe, and yet are powerful enough to create it. Powerful enough to create a 14.5 BY simulation with sufficient complexity to produce general intelligence, which can itself create civilizations.
That's a good functional approximation of God, isn't it?
Deduction: Therefore, either God does not exist in this universe, or God is not omniscient, or God does not exist at this time.
... Why would an absolutely inert universe ever be tilted in such a way to ever produce the idea of karma, pantheism, polytheism, deism or theism? I think that question is just as demanding as any other, but YMMV ...
Well, if our free will comes from quantum mechanics, then God's existence would take it away.

IMHO, it might be that to give human creatures some real freedom of choice at some level, an omniscient, omnipotent Creator might fashion the created realm with a measure of what seems to us as stochasticity, not being out of control of the Creator, but rather giving creatures a measure of freedom to operate within the created realm.
These kinds of questions are great, and even assuming non-theism, there is a LOT to account for. Just one example. Why would a universe that was actually and really based on, derived from, and emergent out of inert, non-living, non-sentient substances ever produce entities that actually think the universe was the creation of an omnipotent Creator?
You may conclude such people are naïve and mistaken, but the thing is, someone like James Clerk Maxwell, who is recognized as a near equal of Newton, had no problem with theism. John Polkinghorne is another modern example. Why would an absolutely inert universe ever be tilted in such a way to ever produce the idea of karma, pantheism, polytheism, deism or theism? I think that question is just as demanding as any other, but YMMV.
Congrats on your grade! I once got an A on a paper where I argued that there is no such thing as altruistic generosity, which I would not agree with today, hehe.
The essential problem with a lot of this discussion is that it is being done from our perspective, our understanding of physics. So, yes, from our perspective it would be very difficult to simulate a universe like our own. That is evidence of nothing. It could be that our universe was intentionally designed in this manner to prevent the construction of infinitely deep universe simulations. To assume that any universe outside our own (the one running the simulation of us) functions on the same laws of physics is naive. As brought up early on in the video, were Mario to approach discerning what our universe is like based on the laws and observable constructs of his own world, he would be trudging down a very wrong path.

By hard I mean hard. Computation doesn't come for free. You couldn't model our universe even if you converted each atom of the earth into a computation engine - there are fundamental limits to computation.