How would a time machine influence computability

The basic idea is this: you wire a computer to a time machine in such a way that when an algorithm stops, or when a specified amount of time has passed if you're dealing with an algorithm that runs forever (this is, as we will see later, also very important for the future of humankind), the time machine transports the result back in time. This would effectively reduce the time complexity of many P and NP class problems to O(1) amortized.

When the algorithm is run, the computer starts performing the calculation at its own pace, and from its point of view it leaves us behind in time, at a point we'll call A. After everything is completed, it seems to us that the operation finished at a time B just after it was started, regardless of problem size, which means that all computation must have happened in O(1), at the flick of a switch. An external observer will see many operations being performed, but all of those happen after B and had not yet happened at B. The time needed to solve the problem is effectively instant and does not depend on problem size, therefore it is O(1).

An analysis of time complexity with the potential method also shows such an operation is O(1). The term "potential" is of course used very loosely, since it usually describes something accumulated and can therefore only be positive, whereas here we are in debt of something. I'll use this term instead of the better one, "debt method", plainly because that hasn't been invented yet, for an obvious reason: lack of a time machine. The potential analysis is easy here. Take the state exactly at the start of the program, S_start, and the state at point B, S_stop, when the result arrives in the very next moment after the program spent O(1) time. When we receive the result the analysis stops, and the amortized total is S_stop - S_start, which is O(1). You can freely disregard all intermediate states, because the potential method features a telescoping series whose pairs cancel each other out; it doesn't matter that all intermediate states except the final one lie in the future.
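The telescoping cancellation can be made concrete with a toy sketch. The potential function `phi` below is an invented stand-in (nothing in the text prescribes one): it is negative, in the "debt" spirit described above, and each step's amortized cost is the usual actual cost plus the change in potential.

```python
# Toy illustration of the telescoping sum in the potential method.
# phi is a hypothetical "debt-style" potential; actual_cost is the
# real work per step as seen by an external observer after point B.

def phi(state):
    # Negative potential: the machine is in "time debt" until the
    # result is delivered.
    return -state

def amortized_costs(n_steps, actual_cost=1):
    costs = []
    for i in range(n_steps):
        # amortized = actual + phi(after) - phi(before)
        costs.append(actual_cost + phi(i + 1) - phi(i))
    return costs

# Every intermediate potential cancels in the telescoping sum, so the
# total collapses to actual_total + phi(end) - phi(start) -- here, 0,
# matching the observer at B who sees no elapsed time at all.
total = sum(amortized_costs(1000))
```

With this particular `phi`, every step's amortized cost is zero, so the total is O(1) no matter how many steps the machine actually performed after B.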

There are some different time-travel models that must be considered before I can continue. We don't yet know how time travel works, but there is more than one possibility. I'll consider two. One of them is linear-path time, in which time is a single line. In this model, if we go back in time and undo an event, everything that is a result of this specific event is immediately undone and forgotten, because it will never have happened. At each specific point in time there is only one path along which it could have been reached, and the only consequences that exist must come from events that happened on the path travelled so far. What could be confusing is thinking that going back in linear-path time would void consequences that are yet to emerge, but remember this: traveling back in linear-path time is not simply rewinding it. Consequences are voided only if the event itself has been undone. Merely traveling back does not yet undo past events. There is a subtle but important difference. If that weren't true, it would be the same as if time machines didn't exist.

The other model is that the time path is not a line, but rather a tree-like graph. This can have branching nodes, not just a single path. When traveling back in time, the point in time returns to a parent node. From there we can undo an event without necessarily undoing all the consequences that had unfolded on the path we returned from. This would allow us to retain some information even if we returned back in time and then continued along another path that follows from this node in the time graph. This is tree-path time.

Looking from the machine's point of view, a certain amount of time has passed, but there is an infinite "amount" of time available, in a way. To illustrate this: you can always add two machines running at the same time, and one machine does not take time away from the other. They will not have problems "sharing" time, as they would have had, had they shared some other, more tangible resource.

When I said tangible resource, as opposed to time, I was thinking of energy. A time machine wouldn't help with the energy problem. The energy spent at point B would be negligible, but overall a certain amount of it would have to be consumed in order to get the result. Energy is not so easily shared as time; there is no infinite amount of it, and you can only spend energy, not make it. There is no workaround for the law of conservation of energy. The way this would look from A's point of view is that you could "borrow" energy from the future, a sort of debt system, then pay it back as you "naturally" travel forward through time. While a machine computes, it needs a steady supply of energy, otherwise it will stop and never return the result. You would run algorithms in O(1) time complexity, but the overall need for energy would remain as it was without a time machine.
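The debt system described above amounts to simple bookkeeping: borrowed energy is still spent energy, and only the debt, never the total consumption, can ever return to zero. A minimal sketch, with all names (`EnergyLedger`, `borrow`, `repay`) invented for illustration:

```python
# Bookkeeping sketch of the "energy debt" idea: the result arrives at
# point B having consumed energy that, from A's point of view, is
# borrowed from the future and repaid while travelling forward normally.

class EnergyLedger:
    def __init__(self):
        self.debt = 0.0    # energy owed to the future
        self.spent = 0.0   # energy actually consumed, ever

    def borrow(self, joules):
        # The computation draws energy "ahead of time".
        self.debt += joules
        self.spent += joules

    def repay(self, joules):
        # Paid back as we travel forward through time; the debt can
        # clear, but conservation means the spent total never does.
        self.debt -= min(joules, self.debt)

ledger = EnergyLedger()
ledger.borrow(500.0)   # run the algorithm; result appears at B
ledger.repay(200.0)
ledger.repay(300.0)
```

After the two repayments the debt is back to zero, but `spent` still records the full 500 joules: exactly the claim that the time machine changes when the energy is paid, not whether.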

Here I asserted an extension to the first law of thermodynamics, stating the following: energy is preserved even in the case of traveling through time. This is yet unproven, but I believe it will be some day. It will naturally be proven for linear-path time, but also for tree-path time. If this weren't the case, then time machines could offer us a perpetuum mobile of the first kind, the one that produces energy. But that is a silly topic fit for new-age pseudo-scientific debates, not for a serious scientific article such as this. One does not simply disregard the laws of thermodynamics! Once it is established that certain immutable properties exist, for instance entropy, there is no reason to dismiss the existence of immutable information. This greatly improves the chances for tree-path time, which is the more interesting type of time travel.

Linear-path time would make running certain computations risky, but there are ways to overcome that risk. The worst-case scenario is that future people forget what the machine was calculating, but are afraid to turn it off while it drains their precious energy. They would find themselves in a dilemma: leave it turned on and let it munch through their possibly limited energy resources, or turn it off, never finishing the job, and risk the possibility that the machine was calculating some result that their civilization is already built upon and will now perish? Circumventing this is of the utmost importance. It is beyond the importance of labeling servers in a server room, because that can only affect the present and the future. Forgetting to label a time machine computer may alter the past itself.

Tree-path time would have no such problems. Once the result is calculated, the information would persist no matter what happens to the machine that calculated it. The well-known time machine paradox, where a consequence destroys its own cause, could be mitigated either by the Novikov self-consistency principle or by imagining that the timeline in which the machine calculated the result is still unfolding in its own direction, so that turning the machine off merely triggered another node in the time graph. In any case, what could possibly go wrong?

The other danger is forgetting that such a machine is only a workaround for the famous P = NP conundrum, not its antithesis. Problems may exceed the time needed to solve them well after the universe dies in the big freeze or heat death. At that point the machine, which is itself a part of the then-finished universe, will stop and not return a result. We will know this at point B, the point directly after the switch has been turned on. If no result is returned, that means the algorithm exceeded the amount of time available. Maybe if I implement OutOfTimeException first, I might be famous some day, because some sort of exception will have to be returned to differentiate between the possible causes of no result received: the end of time, or a ProblemNotImportantAnymoreException. At the very least, it would be a great idea. More on that later, but first let me explain why we would want to know whether an OutOfTimeException was thrown.

Try, if you can, to consider a world where no OutOfTimeException exists. Consider also that you have just started a program that at point B still didn't return a result. Congratulations, you just spent (machine_energy_cost / year) * the_rest_of_time worth of humanity's energy in vain. This is the risk I was writing about, but it could be mended by implementing OutOfTimeException. How would OutOfTimeException return a result if the machine dies a thermodynamic death? That would be problematic, and the only way I can imagine it being done is to have a hard-coded switch in there. We're now 13.77 billion years into the universe's existence, give or take, and the planned duration is 10 to the 10th to the 27th years in the worst case, which means there are still at least 10 to the 10th to the 27th years available. After this time passes, the machine has to return OutOfTimeException. This is very important if we don't want to face the consequences. Without this error you might be tempted to just restart Windows, try again, and spend another huge deal of energy for nothing. This way, once the machine returns this error, at least you know it didn't reach a conclusion and there is no point in impatiently switching it on and off. You might think that because you can turn the machine off immediately after you get the exception, no energy was ever spent on the calculation. But that would be breaking the laws of thermodynamics again, and that is a no-no. This error message is not to be confused with the result "no solution exists". That is a result; the error is a non-result.
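None of these exceptions exist anywhere, of course, so here is a purely hypothetical sketch of what declaring them, together with the hard-coded end-of-time switch, might look like. The lifetime constant is kept as a base-10 exponent, since 10 to the 10th to the 27th has far too many digits to write out, and `run_on_time_machine` is an invented wrapper:

```python
# Hypothetical exception hierarchy for time-machine computing.
# Compare durations in log10 space: the worst-case planned duration
# of the universe is 10^(10^27) years, i.e. exponent 10^27.
UNIVERSE_LIFETIME_LOG10_YEARS = 10 ** 27

class TimeMachineError(Exception):
    """Non-results: distinct from the legitimate result 'no solution exists'."""

class OutOfTimeException(TimeMachineError):
    """The algorithm outlived the universe; do not impatiently restart it."""

class ProblemNotImportantAnymoreException(TimeMachineError):
    """Someone in the future switched the machine off."""

def run_on_time_machine(algorithm, problem, log10_years_needed):
    # The hard-coded switch: if the job cannot fit into the remaining
    # lifetime of the universe, return the error instead of nothing.
    if log10_years_needed > UNIVERSE_LIFETIME_LOG10_YEARS:
        raise OutOfTimeException("result would arrive after the heat death")
    return algorithm(problem)
```

Breaking 2048-bit RSA at an exponent of 600 would sail through this check; only jobs whose exponent exceeds 10^27 get the error.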

Along with the OutOfTimeException, a partial result could be returned. It could in turn be fed into another machine to continue the calculation. You could do that a couple of times, but you would only be able to compute problems that are bigger by some constant factor, no more, because the partial result cannot be fed into the same machine again. You would need more machines. With the NP class of problems, a constant-factor improvement is no improvement at all, so this possibility is of limited use. It exists, though.
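The chaining idea can be sketched abstractly: each machine gets the same fixed time budget, and k machines therefore buy exactly a factor of k more steps. The `step`/`is_done` callables are hypothetical stand-ins for one machine's unit of work and its completion check:

```python
# Sketch of chaining machines via partial results. Each machine runs
# for at most budget_per_machine steps (its share of the universe's
# lifetime), then hands its partial state to the next machine.

def chain_machines(initial_state, step, is_done, budget_per_machine, machines):
    state = initial_state
    for _ in range(machines):
        for _ in range(budget_per_machine):  # one machine's lifetime
            if is_done(state):
                return state                 # finished: a real result
            state = step(state)
    return None  # even k machines ran out: OutOfTimeException territory

# A job needing 10 steps fits into 3 machines of budget 4 (12 steps),
# but not into 2 machines (8 steps) -- only a constant-factor gain.
```

With an NP-hard blowup, the number of machines needed to gain even one extra unit of problem size grows hopelessly, which is exactly why the text calls this possibility one of limited use.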

Do we also need a ProblemNotImportantAnymoreException to tell the future people that it is OK to switch off the machine? Its effects are not entirely intuitive. Suppose it exists. After you switched the machine off, it would presumably return ProblemNotImportantAnymoreException. This one would be useful, because it would take away some of the risk of starting a computation in vain, just so that the future of mankind (on that path at least, in the case of tree-path time) would have to sustain it. I bet you have many times been in a position to decide whether to leave a program running and wait for eventual completion, which is especially hard when your desktop freezes and the cursor stops moving. In a conventional sense you have the option to kill -9 the program, but that could be dangerous in the context of linear-path time. Let me illustrate.

Consider that we needed some result urgently to complete the ITER project and start the production of vast amounts of cheap energy. This is a problem that is obviously worth going into energy debt for at first, just to get the result; we will be able to easily pay it back later. Two things can happen. One is that the result will be as we wished for, the kind that enables ITER: it starts, and the whole of humanity is freed because of all the cheap energy. Everybody now has time to go and study mathematics, physics and computer science. One among the mass of new students figures out a way to get to the same solution more efficiently. What could be done here is to build another machine that would save energy, compute the result more efficiently, then replace the old result with the new one. But as soon as you turned off the old machine and returned ProblemNotImportantAnymoreException, you have undone the old result, with it ITER, with it the free humanity; the one poor gifted student now has to work in some slum, and the more efficient way of getting the result has never been calculated.

As was said, this is only dangerous in linear-path time. In tree-path time it is virtually impossible to turn any such machine off, because that only triggers another path in which it is still turned on. It is like when your child wants to remove a snot from a finger and all he or she can help himself or herself with is another finger: it will stick to one of the fingers until it is eaten.

You might think you could use it when the result is that no solution was found. You could write the result on a piece of paper and turn off the machine, but then the same problem arises: the result never arrived in the first place, so you didn't see it, didn't write it on the paper, and you immediately forget it. You also forget what you just did, so you switch the machine on again, and so on and so forth. When the result is not in our favor, this is even worse. You already committed to the energy debt, and now you can't pay it back easily with cheap ITER energy. A few of these attempts and humanity will hardly sustain the ongoing computations with its primitive nuclear plants.

What could you do with it? Computing power is not limitless, as I've shown above. We're still upper-bounded by the life expectancy of the universe, which is really vast, but so are some problems. You could for instance break 2048-bit RSA encryption (in 10 to the 600th years), well under the time limitation. RSA would need to use at least 10-kbit keys, but surprisingly this time machine system would not kill RSA. The travelling salesman problem is irrelevant already, so no changes are expected here. You could try to pick 100 students from 400 in such a way that the selection meets some restrictions (in 10 to the 900th years) and pick up several million dollars from the Clay Institute, but with a time machine on your hands, money will probably stop being an issue.

What this machine also wouldn't do is build better machines. There are 7 common logical gates (and, or, not, nand, nor, xor, xnor) and even more are possible. These are just two-state gates; then there are three-state gate options that use True-False-FileNotFound logic. How many, and which, gates do you use to achieve a set of truth tables that would link all possible inputs to all possible outputs? Sure, all gates can be reduced to NOR gates alone, but that would not solve the problem. The gates can be wired one to another in what we might call layers, but the number of layers needed to complete the problem is unknown in advance. Reducing all possible gates to NOR would effectively only enlarge the number of gates and layers needed, maybe into an even more computationally complex problem. When the number of possibilities is countless, do you really notice the difference between 2^all and 7^all? Which is, I bet, more than the universe has time left, rendering the limiting function irrelevant. We're better off trying to breed the next Tesla to guess the solution with an intuitive, brilliant mind than to derive it analytically.
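The NOR-universality claim, and the gate-count blowup that comes with it, is easy to demonstrate. A small sketch building the other gates out of NOR alone; the comments count how many NOR gates each derived gate costs:

```python
# Every common gate built from NOR alone, illustrating both
# universality and the growth in gate count (and layers) it causes.

def nor(a, b):
    return not (a or b)

def not_(a):
    return nor(a, a)              # 1 NOR

def or_(a, b):
    return not_(nor(a, b))        # 2 NORs

def and_(a, b):
    return nor(not_(a), not_(b))  # 3 NORs

def xor(a, b):
    # (a or b) and not (a and b): built from the gates above, so it
    # costs several NOR gates stacked into several layers.
    return and_(or_(a, b), not_(and_(a, b)))
```

One NOR becomes two for OR, three for AND, and a whole layered stack for XOR: the reduction works, but only by enlarging the circuit, which is the text's point.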

Parallel computing would become almost obsolete. Parallelism in a time frame that can repeatedly come back to its own start would mean Hadoop could map and reduce virtually at the same time. That wouldn't matter, though, because the same operation would finish in the same amount of virtual time as if it weren't parallelized. Parallelism could still be used to push the computability boundary beyond what can be computed in a lifetime of the universe. For instance, if a process would compute a result in four times the time the universe will exist, and can be broken into 4 parallel processes, that just transformed it from an incomputable problem into a computable one. In a world where speed has no meaning, Amdahl's law would become a computability formula.
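Amdahl's law as a computability formula can be written down directly: a job is "computable" exactly when its parallelized runtime fits inside the remaining lifetime of the universe. The lifetime here is just a normalized placeholder value:

```python
# Amdahl's law repurposed as a computability test. Units are arbitrary;
# the universe's remaining lifetime is normalized to 1.0 below.

def amdahl_runtime(total_time, parallel_fraction, workers):
    # serial part runs as-is; parallel part is divided among workers
    return total_time * ((1 - parallel_fraction) + parallel_fraction / workers)

def computable(total_time, parallel_fraction, workers, universe_lifetime):
    return amdahl_runtime(total_time, parallel_fraction, workers) <= universe_lifetime

# The example from the text: a job four times the universe's lifetime,
# fully parallelizable, split across 4 machines -- it just barely fits.
# With even a 10% serial fraction, the same 4 machines no longer save it.
```

Note the usual Amdahl caveat carries over: any serial fraction at all puts a hard ceiling on how far the boundary can be pushed, no matter how many machines are added.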

Time-hidden parallelism would only work with algorithms that use totally separate threads, which don't communicate with the others. Because they couldn't: from their perspective, the others have not even started yet. Mergesort would work, for instance. Each individually spawned process is afterwards independent of whichever process spawned it and, most importantly, of its parallel siblings. It is only when the results have to merge that they return back to their parent process. Recursion in that sense is a step back on the stack as well as in time. Processes only ever meet the results of their parallel siblings, never the processes themselves. If cooperation were needed, the time machine couldn't help with that. Despite the new possibilities, classical multithreading problems such as deadlocks, race conditions and the like would unfortunately stay.
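Mergesort shows the required shape clearly: the two recursive calls share nothing and could each run in their own "instant", and the parent only ever touches their finished results at the merge. A plain sequential version, with the independence called out in comments:

```python
# Mergesort as described above: the recursive calls are completely
# independent of each other; only their *results* ever meet, in merge.

def mergesort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    # These two calls never communicate. On a time machine, each could
    # run in its own instant; the parent sees only the sorted results.
    left = mergesort(items[:mid])
    right = mergesort(items[mid:])
    return merge(left, right)

def merge(left, right):
    # Combine two already-sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]
```

Any algorithm whose threads must exchange messages mid-flight lacks exactly this structure, which is why the text rules it out for time-hidden parallelism.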

Once race conditions are appropriately handled, however, there is some more good news. Where a time machine could help is in lockout situations: a machine could be programmed so that time returns to the point when the shared resource was first locked. A parallel process would not have to wait for its critical sections to execute, because it wouldn't notice the lock. This feature alone could greatly improve the scalability of parallel systems, even when multiple threads are interdependent.
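The closest thing we have today to "rewind to when the lock was taken" is optimistic concurrency: instead of blocking on a lock, a worker snapshots the shared state, computes, and commits only if nobody changed it in the meantime; on conflict it "rewinds" and retries. A sequential sketch of that retry loop, with all names (`OptimisticCell`, `compare_and_commit`) invented for illustration:

```python
# Non-time-travel analogue of the rewind-to-lock-point idea:
# optimistic concurrency with a version check instead of a lock.

class OptimisticCell:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        return self.value, self.version

    def compare_and_commit(self, expected_version, new_value):
        # Commit succeeds only if nobody committed since our read.
        if self.version != expected_version:
            return False   # conflict: caller must "rewind" and retry
        self.value = new_value
        self.version += 1
        return True

def add(cell, delta):
    while True:            # the "rewind" loop
        value, version = cell.read()
        if cell.compare_and_commit(version, value + delta):
            return
```

The time-machine version would simply make the rewind literal: the losing process is sent back to the moment the resource was first locked, so it never experiences the wait at all.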

So, to recap this lengthy article: NP still doesn't quite equal P, concurrency problems remain, and throwing exceptions is still very much needed, but ever so unintuitively confusing. It sounds like much will remain the same with the invention of time machines.
