Wikipedia:Reference desk/Archives/Computing/2015 June 20
Computing desk
< June 19 | << May | June | Jul >> | June 21 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 20
Broken device driver
Last night, my computer presented a BSOD with "driver state power failure" and was unable to find any networks whatsoever, and despite a few restarts, the situation persisted until I performed a system restore that worked wonderfully. Not clear, though: how can a driver break, anyway? Does some weirdness introduce bad code into the driver software, like a biological cell that makes a big mistake during DNA transcription? I initially feared some sort of damage to the hardware with which the driver works, but that wouldn't be fixable with a system restore. Nyttend (talk) 14:24, 20 June 2015 (UTC)
“It is possible, and even tempting, to view a program as an abstract mechanism, as a device of some sort. To do so, however, is highly dangerous: the analogy is too shallow because a program is, as a mechanism, totally different from all the familiar analogue devices we grew up with. Like all digitally encoded information, it has unavoidably the uncomfortable property that the smallest possible perturbations —i.e. changes of a single bit— can have the most drastic consequences. [For the sake of completeness I add that the picture is not essentially changed by the introduction of redundancy or error correction.] In the discrete world of computing, there is no meaningful metric in which "small" changes and "small" effects go hand in hand, and there never will be.”
— EWD1036
- We can't possibly know what imperceptibly small change caused this dramatic user-visible problem unless we dive deep into it with a powerful toolkit of software debuggers, source code, specifications, and schematics. Nimur (talk) 14:33, 20 June 2015 (UTC)
- I'm not asking for help on this specific incident; the problem's solved, so I don't care what caused it. My question is much more basic: how is it possible for drivers to become corrupted? Is it possible for the code itself to get changed, as if you went into a cell and tweaked its DNA? After all, in my experience, other programs (all the way from productivity stuff like Office to little games) don't get corrupted: documents can be corrupted, but I don't remember experiencing non-driver software that has its source code corrupted. Nyttend (talk) 14:39, 20 June 2015 (UTC)
- Perhaps the source code was not corrupted at all. There are lots of places, other than the executable program file, where state can be saved. If saved state is invalid, the same program may behave differently. Of all of these places where state can be saved, some will be "reset" by a software system restore. Shall we begin enumerating all of these places? We can start with the obvious: those items stored in the file system on the hard drive: data files, configuration files, shared libraries,... the list of relevant files can be quite extensive. But if you think "state" ends with the file system, you underestimate the state-full-ness of a modern computer! What about other peripheral hardware with nonvolatile memory, like the GPU or the main logic board controller? Do these hardware devices save persistent state that survives reboots, but not "system restore"? We really do need more information: make, model, software configuration, and a thorough listing of every single peripheral. If you wanted to report a bug to Microsoft (for example), they would need a complete system profile before they would start to investigate the problem. And again, let me emphasize: a single changed bit can cause a program to fall down a totally different path in an "if" statement, provided that the correct bit is changed.
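To make the single-changed-bit point concrete, here is a minimal sketch (purely illustrative; the flag word and the function name are invented for this example, not taken from any real driver) of how one flipped bit in persisted state steers the same code down a different branch of an "if" statement:
<syntaxhighlight lang="python">
# A hypothetical configuration word, as it might be read back from a
# config file, the registry, or a device's nonvolatile memory.
saved_flags = 0b0000_0100  # bit 2 set: "power management enabled"

def init_adapter(flags):
    """Pretend driver initialisation: picks a code path based on one bit of state."""
    if flags & 0b100:            # test bit 2
        return "low-power init path"
    return "normal init path"

print(init_adapter(saved_flags))          # -> "low-power init path"
print(init_adapter(saved_flags ^ 0b100))  # the same code, one bit flipped
                                          # -> "normal init path"
</syntaxhighlight>
Same program, a one-bit difference in its saved state, and a completely different execution path: that is why the visible symptom can be so dramatic even though nothing in the executable itself changed.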
- How would we find which bit got changed? We'd need a little bit of luck and a whole lot of tools that are not commonly available to mere mortals. When you see a "blue screen of death," that's the "user-friendly" version of a system error message. (This is a great irony!) A professional Windows driver-developer would know how to extract more information to find out exactly where the device-driver broke. If they had source-code for that driver, they could meaningfully investigate why the program terminated at that point, and caused the system to halt in an unrecoverable way. Without the code, we're searching for a proverbial needle in a haystack, and our haystack has billions of bits that all look exactly the same.
- As a normal user, a "BSOD" (or a kernel panic on other systems) is a terrible thing to encounter. But for a device hardware programmer, a "BSOD" is a good thing! It means that the problem has produced enough information to debug, so long as we know how to debug it. Nimur (talk) 14:57, 20 June 2015 (UTC)
- After reading the first part of your response (I editconflicted with your second and third paragraphs), I found the term State (computer science); is that what this means? Never heard of this before; I thought it was just the ordinary usage comparable to "condition", e.g. how the US president "shall from time to time give to the Congress Information on the State of the Union". Would your answer have been different if I had said only how can a driver break, anyway? Does some weirdness introduce bad code into the driver software, like a biological cell that makes a big mistake during DNA transcription? If so, what would you have said? Or should I take the basic answer as "The software didn't get corrupted, but there was a mistake in something it works with, and we can't tell what it was"? This specific situation is a well-documented problem, anyway. Nyttend (talk) 15:01, 20 June 2015 (UTC)
- The software (probably) didn't get corrupted, but there was a mistake in something it works with, and we can't tell what it was (unless we get a lot more information). Nimur (talk) 20:02, 21 June 2015 (UTC)
- (this is not Nimur) It is instructive to consider state in the case of web applications. HTTP is (at the most basic level) stateless: you request a page, the page is served, and the web server forgets you ever existed. State is the totality of things which are a certain way but could be different. Suppose you have a mail application running in your browser. You want your mails to be sorted by date, so you click on the button on top of the date column. You've just created state! When you refresh the page, the mails are still sorted by date. This means this bit of information - how you want the mails sorted - must have been stored somewhere ("made persistent") in the time between the requests. It could have been stored in the browser's cookies. The browser sends a cookie back to the server with every request. There could be a cookie for every such bit of information. What's more likely, however, is that the server created a unique identifier - the session id - that your browser remembers between requests and sends back to the webserver. This session id is the key into a database (which resides on the webserver) with all of your preferences for this session. The state got externalized!
Now suppose there are preferences which are mutually exclusive. You can't have mails sorted both by date and sender (because the sorting algorithm may not support sorting on more than one key; it is "unstable"). Suppose that through some unanticipated usage scenario you were able to affect both settings at the same time. These bits of information got out of sync with each other. At this point, which bit gets precedence (and which code path is taken) is more or less random (in the sense of "incidental"), depending on coding style, which bit is checked first, etc. In a less trivial example, the inconsistencies so introduced would have the capacity to snowball and make the whole software system unusable - even permanently so, because the corrupt, inconsistent state got stored somewhere and the only way to reset it is to reinstall the whole thing.
Good design will try to minimize the number of opportunities for inconsistencies to arise in the first place. In my example, this would be by storing the sorting order as one bit of information (which column) instead of several independent bits (a flag for each column saying "sort by me").
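A small sketch of the difference (the session store and field names here are made up for this thread, not taken from any real mail application):
<syntaxhighlight lang="python">
# Fragile design: one independent flag per column. Nothing in the data model
# prevents two flags from being stored as True at once, so a contradictory
# state can be persisted and later interpreted more or less arbitrarily.
fragile_prefs = {"sort_by_date": True, "sort_by_sender": True}

# Safer design: the sort order is a single piece of state, so a contradictory
# combination simply cannot be represented.
safe_prefs = {"sort_key": "date"}   # legal values: "date", "sender", "subject"

# A toy server-side session store, keyed by session id: the externalized
# state described above.
sessions = {"a1b2c3": safe_prefs}

def sorted_mails(mails, session_id):
    key = sessions[session_id]["sort_key"]
    return sorted(mails, key=lambda m: m[key])

mails = [{"date": "2015-06-20", "sender": "alice"},
         {"date": "2015-06-19", "sender": "bob"}]
print(sorted_mails(mails, "a1b2c3"))   # sorted by date, per the stored preference
</syntaxhighlight>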
Hardware has state, too. The keyboard controller in your keyboard remembers a typematic rate (the rate at which keystrokes are autorepeated when a key is held down) and delay (the delay after which autorepeat kicks in). This is state. When the computer is turned off and on again, these things must be programmed into the keyboard again from your preferences, which the operating system has stored. Asmrulz (talk) 16:35, 20 June 2015 (UTC)
- Just as in biology, the lower the level at which errors and inconsistencies occur, the more tragic and lethal they are. Consider an RS trigger (SR latch) in some chip on your mainboard. It outputs a 1 when the S (for "set") line is pulsed high, and a 0 when the R (for "reset") line gets pulsed high. When both lines are low, the output doesn't change; a feedback loop ensures this. This is state. What happens when both lines (S and R) are pulsed high at the same time due to some timing inaccuracies unaccounted for by the chip designer? It is undefined and there's no way to know; now one needs semiconductor physics to reason effectively about it. And there's also no way to program around such things. In everyday life, however, bugs are due to design and logic errors of the kind I mentioned, not because of flaky hardware. Asmrulz (talk) 17:49, 20 June 2015 (UTC)
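A rough model of that behaviour (a toy written for this thread, not a real circuit simulation; a physical latch would not raise an error, it would settle into some unpredictable state):
<syntaxhighlight lang="python">
def sr_latch(q, s, r):
    """Toy SR latch: q is the stored bit, s and r are the set/reset inputs."""
    if s and r:
        raise ValueError("S and R high at the same time: output undefined")
    if s:
        return 1          # set
    if r:
        return 0          # reset
    return q              # both low: the feedback loop keeps the old value (state)

q = 0
q = sr_latch(q, s=1, r=0)   # set   -> q == 1
q = sr_latch(q, s=0, r=0)   # hold  -> q == 1 (the latch "remembers")
q = sr_latch(q, s=0, r=1)   # reset -> q == 0
# sr_latch(q, s=1, r=1)     # the model can only flag this case; the real
#                           # outcome depends on the physics of the circuit
</syntaxhighlight>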
- It might be worth adding, as the OP noticed ("After all, in my experience, other programs (all the way from productivity stuff like Office to little games) don't get corrupted"), that this isn't an accident; it's because of Error handling, which is one area of computer science that has come a very, very long way. There used to be a time, when a computer had a single thread, when seemingly almost ANY error could completely hang your whole system. Over the years, operating systems have introduced multitasking and have progressively got better at error handling, to the point where a lot of errors in applications like games or "office stuff" can be recovered from, sometimes even on the fly, in the background! You can go into the application event log on your computer and it would be very rare indeed to see absolutely NO errors there, but you probably aren't even aware of most of them because the computer just does its thing. An application running in a 'thread' can typically at most crash its own thread, which the OS should be able to shut down, but even then it doesn't always work that way; it's certainly not unheard of for an application like Word to crash a computer. Just google "word crashing computer". Driver crashes are harder to recover from but not impossible, and advancements are constantly being made there too. An article you might find interesting related to this is Watchdog timer. :) Vespine (talk) 01:55, 22 June 2015 (UTC)
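A tiny sketch of the idea in generic Python (not how Windows itself implements it): a supervising layer isolates each task's failure instead of letting one bad task take the whole program down.
<syntaxhighlight lang="python">
def run_isolated(task):
    """Run a task; record its failure instead of letting it kill the program."""
    try:
        return task()
    except Exception as exc:                          # the error-handling layer
        print(f"task {task.__name__} failed: {exc}")  # would end up in an event log
        return None

def flaky_task():
    raise RuntimeError("simulated crash inside one application")

def healthy_task():
    return "still running"

run_isolated(flaky_task)           # failure is logged, the program carries on
print(run_isolated(healthy_task))  # -> "still running"
</syntaxhighlight>
A kernel-mode driver fault is much harder to contain, because the driver runs with the same privileges as the error-handling machinery itself, which is why driver bugs can still end in a BSOD.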
The complexity of an infinite-resources computer
Hi there,
I would like to know whether complexity has any meaning if we assume a computer has infinite resources and abilities.
I'm talking about a machine that takes absolutely 0 time to run an algorithm, and has an infinite cache or memory space. — Preceding unsigned comment added by Exx8 (talk • contribs) 14:34, 20 June 2015 (UTC)
- In December 2013, a different user asked about "universal algorithms" that could run on hypothetical types of computers, and received some good theoretical answers. What does it really mean to run in zero time? If you think hard about this question, and think about the definition of computing, you will recognize that "zero calculation time" contradicts the definition. Nimur (talk) 14:42, 20 June 2015 (UTC)
- This is a totally different question. Exx8 (talk) — Preceding undated comment added 15:23, 20 June 2015 (UTC)
- Well, the more complex an algorithm, the greater the chance of a bug. So, we might want to replace things like complex sorts with simpler bubble sorts. Of course, this makes no sense in the real world, where a large bubble sort is absurdly slow. StuRat (talk) 17:00, 20 June 2015 (UTC)
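For illustration, a generic textbook bubble sort (not tied to anything specific in this thread): conceptually about as simple as sorting gets, yet it makes on the order of n² comparisons.
<syntaxhighlight lang="python">
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until the list is sorted."""
    items = list(items)
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):        # roughly n*(n-1)/2 comparisons in total
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
</syntaxhighlight>
On an infinitely fast machine those comparisons cost nothing, so only the conceptual simplicity would matter.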
- @StuRat: Note that computational complexity isn't the same thing as conceptual complexity. In computing, bubble sort has an average complexity of O(n²), greater than quicksort's average of O(n log n).
- To answer the OP's question, I can think of one interesting implication complexity would have given a theoretically omnipotent machine. The problems in the complexity class RE have the property that while 'yes' cases can be verified in finite time, 'no' cases cannot. Given that there is no limit on the 'finite' amount of time the yes certificate may take, this property is not very useful for ordinary computers; for human purposes there is no difference between an infinite calculation and one that takes only a few billion years. In other words, despite the existence of a finite yes-certificate a normal computer cannot give us any useful information about the solutions to the vast majority of RE and co-RE problems. By contrast, your proposed infinite computer would make these problems easily solvable. Given an RE decision problem and an algorithm for producing its yes-certificate, the algorithm would halt instantly if the answer was yes and would continue to execute if the answer was no. (The opposite for co-RE). You would effectively produce a no-certificate, reducing these problems to a lower complexity class. —Noiratsi (talk) 13:34, 21 June 2015 (UTC)
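A toy illustration of that property (the predicates here are invented and trivially decidable; the interesting RE problems, like the halting problem, are the ones where no better procedure exists): a semi-decision procedure halts as soon as it finds a yes-certificate and runs forever otherwise.
<syntaxhighlight lang="python">
from itertools import count

def semi_decide(has_witness):
    """Enumerate candidate witnesses 0, 1, 2, ... and halt when one is found.
    If the true answer is 'no', this loop never terminates."""
    for candidate in count():
        if has_witness(candidate):
            return candidate          # a finite yes-certificate

# A question whose answer is 'yes': is there a multiple of 7 greater than 100?
print(semi_decide(lambda n: n % 7 == 0 and n > 100))   # halts, prints 105

# A question whose answer is 'no' (no natural number is negative) would make
# the same procedure run forever:
# semi_decide(lambda n: n < 0)
</syntaxhighlight>
On a real computer the non-halting branch tells you nothing; on the hypothetical zero-time machine, "it is still running" is itself the no-answer Noiratsi describes.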
- Yes, both types of complexity are important in computing. In the theoretical case of an infinitely fast computer, only the conceptual complexity would then be an issue. StuRat (talk) 16:08, 22 June 2015 (UTC)
- The answer to this lies in the Church–Turing thesis - which holds that all currently known computing mechanisms are equivalent, except for time and storage capacity concerns. Since any existing computer can both simulate - and be simulated by - a Turing machine (assuming time and storage are not an issue), it follows that any computer can do what any other computer can do (a minimal simulator sketch follows after this comment). The huge complexity of (say) a modern x86 class machine compared to (say) a minimal RISC machine is only needed for performance, ease of programming or compactness of programs - and none of those things matter if time and space are not an issue.
- Some people will maintain one of the following three things:
- The human brain is somehow "more" than a Turing machine -- but modern science suggests that brains are made of neurons - and neurons can be sufficiently simulated by a Turing machine, so if time and space are no concern, then we can clearly simulate a human brain with a Turing machine.
- That quantum computers are somehow "more" than a Turing machine -- but, again, we can simulate (possibly VERY slowly) the operations of a quantum computer...so, again, it's nothing more than a Turing machine.
- That there is some hypothetical "hypercomputer" architecture that is somehow "more" than a Turing machine - but if it's made of normal 'stuff' that physics equations exist for - then it too can be simulated by a Turing machine.
- So I'm 99% certain that (barring some kind of astounding future breakthrough in physics) the answer to your question is "NO!" - complexity is irrelevant if you have infinite speed and infinite storage. But it most certainly DOES matter if either of those things is finite...which, of course, they are in most practical applications.
- It's worth noting that the finite speed of light means that there is a hard limit in speed/memory - the more memory you need, the larger the machine has to be, and given the speed of light limitation, the slower it will be.
- SteveBaker (talk) 03:07, 22 June 2015 (UTC)
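To make the "simulate it with a Turing machine" point concrete, here is a minimal simulator sketch (a generic toy written for this thread; the example machine simply flips the bits on its tape and is not any particular real machine):
<syntaxhighlight lang="python">
def run_turing_machine(program, tape, state="start", blank="_", max_steps=10000):
    """Simulate a single-tape Turing machine.

    program maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right). Returns the final tape contents.
    """
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):         # a real TM has no step bound; this keeps the demo finite
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example machine: walk right, flipping 0 <-> 1, and halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", +1, "halt"),
}

print(run_turing_machine(flip_bits, "010110_"))   # -> "101001_"
</syntaxhighlight>
Anything a fancier architecture can compute, a table-driven loop like this can also compute (given enough time and tape), and vice versa; the difference, as SteveBaker says, is only speed and storage.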
- The answer, I think, is that complexity does still matter, but it applies to orders of infinity rather than finite numbers. With your conditions, any algorithm with a countably infinite number of steps can be executed in zero time. But it isn't clear what happens if you need to execute a procedure with an uncountably infinite number of steps and/or uncountably infinite cache size -- for example, if you need to apply a test to every real number between zero and one. Looie496 (talk) 14:53, 22 June 2015 (UTC)
- Any algorithm that an infinitely-fast/infinite-memory Turing machine can solve has to be solved in a countably infinite number of steps. So perhaps you could imagine an algorithm that required one to do something with every possible real number - which would be an uncountable infinity, for sure. But it's not something that a Turing machine can solve, because it operates by single (countable) operations on a tape full of 1's and 0's passing through the read/write head of the machine. So there are certainly tasks that would require more complexity than a Turing machine can have.
- But machines that can solve problems involving uncountable infinities can't exist if their storage systems have to be made from things like atoms, electrons and photons that are countable...even if we allow for an infinite number of them.
- So you need something that's logically more complex than a Turing machine to work problems that involve uncountably infinite stuff. But such a machine can't be made of countable amounts of stuff, because if it were, then a Turing machine could emulate it.
- I guess on some hyper-abstract mathematical level, that says that more complexity is possible - but in any physical sense, it's not. SteveBaker (talk) 16:25, 23 June 2015 (UTC)
Apple Pay
Does Apple Pay work with any contactless NFC receiver as long as Apple have a deal with your bank? Or does it require the store to have a deal with Apple too? 2001:268:D005:E529:B926:913F:AB27:B1C (talk) 22:43, 20 June 2015 (UTC)
- Please see the first paragraph of our article on Apple Pay.--Shantavira|feed me 07:48, 21 June 2015 (UTC)
- Then why did Apple have to strike a deal with London's transportation system for it to work there, despite it working with contactless cards already? 106.142.222.212 (talk) 08:52, 22 June 2015 (UTC)
- And also why did Apple do deals with some vendors anyway? 2001:268:D005:E387:FA:DBE9:4AB2:7072 (talk) 09:20, 22 June 2015 (UTC)