Wikipedia:Reference desk/Archives/Computing/2016 January 25
Computing desk
< January 24 | January 26 >
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 25
4G Over 3G
What is the advantage of 4G mobile phones over 3G ones? KägeTorä - (影虎) (もしもし!) 00:52, 25 January 2016 (UTC)
- See Comparison of mobile phone standards#Comparison of wireless Internet standards. But you will also have to consider whether the phone can do it, whether the right band is on the phone, whether the carrier offers it, and whether it is available on the cell towers where you want to use it. Also consider that much of what is marketed as 4G is not genuine 4G (see 4G#Technical understanding). The phones may actually support Long Term Evolution or LTE Advanced, so look for that in your comparison. Graeme Bartlett (talk) 01:32, 25 January 2016 (UTC)
Unix command question
When you type something like
./mysql -u root -p
What do the u and p signify? 68.142.63.182 (talk) 02:15, 25 January 2016 (UTC)
- Command line options like that tell the program what to do. From [1]: The -u root means to connect as user root, and the -p specifies which port number to connect to. In your example, you are missing the number that would follow the -p. RudolfRed (talk) 02:50, 25 January 2016 (UTC)
- Close, but you only got the first half right. -P (case sensitive) is the port; -p is the PASSWORD. As in, specify the user account to run the command as, and authenticate using that account's password. The -u/-p convention is very common among command-line tools. Vespine (talk) 03:51, 25 January 2016 (UTC)
- More generally, they mean whatever the program (in this case, mysql) interprets them to mean. They're just arguments that get passed to the program. For the meaning, look at the program's documentation. There are conventions for a few common "switches": -v usually means "verbose", making the program print more output. But this isn't enforced by anything other than programmers choosing to follow those conventions. --71.119.131.184 (talk) 03:59, 25 January 2016 (UTC)
- The relevant reference being Command-line interface#Arguments Vespine (talk) 05:16, 25 January 2016 (UTC)
- And to clarify for the OP, the -u, -p, etc. are most commonly referred to as flags or options. This is a bit different from, say, a positional input argument. Knowing this makes similar questions easy to google. What does -e do for grep? Just google /grep e flag/, and the first hit [2] gives the answer. Sometimes the flags have similar meanings between programs (-u is often used to set the user), but not always; see the short shell sketch at the end of this thread. SemanticMantis (talk) 14:54, 25 January 2016 (UTC)
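For illustration, here is a minimal shell sketch of the convention discussed above (the port number, file name, and pattern are hypothetical; each program's own --help or man page is the authoritative reference for what its flags mean):

    # MySQL client: -u selects the account, lower-case -p prompts for that
    # account's password, and upper-case -P sets the TCP port
    ./mysql -u root -p
    ./mysql -u root -p -P 3307

    # long-form equivalents spell the meaning out
    ./mysql --user=root --password --port=3307

    # the same letters mean something else to other programs; for grep,
    # -e marks the next argument as the pattern to search for
    grep -e '-pattern-with-a-leading-dash' logfile.txt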
Thank you. 68.142.63.182 (talk) 00:48, 26 January 2016 (UTC)
Is GPU built into CPU wasted if graphics card is present?
Some CPUs have built-in GPUs, which eliminate the need for a graphics card for some applications. If the processor is used in a motherboard with a graphics card inserted, though, does that mean the GPU built into the processor is wasted, or does it still contribute? If it doesn't contribute, is it a sensible strategy to look for a processor without a GPU to save money? --78.148.108.55 (talk) 13:22, 25 January 2016 (UTC)
- Some laptops use switchable graphics solutions where the standalone GPU is only supposed to be used when needed (particularly for games). But you mentioned a card, so I guess you're not thinking of laptops. While the same thing was tried with desktops (and similar) back before the GPU was integrated into the CPU, it was largely abandoned for numerous reasons, including changes in the Windows driver model with Vista; but probably most of all because, as graphics card idle power improved, the power savings became fairly small compared to the compatibility problems.
If you have an AMD GPU and the right AMD APU, these can work in a Crossfire config. But because AMD hasn't updated their APU GPUs for a while, you're limited to fairly old cards, and the benefit is small anyway if you have a mid-range card. Cross-vendor GPU-CPU pairing (i.e. NVIDIA with Intel, AMD/RTG with Intel, NVIDIA with AMD) hasn't really been possible for a while. Okay, LucidLogix Virtu MVP tried something like that, but IIRC it was worse (and worse supported) than AMD's Crossfire setup, so it never really took off and seems to have been largely abandoned.
Theoretically, and especially with GPGPU, it's possible for both to be used; in practice this rarely happens for home users. (I guess some miners and others who go out of their way may use both.) It's possible that DX12 will change things, but it's hard to say whether this is really going to happen. [3] A quick way to check which GPUs your own system actually exposes as compute devices is sketched at the end of this reply.
As for your suggestion of a sensible strategy, the answer is mostly no. For starters, since we're so far into the APU/SoC era, very few CPUs lack GPUs, particularly those targeted at home users. More significantly, particularly once you get past the low end, the connection between CPU price and production cost is very tenuous. It's really all about coming up with different products for different market segments, disabling features as needed (often not because they are broken or because this saves costs, but because you know people will be willing to pay more for them). And considering the poor situation AMD is in, it's really mostly Intel we're talking about here; and Intel has no real interest in encouraging the standalone GPU market.
The best you can do, if you're planning to get a standalone GPU, is not to worry about the iGPU. But even this is of limited utility, since the best CPUs tend to have the best iGPUs. (There are exceptions, particularly for Intel's top-end iGPUs.)
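As a rough way of seeing this on your own machine, the sketch below (assuming a Linux system with the pciutils package and, for the second command, an installed OpenCL runtime plus the clinfo utility, neither of which may be present by default) lists the GPUs the system exposes, integrated and discrete alike:

    # list every GPU the PCI bus reports, integrated and discrete
    lspci | grep -iE 'vga|3d|display'

    # if an OpenCL runtime is installed, clinfo shows which of those GPUs
    # are actually available as GPGPU compute devices
    clinfo | grep -i 'device name'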
- Here is an article from 2010: Inside Apple’s automatic graphics switching. "The main goal of Apple's automatic graphics switching is to balance graphics performance with long battery life..."
- What this means to the end-user of a Mac with automatic graphics switching is that the system takes advantage of both the discrete GPU and the Intel HD GPU.
- If you are using some other hardware or system software, the onus is on the system-designer to make intelligent use of all the hardware they've got. Nimur (talk) 15:09, 25 January 2016 (UTC)
- You mean it's the responsibility of Apple? At the time, it sounds like it was only working on Apple laptops with discrete GPUs (well, laptops that also had CPUs with iGPUs, but laptop CPUs without iGPUs are so rare now it's not worth worrying about), probably for the same reason everyone else ended up limiting it to laptops, as I mentioned above. From what I can tell it's still only on Apple laptops (MacBook Pros), so I guess Apple hasn't found this any more useful than other vendors have on desktops. (The other vendors pretty much all had the technology in some form long before there even was an x86 CPU with a GPU. The technology was first used with IGPs, which was when it was briefly tried on the desktop.)
Perhaps Apple's implementation was better at the time, but it definitely wasn't the first. (Perhaps it still is better, particularly in terms of compatibility and/or driver updates.) In fact pretty much everyone had it before that Apple implementation, as your article sort of says. (To be fair, my understanding is Apple also had it in some form before the article you linked to.) And as I hinted at above, other vendors mostly still have it now.
Anyway, since the OP appears to be interested in desktops or similar (as I said above, they mentioned graphics cards), it remains unexplained how Apple is making "intelligent use of all the hardware they've got" for the use case the OP appears interested in.
Nil Einne (talk) 17:37, 26 January 2016 (UTC)
- Let me clarify: if the OP is using some other device, including a system that the OP has assembled themselves, then the OP is the system designer. The onus is on them - the system designer - to intelligently use the GPU or GPUs that are built into their computer. Nimur (talk) 17:38, 26 January 2016 (UTC)
- But the main point is that very few system designers are actually using the GPU on the CPU if the system comes with a discrete GPU, at least on desktop-like systems. Apple isn't, as far as I can tell, nor are Dell, HP, etc. Where this does happen on the desktop, it mostly happens at the software level and doesn't have much involvement from the system designer. If you're saying that when Apple sells you a MacBook Pro (or an iMac, if any of them have discrete graphics cards) the onus is on Apple as the system designer, but when they sell you a Mac Pro the onus is on you even though you're not the system designer, frankly that makes no sense.
With laptops, you haven't really been able to design them yourself for a long time, and pretty much all system designers have been using both GPUs in some fashion for a long time, since before 2010 or before GPUs were integrated onto the CPU. So beyond it not being what the OP seems interested in, it doesn't seem to help much. Laptops with Linux are to some extent the only real exception, since switchable dual-graphics support there has often been limited; and if you were installing Windows yourself you did have to be careful with drivers. (Likewise, if you really were designing the system yourself, you did need to take a bit of care to ensure switchable dual graphics worked.)
Getting back to my earlier point, it's actually been possible to use both GPUs in some fashion for a long time, especially after GPGPU began to become a thing (which was before iGPUs existed). This has been supported at some level by the OS and by systems as designed. Even if you were assembling your own system, you didn't really need to do much a lot of the time. But while it's been supported, as mentioned in my first post, it hasn't AFAIK actually been used much. This is for a variety of reasons, including that the support wasn't that good and that software designers just didn't feel it was useful, particularly considering the compatibility problems that can result (which to some extent relates to the support issue). For the former you can, I guess, blame the system designer. For the latter, it doesn't make much sense to blame the system designer, unless you use the odd definition of system designer under which, when I buy a Mac Pro or iMac or Alienware desktop or HP desktop or whatever from my local store and take it home to play GTA5 and Fallout 4, I'm a system designer. (But maybe not if I bought a Dell laptop or a MacBook Pro?)
Ultimately, whoever you want to blame it on and whatever you want to call them, the point is that as an end user you have limited choice. If your software doesn't use both GPUs, and there's no software which will fulfil the same purpose in every way but will use both GPUs and be better for it, then there's not much you can do, except code your own software, which makes little sense for most users. It gets even worse if you're talking about games. If I want to play GTA5, I'm not likely to choose some other game just because it uses both GPUs, and coding your own GTA5, or even hacking it to use both GPUs, is most likely untenable even for an excellent coder.
And unless you actually have a need for software which will use both GPUs, it doesn't make sense to run it just because the GPU is otherwise going unused. Given idle power improvements, using the GPU on the CPU will generally mean more energy consumption and heat generated, which, even in winter and even if you use electrical heating, is IMO not necessarily useful. More significantly, if the CPU supports some sort of turbo mode, using the iGPU may mean the CPU cores you're using for the stuff you actually want aren't clocking as high, or for as long. And that's not even considering possible slowdowns due to the memory subsystem or CPU or other elements being used by this program you don't actually have any use for, but are just running because your GPU on the CPU would otherwise go unused.
What this all means, to get back to the OP's original point, is that you may have to accept that the GPU on your CPU is simply going to be unused. From your POV, it might be better if the CPU didn't have a GPU, since it's going to waste and may increase power consumption a little even when unused. But since you aren't Intel and can't control their marketing decisions, the most intelligent thing to do is to choose the best CPU based on price-performance for your purposes and budget. That may sometimes mean a lower-end iGPU, but it often isn't going to mean no iGPU. To be fair, this isn't unique to Intel: all companies add features to their products for a variety of reasons, and some of those features are going to go unused by a fair few end users. And just as in those cases, it may seem a bit stupid to have a feature you aren't going to use, but if it isn't causing problems you should ignore it and concentrate on the features you do want and the price.
If you really want to look into it, the LucidLogix Virtu MVP that I mentioned before is actually an interesting case study, IMO. As I understand it, it was initially at least dependent on the system or motherboard. (I'm not sure if this changed; I didn't follow or research it that closely, except to check that it exists. Most results seem to be old, probably for the reason mentioned: it didn't have much success, so no one cares anymore.) But I think this was a licensing or compatibility thing; it was otherwise purely software and just required the two GPUs. So theoretically the system designers did provide something to use both GPUs (just as they did in the early pre-iGPU days when they supported switchable graphics with IGPs and discretes).
But as I mentioned, this seemed to largely fail. Whether it was because the software wasn't that good (compatibility problems etc.), or it didn't help enough to be worth it, or it did help a fair amount but people didn't realise, the technology mostly failed; so who you want to blame it on is complicated. FWIW, it was still supported up to Windows 8/8.1 at least (not sure about 10), so I guess you could still try it if you think people were wrong to reject it. One thing which I perhaps didn't make clear enough until now: perhaps the reason these all failed is that the actual advantage you get from using the often very slow iGPU, when you have a much faster discrete GPU, is very limited. (Which is another factor not in favour of LucidLogix etc.: these technologies add cost, so they are added to expensive systems, which are also the systems which tend to have very fast discretes.)
To be fair, with the iGPU improving, and with certain non-graphical tasks (like the physics, or sometimes even parts of the AI) that aren't particularly demanding compared to the graphics a game runs on the GPU, yet where even the weak iGPU is a lot better than the CPU, it does seem like it would make sense to use the iGPU. And with the iGPU able to share resources with the CPU, it can have particular advantages despite its low performance compared to the discrete. AMD definitely believed in HSA for a long time (I think they still do), and there's also interest in it on mobiles (albeit those don't have discretes). So perhaps with DX12 and similar, combined with other areas of progress, this really will finally take off. However, since we can't be sure this will happen, I don't believe it makes sense to choose a higher-end iGPU (or even an iGPU at all, if you find you do have the choice) because you may one day use it.
P.S. It's possible that even now you're one of the few who does have a use for the iGPU, so I guess the OP should explore their software and check. If they find they are, then I guess you could say they're one of the lucky ones. It still doesn't change the fact that, for most people, it seems "intelligent use" is no use; and, to answer the OP's question, most of the time it does effectively go to waste, but the suggested strategy is probably still not sensible.
Handbrake not detecting video episode length correctly
I'd like to encode MKV files from a folder containing the contents of a DVD (VOB files etc). The DVD itself is a thousand miles away and I won't have a chance to collect it for three months. The VOBs appear to play correctly, though I haven't watched them all the way through, and Handbrake detects them as valid sources, but instead of 3 episodes of roughly 1 hour each, it shows 2, the second of which is roughly 2 hours. It must be missing the point at which the second episode ends. Without access to the original DVD, is there some way I can get Handbrake to see this? 94.12.81.251 (talk) 14:13, 25 January 2016 (UTC)
- This is a common problem. The HandBrake user guide points you to this forum post, a guide to troubleshooting chapter scan problems. There is a UI preference to "scan all chapters" which forces a thorough scan of the input; that takes a long time, but will probably be needed in your case.
- If that doesn't work, try following the full troubleshooting guide step-by-step. Nimur (talk) 14:40, 25 January 2016 (UTC)
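If the graphical scan keeps merging the episodes, the command-line build can at least show what HandBrake's scan actually finds. A rough sketch (the paths and the title/chapter numbers are hypothetical; confirm the exact flags against HandBrakeCLI --help):

    # scan every title in the ripped folder and report what is found
    HandBrakeCLI -i /path/to/VIDEO_TS -t 0 --scan

    # then encode a single title, optionally restricted to a chapter range
    HandBrakeCLI -i /path/to/VIDEO_TS -t 2 -c 1-4 -o episode2.mkv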
Understanding sandboxes
Why isn't everything, and not only the browser, sandboxed? At least the common targets of viruses and malware could be sandboxed.--Scicurious (talk) 15:53, 25 January 2016 (UTC)
- Sandboxing imposes very severe limits on what software can do. For example, JavaScript can't just decide to upload a file to the server...it has to ask you first...and it has to ask you every time. That restriction sharply limits what JavaScript is able to do. SteveBaker (talk) 16:05, 25 January 2016 (UTC)
- Sounds like a reasonable limitation. Why should I allow MS Word, Adobe, Excel, or PowerPoint to connect to a server and upload information to it?--Scicurious (talk) 16:40, 25 January 2016 (UTC)
- No, you don't understand - if those programs were "sandboxed" in the way that JavaScript programs are in a browser, then they wouldn't be able to load any files whatever from your local computer AT ALL without asking you to hand them the filename explicitly every single time. Those programs probably load up dozens of other files on startup - things like fonts and configuration files, spell-check dictionaries and so forth. None of those things would be allowed in a sandboxed environment because you don't want an untrusted program to start rummaging around on your harddrive looking for (say) credit card numbers and passwords and sending them off to the bad guys. With programs that aren't sandboxed - you have to trust them not to do those kinds of things...and when they do, you get problems. But shutting a program off from reading and writing arbitrary files on the hard drive imposes serious limitations on what they can do...which would actually force those kinds of operations to happen "in the cloud" where they can't hurt your machine. Making that limitation would effectively force all applications to be web-based and run in-browser where they can't get at your private stuff. SteveBaker (talk) 15:08, 28 January 2016 (UTC)
- Many programs work with other programs. Operating systems are built on inter-process communications. If you sandbox everything, you are basically saying that no program can talk to any other program without explicit permission on both sides for every single communication attempt. So, as a very simple anecdote, I run Gimp. I want to scan an image. XSane is a different program. Gimp cannot talk to XSane because Gimp is sandboxed. I have to specifically allow Gimp to talk to XSane. Then, XSane goes to talk to my scanner. Oops. The scanner driver is a different program. XSane can't talk to the scanner. I have to specifically allow XSane to talk to the scanner driver. A simple task of scanning an image creates at least two "Are you sure" prompts. As you probably know, humans don't read those. They just click "yes", "agree", "proceed", "just do what I told you to do" without reading. So, sandboxing with human intervention for inter-process communications doesn't solve the problem in any way. It makes it worse because it trains users to ignore the communication warnings for common communications. So, you should be asking why things are sandboxed in the first place. It is because users tend to install all kinds of plugins and addons without any concern for security. Sandboxing is a poor fix for human ignorance/laziness/stubbornness/stupidity... A better fix would be to remove the ability for users to cram their web browser full of plugins that they don't need and cause security issues. 199.15.144.250 (talk) 20:30, 25 January 2016 (UTC)
- The design goal of Qubes OS is for applications to run in their own lightweight, quickly spun-up virtualisation environment. Communication with other applications, and access to the filesystem and devices, is mediated by the virtualisation envelope (which is a lot more granular than a simple yes/no sandbox). It's also possible (I'm remembering from some presentation I saw, hopefully correctly) to spin up a disposable application instance - so e.g. if you wanted to do some online banking, it could produce a browser instance clone, and once you're done and you close the instance, the whole VM and all its storage is destroyed, taking with it any credentials, cookies, session-keys and the like. -- Finlay McWalterᚠTalk 20:41, 25 January 2016 (UTC)
- Consider that naive sandboxing (unlike the fancy stuff Finlay mentions) would create serious obstacles to simple things like Clipboard_(computing) functionality. The Unix permissions are not the same as sandboxing, but they've been shoring up obvious security problems for a very long time now, using an analogous paradigm of only letting programs do certain things in certain places. Linux variants and OSX are all Unix-like (or *nix, etc.), so both have nice permissions systems; I don't know if the Win systems ever got around to doing user/group/file permissions better. SemanticMantis (talk) 21:22, 26 January 2016 (UTC)
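For anyone unfamiliar with that paradigm, a minimal sketch of Unix file permissions (the file name, user names, and the output shown in the comments are purely illustrative):

    # let the owner read and write, the group read, and everyone else nothing
    chmod 640 notes.txt
    ls -l notes.txt
    # -rw-r----- 1 alice staff 1234 Jan 25 2016 notes.txt

    # a process running as a different ordinary user is simply refused
    sudo -u bob cat notes.txt
    # cat: notes.txt: Permission denied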
- It is for the same reason that security sucks in general: it's difficult and few people understand how to do it. And desktop OSes don't provide good support for it.
- I strongly disagree with the people above who think that with sandboxing you can never get anything done. Sandboxing just means that you have isolated software components with clearly specified channels of communication between them. That's good software design, regardless. The current desktop OS model only supports the equivalent of spaghetti code: everyone has unrestricted access to everyone else's data on the same desktop, even across users. It's that way because these systems were designed before the current malware crisis, and people tend to adapt to the status quo and not realize how much better things could be. -- BenRG (talk) 22:29, 26 January 2016 (UTC)
- Your apparent definitions of sandboxing and operating systems are not what I've seen in the last 30 years of computer science. I will try to be brief: Operating systems have a goal of protecting the system by restricting access to resources. Your claim "everyone has unrestricted access to everyone else's data on the same desktop" is completely false for multi-user operating systems (essentially everything since Windows NT). If you have an account on my computer and you try to access my files or the memory of my processes, you can't. It is restricted. Malware is programmed to exploit cracks in those restrictions. It is short-lived because the operating system programmers will program a fix for the malware threat - but then it is too late for those infected. Sandboxing completely removes access to the resource. It isn't enough to say it restricts access. It removes any and all access. Often, it will create a virtual resource to trick the process into thinking it has access to a resource. So, that should cure the problem, but programmers can find cracks in the borders of the sandbox and exploit those. As for the claim that "you can never get anything done," that is not what I was stating. I was stating that a sandbox will create a lot more warnings about breaching the sandbox security when trying to get things done. That will train users to ignore the warnings, making them pointless and making the entire point of a breachable sandbox pointless. 199.15.144.250 (talk) 13:20, 28 January 2016 (UTC)