Wikipedia:Reference desk/Archives/Computing/2013 July 30

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


July 30


Mobile Station Assisted GPS


[1]

Will Mobile Station (MS) Assisted GPS calculate the location of the mobile regardless of whether the call is an emergency call?

Usually, Mobile Station Assisted GPS calculates the position of the mobile on a network server in terms of latitude and longitude, while Mobile Station Based GPS calculates the position of the mobile in the mobile itself. Both of these methods are used during an emergency call. Is it possible to calculate the position of the mobile even if the call is not an emergency call?

I would appreciate it if you could clear my doubts at your earliest convenience.

Thank you,


JOHN ROSEs (talk) 11:24, 30 July 2013 (UTC) JOHN ROSE 30-JULY-2013[reply]

Strange problem with executing an application written in C++


I am experimenting a bit with C++ containers. I've written some code using vectors. The code looks valid to me according to what I've read at http://www.programmingincpp.com/vector-and-list.html.

#include <iostream>
#include <vector>
#include <string>

using namespace std;

int main()
{
    string lowstr;
    string upstr;
    cout << "Enter lower bound of search interval: " << endl;
    cin >> lowstr;
    cout << "Enter upper bound of search interval: " << endl;
    cin >> upstr;
    int lowlength = (unsigned) lowstr.size();
    int uplength = (unsigned) upstr.size();
    vector<char> lowbound(lowlength);
    vector<char> upbound(uplength);
    for(int i = 0; i = lowlength; i++){
        lowbound.push_back(lowstr.at(i));
    }
    for (int j = 0; j = uplength; j++){
        upbound.push_back(upstr.at(j));
    }
    cout << lowbound[0] << endl;
    cout << upbound[0] << endl;
    return 0;
}

The code seems to compile. However, when I execute the program, after entering the two values, I get the error

terminate called after throwing an instance of 'std::out_of_range'

what(): basic_string::at

This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information.

In addition I get the typical Windows error telling me the exe isn't functioning properly anymore. Any ideas what the problem is here and how it could be fixed? -- Toshio Yamaguchi 12:26, 30 July 2013 (UTC)[reply]

The message pretty clearly says that you called basic_string::at() with a value that was out_of_range. Here, it looks like you meant < instead of = in the two loop conditions. --Tardis (talk) 12:38, 30 July 2013 (UTC)[reply]
The error also occurs when I replace i = lowlength; with i < lowlength; in the loop condition. -- Toshio Yamaguchi 13:12, 30 July 2013 (UTC)[reply]
Try a similar change in the for j loop. When your program enters the loop, it initializes j with zero, then it assigns uplength to j. Next it checks the value, which turns out to be non-zero, so it goes on to at – and it uses the value of j, which equals upstr.size(), and that, I suppose, causes the error. --CiaPan (talk) 14:35, 30 July 2013 (UTC)[reply]
What exactly do you believe this program should be doing? As written above, the code does no useful work (even if we fix the incorrect for-loop condition, replacing = with < where it would make slightly more sense).
As written, the program prompts the user to input string boundaries, and then assigns those boundaries to the strings: lowstr and upstr. Then, for reasons unknown, the program iterates over the strings, and appends characters to a second set of strings, lowbound and upbound. Finally, the program prints one character from each of these "outputs."
On my system, if I make the modification described above, I can compile the program and execute it; it correctly prints nothing (because the for-loops do nothing) and returns no output.
I suspect our OP needs a little more help than a mere C++ syntax check. The code is syntactically correct, but the instructions contained in the program make no logical sense. Nimur (talk) 16:56, 30 July 2013 (UTC)[reply]
It's weirder than that, because lowbound and upbound aren't strings, they're vector<char>, and constructing them with vector<char> lowbound(lowlength); means "make a vector and fill it with lowlength zeros". The code then checks lowbound[0], which is thus always going to be a 0. That's surely not what Toshio wants. It's not necessary to specify the size of a vector when creating it (Toshio may have come from Java, where doing so is an optional parameter in Vector construction). It's perfectly sensible to say vector<char> lowbound;, but even with that, I'm not understanding what the code is supposed to achieve. -- Finlay McWalter (talk) 17:10, 30 July 2013 (UTC)[reply]
I suspect Toshio is trying to fill a vector<char> with each character from a string, so that if lowstr contains "fred", lowbound should end up as a vector of 4 elements containing 'f', 'r', 'e' and 'd'. Not only is the for loop incorrect, I'm also not sure the constructor for a vector<char> takes a parameter, and does the string class even contain an "at" method? Astronaut (talk) 18:56, 30 July 2013 (UTC)[reply]
Yes, that is correct. I want to store the symbols from the strings in the vectors. Then the program should print the first symbol in each vector. I see now that I might have made an error in my thinking. As I see it, push_back appends the symbol at position i in lowstr after the end of the vector. -- Toshio Yamaguchi 20:23, 30 July 2013 (UTC)[reply]
I ran a test and it compiled and ran ok.
The vector object can be created without the need to supply a size; the syntax would be vector<char> lowbound;.
Indeed the string class does not have an "at" method. The syntax to get a single character is just like in C: lowstr[i].
The "push_back" method simply adds a new element at the end of the vector; in your case the syntax would be lowbound.push_back(lowstr[i]); which adds the single character lowstr[i] to the vector.
Fixing these and the for loop made it work. Astronaut (talk) 16:59, 31 July 2013 (UTC)[reply]
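For reference, a minimal corrected sketch along the lines described above - assuming the goal is simply to copy each character of the two input strings into the vectors and then print the first element of each - might look something like this:

#include <iostream>
#include <string>
#include <vector>

using namespace std;

int main()
{
    string lowstr;
    string upstr;
    cout << "Enter lower bound of search interval: " << endl;
    cin >> lowstr;
    cout << "Enter upper bound of search interval: " << endl;
    cin >> upstr;

    // Start with empty vectors; push_back grows them as needed,
    // which avoids the pre-filled zeros mentioned above.
    vector<char> lowbound;
    vector<char> upbound;

    // Copy each character of the strings into the vectors.
    for (string::size_type i = 0; i < lowstr.size(); i++) {
        lowbound.push_back(lowstr[i]);
    }
    for (string::size_type j = 0; j < upstr.size(); j++) {
        upbound.push_back(upstr[j]);
    }

    // Print the first character of each vector (guarding against empty input).
    if (!lowbound.empty()) cout << lowbound[0] << endl;
    if (!upbound.empty()) cout << upbound[0] << endl;
    return 0;
}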

Extreme Data Compression Theory


I've had a theoretical idea for a while, and I spoke with a computer science professor of mine to confirm the possibility of it. The idea is a method of extreme data compression. Theoretically, a data compression program that takes up a large amount of disk space could be created, being large due to many mappings between raw data and its compressed equivalent. The larger such a program was, the more it could compress things. At the extreme end of this, a compressed file could simply contain the character "i", which would possibly decompress into the entire directory tree and files needed to play World of Warcraft or something like this, since the letter "i" could be mapped to the binary code for a rar file containing such data. I wouldn't mind programming a 4GB compression program if it meant 1TB of media would fit in 50GB of space. ;)

The only problem I'm thinking of is that if we are looking at compressing programs or movies, etc., we are looking at compressing sets of binary... which can only be compressed into... binary! In other words, we have a raw code, which consists of a number of raw characters, and there is then a compressed code, which may contain a number of characters representing the compressed equivalent. If the cardinality of the raw and compressed character sets is the same, we can't really have a mapping of raw -> compressed that saves us anything, unless there are unused or wasted combinations of characters in the raw set. It would need to be the case that the cardinality of available characters in the raw text/binary/whatever is LESS than the cardinality of the compressed representation. For example: English words -> binary. There are about 250,000 words in the English language, according to one source. Any of these words could be encoded using 18 bits in binary (which could encode 262,144 words). Since each letter of a word takes 8 bits to encode, this means that every word could be encoded in just over 2 characters' worth of bits. Since the average word is longer than 2 letters, we have compression!
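For concreteness, the back-of-the-envelope arithmetic above can be checked with a tiny sketch; the 250,000-word dictionary size and the five-letter average word length are assumed round figures, not measurements:

#include <cmath>
#include <iostream>

int main()
{
    const double dictionary_size = 250000.0;  // assumed number of English words
    const double avg_word_length = 5.0;       // assumed average word length in letters

    // Bits needed to give every dictionary word a distinct fixed-length code.
    double bits_per_word = std::ceil(std::log2(dictionary_size));  // 18 bits

    // Bits needed to spell the word out one 8-bit character at a time.
    double bits_spelled_out = avg_word_length * 8.0;               // 40 bits

    std::cout << "Dictionary code: " << bits_per_word << " bits per word\n";
    std::cout << "Spelled out:     " << bits_spelled_out << " bits per word\n";
    std::cout << "Ratio:           " << bits_per_word / bits_spelled_out << "\n";
    return 0;
}

With those assumed figures, the fixed 18-bit dictionary code comes to a bit under half the size of spelling each word out.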

In what ways can my idea be realized? I thought it was as simple as having a large mapping from one set to another, but as I stated above, I no longer think this is the case. The only thing I can think of is taking an original binary string and passing it through the Huffman algorithm maybe as many as 10 times, which should make the string of binary smaller and smaller, since we are mapping from binary to binary, and the compression is gained when we see lots of repetition!

Any insight on this is highly valued and appreciated. Thanks!

216.173.145.47 (talk) 19:01, 30 July 2013 (UTC)[reply]

Steve Baker contributed an excellent and detailed explanation in response to a similar question nearly five years ago. I believe we have discussed this more recently, as well; you're welcome to search our archives. The long-and-short of it: very high compression-ratios are achievable only when you place significant restrictions on the type of input data to the compression-algorithm. Nimur (talk) 19:09, 30 July 2013 (UTC)[reply]
Thanks for finding that! I was thinking of my old question soon as I read this one. Sidebar: how did you get on with the image registration of those World Cup pics I sent you? Haven't seen anything come of that. Zunaid 11:27, 31 July 2013 (UTC)[reply]
Indeed! Those world cup photos are still present on nimur-s10e, a small laptop computer whose screen physically snapped off a few years ago, impeding progress... let me see if I can make it boot, transfer the image data to a Mac whose screen hasn't snapped off yet, and give it another go, probably later this week! With luck, they'll be ready in time for Brazil! Nimur (talk) 12:23, 31 July 2013 (UTC)[reply]
Understanding the Pigeonhole principle is key to understanding why arbitrary compression is impossible. 88.112.41.6 (talk) 21:07, 30 July 2013 (UTC)[reply]
As was said above, for any given size of data there are always more possible data sets than possible results of compression, so you can't compress everything.
You may discover any (finite) number of rules to describe some patterns in data and implement those rules in a compression program. The program will then be extremely effective for data fitting the corresponding classes of patterns. You could even implement a dictionary containing the WoW environment as one of its entries, so that it could be restored from a short, one-byte index... However, the program will fail to compress data that don't fit any pattern. For every compression algorithm there exist data sets incompressible by that algorithm – see Kolmogorov complexity#Compression (also Talk:Kolmogorov complexity#Compression) and Incompressible string. --CiaPan (talk) 05:52, 31 July 2013 (UTC)[reply]
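The counting behind that pigeonhole argument can be made concrete with a short sketch (the block length n = 16 is just an arbitrary example):

#include <cstdint>
#include <iostream>

int main()
{
    const int n = 16;  // length of the inputs we would like to shorten, in bits

    // Number of distinct n-bit inputs.
    std::uint64_t inputs = std::uint64_t(1) << n;                  // 2^n

    // Number of distinct outputs strictly shorter than n bits:
    // 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1.
    std::uint64_t shorter_outputs = (std::uint64_t(1) << n) - 1;

    std::cout << inputs << " possible " << n << "-bit inputs, but only "
              << shorter_outputs << " strictly shorter outputs, so any\n"
              << "lossless compressor must leave at least one input no shorter."
              << std::endl;
    return 0;
}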

This makes sense, and I read the linked info and understand why the pigeonhole principle makes it impossible to compress arbitrary data. However, what about my Huffman algorithm idea? It would compress the data by finding patterns in, say, blocks of 8 bytes, then take the output and find patterns in that, and so on... Of course, for each encoding there would need to be a decode key, so the amount of saved data would hopefully be enough to more than offset a reasonably sized file header saying which symbols represent what. Wouldn't this work?

216.173.145.47 (talk) 19:07, 31 July 2013 (UTC)[reply]

You're assuming there will be patterns in those blocks - this is reasonable for structured data, but once you've Huffman-encoded it, the result will be much more random, and harder to compress. Theoretically, running it twice could shrink down some very repetitive strings, but probably not as well as a zip file would. For random (or already well-compressed) data, Huffman encoding won't do much at all - the odds of two 64-bit blocks matching are 1:18,446,744,073,709,551,616 (2^64). 209.131.76.183 (talk) 11:56, 1 August 2013 (UTC)[reply]
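One way to see why a second Huffman pass buys so little is to measure the byte-level entropy of the data before and after the first pass. Below is a minimal, hypothetical sketch of just that measurement (entropy_bits_per_byte is an illustrative helper, not part of any library); output near 8 bits per byte means a byte-oriented Huffman coder has essentially nothing left to remove:

#include <cmath>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Shannon entropy in bits per byte; 8.0 means the data look completely random
// at the byte level, so a byte-oriented Huffman pass cannot shrink them further.
double entropy_bits_per_byte(const std::vector<unsigned char>& data)
{
    if (data.empty()) return 0.0;
    std::size_t counts[256] = {0};
    for (unsigned char b : data) counts[b]++;
    double h = 0.0;
    for (std::size_t c : counts) {
        if (c == 0) continue;
        double p = double(c) / double(data.size());
        h -= p * std::log2(p);
    }
    return h;
}

int main()
{
    // Highly repetitive input: entropy far below 8 bits/byte, so Huffman helps a lot.
    std::string text(1000, 'a');
    text += std::string(10, 'b');
    std::vector<unsigned char> repetitive(text.begin(), text.end());
    std::cout << "Repetitive text: " << entropy_bits_per_byte(repetitive)
              << " bits/byte" << std::endl;

    // After a good first compression pass the output is close to 8 bits/byte,
    // which is why running the same coder over it again gains almost nothing.
    return 0;
}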

faster boots after unplugging?


I have a Windows 8 computer (came with Win 7) and restarting takes close to 10 minutes to get back to a functioning desktop. If I shut it down, unplug the power cord, plug it back in, and turn it on, it boots much, much faster. (I have done it this way only twice and I haven't timed it.) Could there be a reason for this? Bubba73 You talkin' to me? 19:13, 30 July 2013 (UTC)[reply]

There could be a reason, but you have not provided enough information for us to determine what that reason is. It is plausible that a boot from zero power is faster than a restore from a low-power hibernation state. This is probably not the intended performance; it may indicate that your computer's power-savings settings are misconfigured; or that you have some malfunctioning system component or software extension. But from the information provided, we can only speculate wildly. Nimur (talk) 21:22, 30 July 2013 (UTC)[reply]
I know that in reality this feature is enabled on random events like the one you are having. I'm thinking it may be the cause. 2A02:8422:1191:6E00:56E6:FCFF:FEDB:2BBA (talk) 23:38, 30 July 2013 (UTC)[reply]
I don't think that is it. I need to measure it, but it is like 2 minutes versus 10 minutes if it is from a power-off state. Bubba73 You talkin' to me? 02:56, 31 July 2013 (UTC)[reply]
It's possible that there are many processes etc. running in the background (perhaps even malicious ones...). When you do a restart, these are responsible for the slowdown (they have to finish what they're doing/close down); whereas when you unplug the computer, all processes and background stuff are terminated, and you are starting the computer with nothing (in theory) slowing it down... --Yellow1996 (talk) 01:02, 31 July 2013 (UTC)[reply]
It isn't in hibernation mode - I don't use that. I normally restart after I've been using it for a while. As for malicious software, I use Avast, Malware Bytes, and occasionally Microsoft Safety scanner, and they don't find anything. The processes that are starting up start up whether I'm restarting or booting cold, so I don't see what difference that makes. Bubba73 You talkin' to me? 02:30, 31 July 2013 (UTC)[reply]
It seems I've slightly misunderstood your question - all the slowdown is when you're booting up (getting to the desktop), not when it's shutting down (leaving the desktop)? I wasn't suggesting you were in hibernation mode; I mean processes running in the background while you're using the computer, which are still running when you hit restart... therefore potentially slowing down the restart process. But when you are booting after unplugging, these processes have been terminated and don't slow down the booting process. However, this only applies if you are noticing slowdown right after you initially hit restart (and are leaving the desktop). --Yellow1996 (talk) 16:41, 31 July 2013 (UTC)[reply]
The slowness is in booting from a restart. The process manager measured it as taking 11:34. If I shut it down (not necessary to unplug), I timed the boot at 1:05. Bubba73 You talkin' to me? 14:27, 1 August 2013 (UTC)[reply]


Some devices will keep state as long as they have standby power, but I can't think of anything that would cause that sort of slowdown on bootup. For example, I remember dial-up modems getting stuck in a state that persisted through a reboot, but could be cleared with a power cycle. The things that I can think of right now that have access to standby power are devices built into your motherboard, PCI/PCI Express devices, and USB devices plugged into "charging" ports that stay on even when the PC is off. I have seen some Intel BIOSes that have intermittent trouble enumerating certain USB devices at power-on, but I can't say if it has anything to do with the way the power was cycled. In that case, the PC stays stuck on the BIOS splash screen for a while. Where is the slow step in your bootup? 209.131.76.183 (talk) 11:41, 31 July 2013 (UTC)[reply]
The black screen is up for a long time and then it shows the desktop for a long time before it becomes fully active. I need to get some measurements. Bubba73 You talkin' to me? 16:06, 31 July 2013 (UTC)[reply]
This site [2] shows how to use Process Monitor to make a log of the bootup process. It may help figure out what is going slowly, but it does produce a ton of information (as in millions of events) you'll need to filter through. There used to be a better Microsoft tool specifically for boot profiling, but it is no longer supported. 209.131.76.183 (talk) 16:45, 31 July 2013 (UTC)[reply]
Thanks, I downloaded that and tried it. First I did a restart and it said that the reboot took 11:34. It saved 23 files, mostly 300-400MB each! Then I told it to log the bootup, shut down, unplugged, and restarted. However, it did not give me the message that it had created a boot-time log, even though I had told it to enable boot logging. But it only took about a minute to boot. I told it to do the boot logging again. This time I shut it down, but didn't unplug it. I timed it with my watch. Process Monitor again did not tell me that it had a boot log, but I timed it at 1:05 until the desktop was active. So the unplugging the power cord doesn't make the difference - but powering down does. A little over 1 minute for that method versus 11-1/2 minutes for a restart! Bubba73 You talkin' to me? 18:18, 31 July 2013 (UTC)[reply]

Update: Microsoft Answers is helping me with the problem. In the event log, the check for a Microsoft Office license is in there hundreds of times in a row. That might be the problem. Bubba73 You talkin' to me? 14:43, 1 August 2013 (UTC)[reply]

I'd think that was definitely it! So something's gone awry and it is checking for a license way more times than it needs to - but how to stop it from happening? A search turned up nothing about this problem (and I've never seen it myself) so it's possible this is unique to your computer; contacting Microsoft directly (because - after all - it is their software) is probably your best bet. --Yellow1996 (talk) 16:24, 1 August 2013 (UTC)[reply]
Maybe, although it seems odd that it would change depending on whether you shut down or reboot. It's too bad that it isn't logging the boot process. I would try disconnecting any unnecessary hardware in order to rule it out. It would also be interesting to see if the slowdown happens when booting into safe mode. 209.131.76.183 (talk) 17:42, 1 August 2013 (UTC)[reply]
There are about 260 calls to that, but they are spread over 32 seconds, and the boot seems to be taking a lot more than 32 seconds. Maybe running 260 processes is what takes the time. It also does about 60 of "PM MsiInstaller". Bubba73 You talkin' to me? 00:09, 2 August 2013 (UTC)[reply]
IIRC the license checks need an internet connection in order to go through; so when the computer is booting up from a turned-off state, it's possible it takes a little while for the connection to be established, whereas when the computer is rebooted, the connection is maintained the entire time. --Yellow1996 (talk) 01:00, 2 August 2013 (UTC)[reply]
It does look like it is trying to make a connection. However, that seems to be the opposite of what you would expect. The cold boot is fast - the boot from a restart is horribly slow. Bubba73 You talkin' to me? 02:22, 2 August 2013 (UTC)[reply]
Actually, that's exactly what I would expect. The cold boot is fast because the internet connection is severed when the computer is turned off, and has to be reconnected when you turn it on. Since there isn't a connection, Office can't start spamming license requests. However, when you do a restart - the connection is maintained, and while your computer is coming to the desktop, Office has a connection to the internet and can therefore spam those requests; slowing down your system. --Yellow1996 (talk) 16:25, 2 August 2013 (UTC)[reply]
But I was thinking that it was probably trying to connect. If it is connected to the internet, there wouldn't be any need for it to send 250-260 requests. If it isn't connected, then it might. The cold boot seemed to send only six requests. Why would the software have to check the license over 250 times in a 32-second period? Bubba73 You talkin' to me? 17:44, 2 August 2013 (UTC)[reply]
That's true - I never thought about it like that. I have no idea why it would send 250+ requests though; that's extremely excessive. Is this just the trialware office that comes with new computers? If so (and if you don't plan on using office - I myself get by with only Notepad and MSW Word Processor, but that's because I have no use for software like Excel and Powerpoint. Plus, there are many free alternatives to office's programs) then you could simply remove the trialware from your system. If you do plan on buying and registering MS Office, then I'd contact Microsoft to see what's happening here. --Yellow1996 (talk) 00:51, 3 August 2013 (UTC)[reply]
It is the paid, registered version of the home version of MS Office. After my initial question to Microsoft Answers, they asked me to look at the event log. I told them about what I found, but they haven't replied to that (and it has been over 24 hours). Bubba73 You talkin' to me? 15:07, 3 August 2013 (UTC)[reply]
That's unfortunate. In that case, then I suggest you contact MS directly (as I believe - correct me if I'm wrong - that MS Answers is run by signed-up volunteers?) perhaps via e-mail. Since you've already registered and paid for Office this should not be happening and is a serious bug Microsoft needs to solve. --Yellow1996 (talk) 22:44, 3 August 2013 (UTC)[reply]
MS Answers is addressing the problem and I think the person is a MS employee. I did another test and looked at the event log. This time the restart took 7 minutes instead of 11-1/2 and it did not show 250+ calls to verify the Office license. Bubba73 You talkin' to me? 16:36, 4 August 2013 (UTC)[reply]
Okay; well, if anyone knows how to fix this problem, it is definitely Microsoft. Very strange about the fluctuation in time - which further outlines the unpredictable nature of this bug. I wish you the best of luck. :) --Yellow1996 (talk) 16:50, 4 August 2013 (UTC)[reply]
And how much time do I want to invest in it? If I remember to power down, it doesn't take too much extra time. I could spend hours to save 30 seconds each time I boot. Bubba73 You talkin' to me? 18:58, 4 August 2013 (UTC)[reply]
True enough. As long as you remember to power down instead of restarting (which - really - there isn't much difference between) you won't have to worry about it anymore. In my reply above I had missed the fact that the license calls had ceased - I'm glad that's sorted itself out; though that sure was strange, strange behaviour! --Yellow1996 (talk) 01:21, 5 August 2013 (UTC)[reply]
The license calls were gone from the second test and it booted in 7+ minutes instead of 11-1/2. But a cold boot is 1:05. Many things I install ask to restart the computer. I have to remember to shut down instead. I plan to upgrade from W8 to 8.1 when it comes out, and perhaps that will help. Bubba73 You talkin' to me? 01:28, 5 August 2013 (UTC)[reply]

VPS and tunneling streaming video


I was reading an interesting article about how YouTube and Netflix streams can seem really bad even on high-bandwidth broadband connections [3]. In the reader comments, a reader mentioned he was able to "tunnel YouTube to a VPS" and get HD streams without any problems [4]. I hadn't even heard of a VPS until this. Besides subscribing to the VPS service, how do you set it up to tunnel streaming video to bypass the ISP's video throttling? --157.254.210.11 (talk) 23:08, 30 July 2013 (UTC)[reply]