Wikipedia:Reference desk/Archives/Computing/2016 February 22
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
February 22
Thomas Edison
Hello, I need a reply as soon as possible; this is for a school assignment and I've been searching for an hour now for the answer, so I decided to make a post. The question I need help with concerns the famous inventor Thomas Edison: how is this person's name now used to describe or measure something in electronics? — Preceding unsigned comment added by Buck Dunford (talk • contribs) 03:21, 22 February 2016 (UTC)
- Many light bulbs have an Edison base (that's the wide screw threads at the bottom). Note that while our article features incandescent light bulb pics, many newer CFL and LED lights continue to use the same base. StuRat (talk) 03:28, 22 February 2016 (UTC)
- Do you refer to the Edison effect? StuRat (talk) 04:02, 22 February 2016 (UTC)
Programmable game controller
Years ago I had a Microsoft SideWinder Strategic Commander game controller. Does anyone know if there's anything similar on the market nowadays? I can't find anything obvious out there; everything seems to be in programmable keyboards nowadays. I need it to be Windows 7/10 compatible. -mattbuck (Talk) 15:12, 22 February 2016 (UTC)
- The Steam Controller is very configurable (in reality the configuration is done by its driver, but that was probably the case for the SideWinder too): one can map different buttons to different actions, and map the "joysticks", touchpad, triggers etc. to different actions as well. An example config is here. I don't know, however, if it's possible to configure one button to generate a sequence of keystrokes. I should say I've personally not used a Steam Controller, but every review I've seen of them has been positive. -- Finlay McWalterᚠTalk 15:54, 22 February 2016 (UTC)
- On looking further, it seems the Steam Controller does not have macro (key-sequence) support, but that people have had success using a custom key on the controller to trigger a macro defined in AutoHotkey. -- Finlay McWalterᚠTalk 16:02, 22 February 2016 (UTC)
- It may be useful to search for macro controllers, if you’re looking to be able to do something like map a single button-press to a sequence of commands. —67.14.236.50 (talk) 23:01, 27 February 2016 (UTC)
Transition from http to https
So I've just switched my personal websites over from http to https using the "Let's Encrypt" free service. They seem to be working just fine. I'm wondering though about URLs I have embedded in HTML, JavaScript and PHP code... Is it best practice these days to request https from other sites that I link to? All of them? Do I have to check each link manually to see which support it and which don't? I know that some browsers rewrite URLs that the user types in - but what about those embedded in other places? SteveBaker (talk) 15:55, 22 February 2016 (UTC)
- This may have nothing to do with your question, but this is what your question made me think about... I just did a test in Google Chrome and in Firefox. I made a page that does an Ajax request to a page using http instead of https. I accessed the main page using https and let the Ajax run. On the server, the Ajax request is logged in access_log, not in ssl_access_log. So, I received it via http, not https (as I stated in the JavaScript). However, neither browser threw a warning. I hoped it would be like when you have a secure page that includes an image using http: you get a warning that insecure content is being included in the secure page. Instead, the clients are not being told that unsecured requests are being made via Ajax in their secure page. It is up to the developer to ensure that any "http" included in the Ajax URL is changed to "https". Better, don't use an absolute URL at all: a relative URL will preserve the https scheme of the page. 209.149.114.211 (talk) 16:27, 22 February 2016 (UTC)
- Yeah - I already use relative URLs where I can - but it's not always possible to keep everything on the same server. But suppose I say something in HTML like "You can find out more about this on <A HREF="http://wikipedia.org">Wikipedia</A>"? Should that be 'https'? (Wikipedia supports https - but what about a site like Slashdot that (currently) does not?) SteveBaker (talk) 16:51, 22 February 2016 (UTC)
- A relative URL without a scheme (http: or https:) is valid, per RFC 3986, "Uniform Resource Identifier (URI): Generic Syntax", Section 4.2. It looks like this: <img src="//domain.com/img/logo.png"/>. When making external links, try HTTPS first; if that does not work, then link to the HTTP version. Using HTTPS Everywhere may be useful. The Quixotic Potato (talk) 17:30, 22 February 2016 (UTC)
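- To triage a whole list of linked sites in bulk, something like the following shell sketch could work (assuming curl is installed; links.txt is a hypothetical file with one bare domain per line):
# For each domain, see whether the HTTPS version answers at all;
# curl exits non-zero on connection or certificate failures, so a
# site with a broken certificate correctly falls into the HTTP bucket.
while read -r domain; do
    if curl -s --head --max-time 5 -o /dev/null "https://$domain/"; then
        echo "$domain: https looks usable"
    else
        echo "$domain: https failed, keep http"
    fi
done < links.txt
Note that curl still exits zero for HTTP-level errors such as 404, so this only tells you the TLS side works, not that the linked page exists.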
- I don't see the benefit of connecting to Wikipedia or Slashdot using HTTPS. Yes - it encrypts your traffic. No - it does not hide that you are going to Wikipedia or Slashdot. Wikipedia automatically redirects to HTTPS if you sign in. So, if you link to it using HTTP, it automatically switches to HTTPS. The same with Facebook. So, there is no point in using HTTPS when HTTP will automatically redirect as necessary. It won't throw a warning on the client, because a link target isn't content included in the page source (unlike an embedded image).
- On a side note... I avoid using HTTPS at one place where I work. They have idiots in charge of IT. Instead of security, they just purchase and run a deep-packet inspection tool that automatically decrypts all HTTPS traffic. The client gets a fake certificate and has to accept it to continue. I don't want to fill my browser with fake certificates, so I use HTTP. 209.149.114.211 (talk) 18:51, 22 February 2016 (UTC)
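- The redirect behaviour described above is easy to check from a shell (a quick sketch, assuming curl is available):
# Request the HTTP version and look only at the status line and the
# Location header; Wikipedia answers with a redirect to the https:// URL.
curl -sI http://en.wikipedia.org/wiki/Main_Page | grep -iE '^(HTTP|Location)'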
- I know there are some issues for some people with https - I'm not shutting off http access. But there is a strong trend for browsers to search for https sites first - so there is an incentive to support both.
- The idea of https is not so much that the content arrives encrypted - or that you are somehow hidden from tracking - but that it protects against man-in-the-middle attacks. Your IT dept is (in effect) performing a man-in-the-middle attack on you - so it's good that you're being warned! SteveBaker (talk) 20:27, 22 February 2016 (UTC)
- Note that wikimedia sites, or at least wikipedia and commons, have used HTTPS by default for all users since last year [1] [2]. Apparently their blog still doesn't (although it does support HTTPS). As The Quixotic Potato mentioned, protocol-relative links are valid, and are supported by MediaWiki. Now that wikipedia uses HTTPS by default, I use protocol-relative links when HTTPS works properly on the site (and also HTTP, but very few sites don't at least partially support HTTP), under the assumption that anyone not using HTTPS must be doing it intentionally for some reason and may want to continue that on the other site. IMO it would make sense to do the same for your website for sites that support both HTTPS and HTTP. I would suggest the same even for sites like slashdot which only partially support HTTPS (it will redirect you to HTTP). However, it may pay to test sites using a proxy or Tor or something. I've found it's quite common for sites using CloudFlare or Akamai or other CDNs to be misconfigured and have certificate problems with HTTPS. This has included at least one US government website, although I think that was over 2 years ago. I sometimes suspect, but I've never really looked into it, that this is partially because I'm in NZ. Of course some of them, e.g. I think this applies to TechCrunch, are probably just borked for everyone. Nil Einne (talk) 16:40, 23 February 2016 (UTC)
- Hmmm - interesting. I'm also concerned that WinXP/IE8 and WinXP/Chrome users may have trouble with HTTPS. I've put a .htaccess file in place that redirects the https: site to http: when it detects a WinXP agent...but again, I worry that other https sites that I link to may not do that. SteveBaker (talk) 16:57, 23 February 2016 (UTC)
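- A hypothetical .htaccess along those lines might look roughly like this (a sketch only, assuming Apache with mod_rewrite; "Windows NT 5.1" is the user-agent token WinXP browsers send):
RewriteEngine On
# Send WinXP user agents arriving over HTTPS back to plain HTTP
RewriteCond %{HTTPS} on
RewriteCond %{HTTP_USER_AGENT} "Windows NT 5\.1"
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=302,L]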
Listing duplicated filenames
On Fedora 20 Linux, I have a directory containing subdirectories and sub-subdirectories. These sub-subdirectories contain files whose names follow a regular pattern. Some of the names might be duplicated in separate directories. The directory structure is rigid, i.e. the files always reside at the same depth under the main directory. There are about a quarter million such files.
How do I get a listing of all the filenames that appear in more than one sub-subdirectory, and the directories they appear in? JIP | Talk 16:11, 22 February 2016 (UTC)
- Well, if you do an "ls -R1" that'll get you a recursive list of all of the files - without the preceding pathnames - one per line. The directory names appear as header lines ending in a colon, so you can pipe that through grep -v ':$' to get rid of them. Pipe the results of that through 'sort' to get the identical names in consecutive order and then save the results into a file. Next, push the resulting file through 'uniq' to reduce it to a list without duplicates and save that to a second file. Then you can 'diff' the sorted list and the sorted/uniq'ed list to get a list of all of the duplicated files - which you'll want to push through 'uniq' again so you only have one copy of each filename.
- That process gets you a list of duplicated filenames, one per line, with no directory names.
- OK - so now you need to know which directories those are in. The 'find' tool is a natural for doing that - so you'll want to use something like 'sed' to take every line of the file we just made and turn it from whatever into:
- find . -name whatever -print
- ...so you now have a script with a 'find' command for each duplicated file. Then you can run that as a script to get a complete list of every duplicated filename and all of the paths that it occurs in.
- If you might have spaces or weird characters in your file names - or if there are hidden files, devices, links, symlinks and other monstrosities that you need to be careful of...you'll need to think this through more carefully - but that's an outline of a shell-based means to do it.
- Like everything you can do in shell - there are probably a dozen different ways to do it. SteveBaker (talk) 16:41, 22 February 2016 (UTC)
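- Spelling that outline out concretely (a sketch only, with the caveats above about spaces and odd characters; assumes GNU coreutils):
# 1. all filenames, one per line; directory headers (ending in ':'),
#    directory entries (-p appends '/' to them) and blank lines dropped
ls -R1p | grep -v ':$' | grep -v '/$' | grep -v '^$' | sort > all.txt
# 2. the same list with duplicates collapsed
uniq all.txt > unique.txt
# 3. names occurring more than once show up as lines diff marks with '<'
diff all.txt unique.txt | grep '^<' | sed 's/^< //' | uniq > dups.txt
# 4. turn each duplicated name into a find command, then run the script
sed 's/.*/find . -name "&" -print/' dups.txt > finddups.sh
sh finddups.sh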
- (ec) I don't know of a single command that does this. Just a train of thought... You can use ls -LR . to get a list of all files under the current directory (change . to the directory you are interested in). You can sort that list and then use uniq -c to get a list of unique names with a count. You can use grep -v "^\s*1\s" to omit any entry that has a count of 1. Then, you can use sed "s/^\s*[0-9]*\s*//" to remove the count from those that remain. Then, you can loop over the result list with locate (assuming you've run updatedb recently). This will create a list of duplicated filenames with full paths to each one:
ls -LR . | sort | uniq -c | grep -v "^\s*1\s" | sed "s/^\s*[0-9]*\s*//" | while read x; do locate -bP '\\'$x; done
- That is not "easy". However, it is what came to me as I went from "I have a list of files" to "I want a list of the locations of each duplicate file." I'm sure someone else will chime in with a command that does exactly what you want without all the extra work. 209.149.114.211 (talk) 16:48, 22 February 2016 (UTC)
- I think we are talking about essentially the same approach here.
- One thing you CAN'T do is to use wildcards to find files - with millions of files, you'll overflow the maximum length of the command line...and in any case, it would be insanely slow. SteveBaker (talk) 16:55, 22 February 2016 (UTC)
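- The limit in question is the kernel's cap on the combined size of the argument list and environment passed to a program; on Linux you can check it with getconf:
getconf ARG_MAX   # maximum bytes of argv + environment for exec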
- →List of file comparison tools --Hans Haase (有问题吗) 18:14, 22 February 2016 (UTC)
- I believe that most (if not all) of those tools are for comparing the contents of files - not the names of files in different directories - which is what is being asked here. It's possible that one or two of the tools might do what our OP requests - but you're not going to discover that from List of file comparison tools. SteveBaker (talk) 20:22, 22 February 2016 (UTC)
- I would do something like this
find . -type f | perl -e 'while(<>){chomp; $p=$_; ($f)=/.*\/(.*)/; push @{$d{$f}}, $p; } foreach $f (keys %d){ print(join(" ", @{$d{$f}}, "\n")) if @{$d{$f}} > 1 }'
If your filenames can contain spaces you may want to replace the separator in the join statement with some other character, just for readability. Mnudelman (talk) 20:07, 22 February 2016 (UTC)
- I'm a little unclear on your question. Do you actually care about which directories the files are in? In other words if you have:
top1          top2
 |    \         |
sub1   sub2    sub1
 |      |       |  \
foo    bar     bar  foo
- ...do you want both "foo" and "bar" printed, or only "foo"? If it's the first, you're just asking for all files that have the same name, which is trivial. --71.119.131.184 (talk) 21:32, 22 February 2016 (UTC)
- Yes, I want both "foo" and "bar" printed. JIP | Talk 05:46, 23 February 2016 (UTC)
If you want to do it all in shell scripting, you could do it by first using sort | uniq -d to pick out the repeated filenames, then find to report the instances of each one:
find . -type f | sed 's:.*/::' | sort | uniq -d |
while IFS="" read basename
do
    echo "Repeated name $basename:"
    find . -type f -name "$basename"
    echo
done
The IFS="" bit may be obscure: it's necessary in case any of the filenames start or end with whitespace, because read was not originally intended for this sort of use and strips leading and trailing whitespace from the line it reads. Of course this approach is potentially inefficient since it will use find repeatedly, but that may not be important. --69.159.9.222 (talk) 06:20, 23 February 2016 (UTC)
- That script will still fail on filenames containing *?[ (special to -name) or \ (special to read) or newlines. It's O(n²) in the worst case, as you pointed out, which could be a problem when n ≈ 250000. sed, sort and uniq use the current locale, which could cause problems.
- I don't understand why people go to all the effort of trying to get bash to do anything reliably. There are much better programming languages out there. Here's a solution in Python:
import collections, os
dirs_by_name = collections.defaultdict(list)
for dir, subdirs, files in os.walk('.'):
    for file in files:
        dirs_by_name[file].append(dir)
for file, dirs in dirs_by_name.items():
    if len(dirs) > 1:
        print('%r is in %r' % (file, dirs))
- -- BenRG (talk) 09:36, 23 February 2016 (UTC)
- Yes, those are good points. I like to say that if you want to make a shell script reliable in the face of weird filenames then you should be writing it in Perl. However, sometimes you know the filenames don't have "difficult" characters and it's just easier to use the shell. In this case I wanted to point out the convenience of the sort | uniq combination. --69.159.61.172 (talk) 09:23, 25 February 2016 (UTC)
- This Python program worked and gave me a list of quite a lot of files, which appear to be correct so far (I didn't check quite thoroughly yet). But it seems to traverse the directory structure as deep as it can. I want it to only go two subdirectories deep. How can I do that? I don't understand much of Python. JIP | Talk 19:47, 23 February 2016 (UTC)
- os.walk() isn't well-suited to that: you could count the slashes in the directory names, but it might be nicer to just do it manually:
from os import path
def srch(d, lev):
    for f in os.listdir(d):
        df = path.join(d, f)
        if lev:
            if path.isdir(df): srch(df, lev-1)
        elif path.isfile(df): dirs_by_name[f].append(d)
srch(".", 2)
- This replaces just the walk loop in BenRG's code. If you have symlinks, be warned that this will recurse through them (albeit only to 2 levels!). --Tardis (talk) 23:18, 23 February 2016 (UTC)
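- For what it's worth, the depth restriction can also be imposed at the shell level with GNU find's -mindepth/-maxdepth tests, feeding the sort | uniq -d idea from earlier in the thread (a sketch with the same caveats about odd characters in names):
# files exactly two subdirectory levels below the top (depth 3 overall)
find . -mindepth 3 -maxdepth 3 -type f | sed 's:.*/::' | sort | uniq -d |
while IFS="" read name
do
    echo "Repeated name $name:"
    find . -mindepth 3 -maxdepth 3 -type f -name "$name"
done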
Google date
Please can you tell me how to find out when the "Google Street View" photographic car will be visiting a certain area? I do NOT wish to be photographed, so I want to stay indoors on the day it is due in my area. Thank you. — Preceding unsigned comment added by Haridtanton (talk • contribs) 20:27, 22 February 2016 (UTC)
- They give rather general info on this page. It is important to note that Google doesn't completely control every camera used to get images for Google Maps. From that page, you will see links to information about how the images are obtained. You should also note that Google has put a lot of effort into blurring out the faces of people or, often, completely removing people from images. The next goal is to completely remove cars from the images. The aim is to remove temporary obstacles to make the images better, not to snap a photo of you standing on your front porch in your bathrobe. 209.149.114.211 (talk) 20:43, 22 February 2016 (UTC)
- Add to this that you can ask Google to remove pictures from their service where you or your house are depicted. --Scicurious (talk) 22:28, 22 February 2016 (UTC)
Windows 7 computer crashed
A week ago my Dell rackmount R5400 crashed. The OS is Windows 7. Every user's worst nightmare. I do have an external Toshiba drive with ample space for backups, and I have created system repair disks; the last time I created one was last October. Last night a friend of mine, a very competent professional software developer, and I decided to try to restore the system. We went through some motions: changed the boot from the system drive to the onboard CD, etc. It seemed everything worked until it began "restoring" and all of a sudden a message appeared: "No disk that can be used for recovering the system disk can be found." That was an affront. We saw a list of dates when the system image was created, the last time on Feb 15th, just a week ago. There were others, with earlier dates, as well. What is the problem? Googling showed that it is a common situation facing folks who try to restore their systems. My rackmount computer is rather old although functionally efficient. I doubt I still have the original Windows restoration disk, and after so many updates over the years it would be useless anyway. What options do I have?
One web site suggested contacting Microsoft for a bootable disk, but they are now promoting Win10 nonstop, so I doubt I will get much help. I will appreciate any suggestions.
Thanks, - Alex --AboutFace 22 (talk) 22:23, 22 February 2016 (UTC)
- The Dell™ Latitude™ E5400 is a laptop; do you mean the Dell™ Precision R5400? There are some suggestions on various sites saying that this problem occurs when the target partition is smaller than the original source partition: the target DISK size must be at least as large as the source DISK, regardless of the size of the PARTITION you're trying to restore. Using Diskpart also seems to be useful. See here. I would ask Dell, not Microsoft. Try this. My experience with Microsoft support is that they are slow. You may be able to boot from USB if you enable legacy USB emulation in the BIOS. Does the computer still have the restore partition? If so, press F8 at boot and select Repair The Computer. I also found this and this and that. The Quixotic Potato (talk) 00:47, 23 February 2016 (UTC)
Thank you very much, @The Quixotic Potato. It is plenty. We are aware of Diskpart, but it seems to be a command-line utility, so the question is how to get there, I mean to the cmd prompt. Quite possibly your other pointers will lead to a solution; they will have to be studied. You are correct, it is an R5400, not an E5400 - a rather big machine. It might take me a couple of days to digest it all. Many thanks, --AboutFace 22 (talk) 14:04, 23 February 2016 (UTC)
- @AboutFace 22: In my experience the Dell Chat Support is quite helpful, BTW. Unfortunately it requires a Dell Service Tag or Dell Express Service Code, and I do not have one of those because all my computers are custom-made, but maybe you have a Dell Service Tag on your machine. This page explains how to find the service tag (it's usually a small sticker on the back of the machine). The Quixotic Potato (talk) 15:50, 23 February 2016 (UTC)
@The Quixotic Potato, I do have service tags. Most of my information is in a database which is not currently accessible on this computer, but I think the service tag might also be attached to the machine as a sticker. I will explore this route. Thank you very much. --AboutFace 22 (talk) 14:45, 24 February 2016 (UTC)
- I hope you've solved your problem. If not, then maybe your friend can burn a liveCD, which can be used to retrieve files from the system (e.g. the database that contains the service tag). 10 years ago I used Knoppix exclusively for this purpose. See also: Ultimate Boot CD for Windows & BartPE & How to create a Windows 7 liveCD. The Quixotic Potato (talk) 02:47, 27 February 2016 (UTC)