Wikipedia:Reference desk/Archives/Computing/2012 June 9

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 9

Firefox's recent "File + New Tab" addition

In my last update of Firefox (13.0), they seem to be trying to copy Opera's selection screen (which I like) where you can select from 9 web sites. Unfortunately, it doesn't seem to work well at all:

1) There doesn't appear to be an option to manually change the selections; they just populate with random recently visited pages, including many different pages from the same site.

2) There is an X to remove unwanted selections, but they just repopulate the page when I next visit those sites.

3) There's also a push-pin which supposedly makes the selection a permanent addition to the screen, but doesn't seem to work. The next time I restart Firefox, I get a new random assortment of selections. And, sometimes when I use the push-pin, that selection deletes instead, as if I had hit the X.

4) Each selection has text only, no image, and the text is the title of that web page, with no option to change it.

So, is this new feature just "not ready for prime time", or is this really how they designed it to work ? StuRat (talk) 00:32, 9 June 2012 (UTC)[reply]

Wouldn't surprise me; even the extensions for Firefox for this aren't very good at it. about:config → browser.newtab.url ¦ Reisio (talk) 00:36, 9 June 2012 (UTC)[reply]
Kind of a nit-picky point but I think Safari was the first to come out with this feature. It's what I use on my Macs at home and it works perfectly, i.e. just how you seem to be expecting the FF version to work. So, if you want to use Safari, there it is. Dismas|(talk) 17:44, 9 June 2012 (UTC)[reply]
Safari definitely gets this feature right, but has other problems. ¦ Reisio (talk) 19:32, 9 June 2012 (UTC)[reply]
Sounds like we need to create a Frankenbrowser, with this feature from Safari and the best features from the other browsers, too. StuRat (talk) 05:30, 11 June 2012 (UTC)[reply]

Microsoft Windows 7 - Microsoft Windows XP.

Hi there, Microsoft Windows 7 and Microsoft Windows XP. Consistently use Windows 7 to store photos and now find that, like rabbits, they keep on duplicating; this seems to be happening on the older XP also. Advised that if this continues it will eventually crash the computer! Have searched Microsoft for any downloads but find I am referred to another company which I suspect uses a Corporation in India, and my experience tells me to keep well away. Is there any free software that will fix this problem - or software at a one-off cost - or any other solution? How binding is the agreement you agree to online to access the supposedly free download product? Clicked off and uninstalled the product when the monthly cost came up. Eyesight does not allow me to scrutinize online contracts. Probably too old and old fashioned, but should not any company provide immediate free downloads to fix problems on their own product after you purchase it? Would it be a temporary solution to copy all pictures onto a USB stick and then delete all pictures until the problem is solved? Have all the latest Microsoft updates set to download automatically, but apparently not one for this problem!!

Help!!!!

Hamish 84.Hamish84 (talk) 03:44, 9 June 2012 (UTC)[reply]

Are you using some type of photo management software ? If so, then, yes, by all means, uninstall it. Just put your photos in regular old computer folders. I agree that software support is pathetic. If you can't read items on the screen, make the text bigger (usually with CTRL +, and then make it smaller with CTRL -). StuRat (talk) 04:05, 9 June 2012 (UTC)[reply]
Your greater problem may be what that company could be doing with the other data on both your computers. If you do remove your pics to a USB stick and delete the offending program, then be sure to run a comprehensive security scan over both machines and your USB stick before you proceed any further. FYI, I use Google's Picasa 3 for my photo management software. It's free and easy. Benyoch ...Don't panic! Don't panic!... (talk) 04:24, 9 June 2012 (UTC)[reply]
I am not convinced there is a program deliberately duplicating your photos. It is quite possibly something in the way you access your photos that is responsible. How are your photos stored?
If you are storing them on your own PC, maybe using software that came with your camera, then different things can happen depending on exactly which method you choose to copy the photos to the PC; for example it is quite possible that when you connect your camera to the PC to copy any recently taken photos, all the photos in the camera are copied again to a new folder.
On the other hand, if you are using an online service like Picasa or Flickr, whenever you access the photo gallery, your internet browser will keep a copy in the cache. If you then search your computer for all images, you will find multiple copies of your photos - one where you expect and others in folders you have never heard of.
Unfortunately, this is often something which is hard to analyse without actually seeing you use your computer. Maybe you have a friend or relative who is familiar with the workings of Windows and who could sit with you for a while to see how you access your photos. Astronaut (talk) 13:30, 9 June 2012 (UTC)[reply]
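A side note on checking the symptom itself: before deleting anything, it can be worth confirming whether the "extra" pictures are genuine byte-for-byte copies or just cached/resized versions. Below is a minimal Python sketch of the idea (it assumes Python 3 is installed; the folder path is only a hypothetical example and should be pointed at your own picture folder).

  # Minimal sketch: list byte-for-byte duplicate files under one folder.
  import hashlib
  import os
  from collections import defaultdict

  root = r"C:\Users\YourName\Pictures"  # example path only - adjust to suit
  seen = defaultdict(list)              # maps content hash -> list of file paths

  for dirpath, _dirs, files in os.walk(root):
      for name in files:
          path = os.path.join(dirpath, name)
          with open(path, "rb") as f:
              digest = hashlib.sha1(f.read()).hexdigest()
          seen[digest].append(path)

  for digest, paths in seen.items():
      if len(paths) > 1:                # the same bytes stored more than once
          print("Duplicate copies:")
          for p in paths:
              print("  " + p)

If this reports many groups of identical files, something (camera-transfer software, a gallery program, or manual copying) really is making full copies; if it reports none, the apparent duplicates are more likely browser-cache or thumbnail versions, as described above.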

pdf slow - images

Hello. I've begun to notice that PDFs of scanned documents are unnaturally slow - for example http://archive.org/details/gentshistoryhul01ohlsgoog (or any other from this site, or a downloaded pdf from Google Books)

Using the download link page http://ia600404.us.archive.org/13/items/gentshistoryhul01ohlsgoog/ - comparing the .djvu, .pdf, and jp2.zip files after decompressing (using Adobe, Caminova, and Picasa/ACDSee to view).

On my (admittedly slow XP 32-bit) machine (using single-page, non-scrolling view) it takes nearly a second (or a good fraction of one) to get the next page displayed after pressing PgDn. On the .djvu or .jpg file it is nearly instantaneous. To add insult to injury, the .pdf file doesn't appear to contain any text data - so effectively it's just a slideshow. Task Manager shows the .pdf uses much more CPU - the situation is just the same with Google Chrome's built-in PDF viewer. (It's clearly not hard-disk bandwidth, as the jpg slideshow is far faster.)

So - is there a simple explanation for this? Are there better PDF viewers? Why is PDF such a dog for images? Oranjblud (talk) 13:43, 9 June 2012 (UTC)[reply]

There may be an argument that PDFs necessarily encode images in a slow way (they are usually compressed page by page and, to my knowledge, there isn't any caching), but it almost certainly depends mostly on the PDF viewer in question. Adobe Reader is notorious bloatware; try Foxit Reader and see if it does better for you. There are a number of alternatives to Adobe in any case. --Mr.98 (talk) 14:28, 9 June 2012 (UTC)[reply]
If I look at the DjVu and the PDF on that archive.org page you linked (where the two are almost the same in size, so the images are surely the same resolution) with evince the pdf and djvu perform identically. There's nothing intrinsically poor about PDF, and while there are sometimes ones around that have been strangely constructed, or have over-detailed scans, this case isn't such an example. So the poorness you're experiencing can, I believe, be ascribed to the crummy Reader. There's no text in the PDF because none was put there by the scanning program. -- Finlay McWalterTalk 14:53, 9 June 2012 (UTC)[reply]
I will try those and see how they work. The aside I made about text was that there is no OCR text overlay in the pdf to slow things down - in fact, there is one in the djvu and it performs better. As far as I know the images are the same, as you guess - the JP2000 file contains the original scans, I believe. Oranjblud (talk) 15:04, 9 June 2012 (UTC)[reply]
I see what you mean about the text. It may be, incidentally, that Caminova and ACDSee use Adobe's .dll to decode and render the PDF, which would explain why they work as badly as it. That Chrome does too is more surprising, as Chrome has its own decode and render (which, as far as I can tell, turns a PDF into the same data structure as Chrome uses to represent the DOM of an html+css document). One thought - you may have some security software which is checking the PDF (as PDFs can contain embedded scripts, which may, at least in theory, have malicious use). So if the security software is reading the PDFs but not the DjVus, that would be an obvious slowdown (but I'd expect that on open, not on page-down). -- Finlay McWalterTalk 15:19, 9 June 2012 (UTC)[reply]
(I used ACDSee for the JP2000s, and Caminova for the .djvu of this one - both performed well on those file types - afaik they don't display pdfs)
I checked with Evince and Foxit - it's clear that both are quicker (or at least 'snappier') on smaller pdfs. Evince appears (?) to do extra near-page caching as well.
However - when moving to bigger documents, e.g. http://archive.org/details/cu31924013922905 (1000 pages, 100 MB) - both are as bad as Adobe; Evince is worse (tens of seconds of pause). Foxit appears similar. Both have a second or more of delay when jumping to a new page somewhere in the book. But when I try the djvu version its performance is good (1 sec initial program startup, a fraction of a second to jump). This is pretty much what I'd expect, as the book is essentially a slideshow of images.
It looks like the issue is intrinsic to the PDF file type - I can't imagine what they've done to the implementation of images to make it so slow. Does anyone know why this would be, or maybe suggest another experiment? Oranjblud (talk) 15:58, 9 June 2012 (UTC)[reply]
(more) (It's not indexing or an antivirus scan that is causing the slowdown - checked.) I think the cause might be the image decompression algorithm - I get heavy (i.e. full) CPU when moving to a page. The CPU then continues at full for a good while after - I discovered that it was fine-tuning the image (only noticeable if I jump to a page at 300% magnification). I would assume that it has multi-detail-level images and it progressively applies the higher details as they load - however, my suspicion is that it is post-processing the image, i.e. applying sharpness and maybe contrast filters (this would explain the delay). The amount of CPU suggests that it is not just adding a final dither to the image, but that it is performing post-processing every time - does anyone know anything about pdfs doing this (and maybe, if true, a way to stop this nonsense)? Oranjblud (talk) 16:36, 9 June 2012 (UTC)[reply]
I've noticed this also and wondered about it. For this particular book, on the laptop I happen to be using right now, I get around 1-2 pages per second viewing the PDF version with Foxit, Sumatra or Evince, 4-5 pages per second viewing the DjVu version with Evince, and about 6 pages per second with WinDjVu. (Note that this contradicts Finlay McWalter's experience with Evince). I've found that to be typical, at least for books from the Internet Archive.
Most of the pages of gentshistoryhul01ohlsgoog.pdf are compressed with JBIG2 (about which I know a bit) and DjVu uses a closely related format called JB2 (about which I know nothing). Unless there's something in JB2 that allows much faster decompression, which I doubt, it must come down to software quality. I guess somebody put a lot of effort into optimizing DjVuLibre's JB2 decompression, while nobody ever made a fast open-source JBIG2 implementation. -- BenRG (talk) 19:07, 9 June 2012 (UTC)[reply]
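For anyone who wants to check this for a particular file, the following rough Python sketch lists the compression filter of each image on the first few pages; on these Internet Archive scans you would expect to see /JBIG2Decode. It assumes the third-party pypdf library is installed, and the filename is just an example.

  # Rough sketch: print the /Filter of each image XObject on the first pages,
  # i.e. which decompression scheme the viewer must run on every page turn.
  from pypdf import PdfReader          # third-party: "pip install pypdf"

  reader = PdfReader("gentshistoryhul01ohlsgoog.pdf")   # example filename
  for page_number, page in enumerate(reader.pages, start=1):
      if page_number > 5:              # a few pages are enough to see the pattern
          break
      resources = page.get("/Resources")
      if resources is None:
          continue
      xobjects = resources.get_object().get("/XObject")
      if xobjects is None:
          continue
      for name, ref in xobjects.get_object().items():
          obj = ref.get_object()
          if obj.get("/Subtype") == "/Image":
              print(page_number, name, obj.get("/Filter"))

If the filters are indeed /JBIG2Decode while the DjVu copy of the same scan feels much faster, that points at decoder speed rather than at the viewer's page cache.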

OK, thanks. It looks likely that the decompression is to blame. At least I know it's not just me. Not much I can do about this in the short term, except use the djvu when available. On the other hand, the idea of using OCR'd data for text image compression is an interesting one (although JBIG doesn't seem to explicitly try to do this - maybe mk.2 should..)

Seeing as the compression appears to be the problem, I think I might try printing the pdf - using print-to-PDF software to get an uncompressed pdf - and see if that helps at all. Oranjblud (talk) 00:36, 10 June 2012 (UTC)[reply]
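If the print-to-PDF route proves awkward, a related experiment (purely a sketch; it assumes poppler's pdfimages tool and the img2pdf package are installed, and the filenames are only examples) is to pull the page scans out and repack them with a cheaper-to-decode encoding:

  # Rough idea: extract the page images and repack them so that page turns no
  # longer require a JBIG2 decode. Filenames are examples only.
  import glob
  import subprocess

  subprocess.run(["pdfimages", "-png", "book.pdf", "page"], check=True)
  pages = sorted(glob.glob("page-*.png"))
  subprocess.run(["img2pdf", "-o", "book-repacked.pdf"] + pages, check=True)

The repacked file will usually be much larger, which is the trade-off under discussion: JBIG2 is very compact but comparatively expensive to decode.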

"Fused off" features in Intel chips

I have read that a lot of Intel's chips are the same with certain features "fused off". What does this mean and does this have anything to do with silicon fuses? Are there any Intel chips that cannot be "fused off" as in they are entirely static in their structure? --Melab±1 15:36, 9 June 2012 (UTC)[reply]

Silicon economics are very complicated. Start by reading how VLSI fabrication works. Sometimes, it is cheaper (or just makes more sense) to release two "versions" of a hardware product that are in fact sharing a lot of features. They may be built on the same wafers; they may even be identical all the way to the end of the fab line. At some stage on the proverbial "assembly line," the chips may be binned; and certain manufacturing steps are applied to different end-products. Depending when this occurs on the assembly-line, those subsequent hardware changes may be irreversible. For example, if a different photomask step is applied to different chips that originated from the same wafer lots, the resulting chips can never be made "identical" again - short of melting the materials down and starting from scratch. On the other hand, if the products diverge fairly late - say, after the silicon fabrication is done, the manufacturer might use a one-time programmable memory (the device you are vaguely referring to as a "silicon fuse"). As always, depending on the exact nature of that memory, these changes can be "permanently" burned in, or they might be reversible using specialized equipment. (Old-fashioned UV-light erasable EPROMs come to mind).
Intel's manufacturing process and the details of its massive product-lineup are very complicated, so you'll have to ask a more specific question about a specific technology if you want a specific answer. For now, you can accept the very generic case: "some end-products might be built from identical precursors." It is even plausible that similar variants of Intel processors are effectively identical, aside from a few bits in an on-chip ROM. Despite prevalent rumors on the internet, this does not usually mean that you can build a "home upgrade" kit to "unfuse" a device, turning a low-cost Intel i3 into a high-cost Intel i7 by "unfusing/enabling" its "locked" features. Nimur (talk) 16:02, 9 June 2012 (UTC)[reply]
I use "silicon fuses" or "eFUSEs" in place of "one-time programmable memory" when the fuses in question are not arranged in large enough amounts to hold a single program and are instead used for configuration. I though that maybe the bridges between the cores some in Intel's processors were connected by fuses for pathways. To sell a less powerful chip for a certain range, Intel would then blow the bridge between the extra cores and the rest of the processor. That is what I hypothesized, anyway. --Melab±1 19:20, 9 June 2012 (UTC)[reply]

execution of correlated subquery in oracle

Sir, my doubt is about correlated subquery execution when there is more than one subquery. I know how a correlated subquery executes when it has one subquery, but if it has more than one subquery I am not able to understand it. I have referred to so many books and web sites, but I could not find it. Finally I am asking you; please spend some time and give a detailed answer. Below there are two queries. I know the execution of query 1, but I do not know the execution of query 2, so I am asking my doubts by comparing it with query 1. Please read and clear my doubt. Do not think this is a big doubt; I am expressing my doubt clearly, so it became long.


Query 1: Select e.ename, e.city from emp1 e where exists (select f.ename from emp2 f where f.ename = 'ajay' and e.city = f.city);


Query 2: Select e.ename from employee e where exists (select 'x' from emp_company c where e.ename = c.ename and exists (select 'x' from company m where c.cname = m.cname and m.city = 'bombay'));


Doubt 1: In the first step of execution of query 1, the first row's ename and city from the emp1 table are considered. Then what happens in the first step of execution of query 2?


Doubt 2: In the second step of execution of query 1, the city taken from the main query is compared with every row of emp2. What happens in the second step of execution of query 2?


Third step of execution of query 1:

While comparing the city from the main query with every row of emp2, if any row satisfies the condition, that row's ename is added to a list. What happens in the 3rd step of execution of query 2?


In the 4th step of execution of query 1, the list so formed is returned to the main query. What happens in the 4th step of execution of query 2?


In the 5th step of execution of query 1, if the returned list is not empty then exists evaluates to true, and the emp1 table's ename and city are added to the output. What happens in the 5th step of execution of query 2?


In the 6th step of execution of query 1, ename and city are selected from the second row of the emp1 table. What happens in the 6th step of execution of query 2?


Can you please explain the execution of query 2 as I explained query 1? — Preceding unsigned comment added by Phanihup (talk • contribs) 16:42, 9 June 2012 (UTC)[reply]

I am not familiar with the specific analysis technique you are using, but here is my attempt at describing the behavior of Query 2.
  1. Consider each record of the employee table.
  2. For each such record, consider each record of the emp_company.
    1. If the enames of the employee and emp_company records match, continue; otherwise ignore this emp_company record.
    2. For each emp_company record with a matching employee record, consider each record of the company table.
      1. If the cnames of the company and emp_company records match and the city is 'bombay', continue; otherwise ignore this company record.
      2. Add 'x' to list-A. (Note, for an exists() test, it generally doesn't matter what value is actually selected.)
      3. Repeat for each additional company record.
    3. After all the company records are processed, examine list-A.
    4. If list-A is non-empty, continue, otherwise ignore this emp_company record.
    5. Add 'x' to list-B.
    6. Repeat for each additional emp_company record.
  3. After all the emp_company records are processed, examine list-B.
  4. If list-B is non-empty, continue, otherwise ignore this employee record.
  5. Add the employee ename to the final results.
  6. Repeat for each additional employee record.
Note that in the above procedure, the subqueries are processed repeatedly for each candidate record from the parent, and list-A and list-B are reset each time. In reality, the query optimizer of the database engine will utilize various shortcuts to improve efficiency and may even rewrite the query into something much different from what was provided. For example, once a match is found in an exists() condition, it is not necessary to search for additional matches. Also, the engine might make use of database indexes to locate matches without actually scanning the tables. It might reverse the execution logic - first finding companies located in Bombay, then finding related emp_company records (using an index), and then looking up the matching employee records. The point is, while the above step-by-step procedure describes how the query is written, it is highly unlikely that the database engine will use that same method to actually retrieve the results.
I hope this helps. -- Tom N (tcncv) talk/contrib 01:52, 10 June 2012 (UTC)[reply]
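To make the step-by-step reading above concrete, here is a rough Python sketch that mimics the nested-loop evaluation of query 2 over three tiny in-memory "tables". The table contents are invented for illustration, and, as noted above, a real Oracle optimizer is unlikely to execute the query this literally.

  # Toy nested-loop evaluation of query 2; table contents are made up.
  employee    = [{"ename": "ajay"}, {"ename": "ravi"}]
  emp_company = [{"ename": "ajay", "cname": "acme"},
                 {"ename": "ravi", "cname": "globex"}]
  company     = [{"cname": "acme",   "city": "bombay"},
                 {"cname": "globex", "city": "pune"}]

  results = []
  for e in employee:                       # take the next employee row
      outer_exists = False
      for c in emp_company:                # first-level subquery
          if c["ename"] != e["ename"]:
              continue                     # correlation with the employee row
          inner_exists = False
          for m in company:                # innermost subquery
              if c["cname"] == m["cname"] and m["city"] == "bombay":
                  inner_exists = True      # inner EXISTS satisfied; stop looking
                  break
          if inner_exists:
              outer_exists = True          # outer EXISTS satisfied too
              break
      if outer_exists:
          results.append(e["ename"])       # this employee row reaches the output

  print(results)                           # ['ajay'] for this sample data

Each exists() simply asks whether the loop underneath it found at least one qualifying row for the current outer row, which is why the early break statements correspond to the optimizer shortcut described above.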