Wikipedia:Reference desk/Archives/Computing/2014 July 1

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


July 1


Is there a way to stop YouTube "drift"?


Let's say I do a search on "Irish Setter" and get a bunch of videos starring Irish Setters, then I click on one that has an Irish Setter chasing a goose. Now my list of Irish Setter videos is replaced, or at least diluted, by a list of goose videos, perhaps even of pinching women's butts (also called a "goose"). After watching each vid, I have to repeat the search to get the original list back. Is there a way to stop this kind of "drift"? StuRat (talk) 15:42, 1 July 2014 (UTC)[reply]

You could open each video in a new tab using the right-click menu - that way your original search results are still there for you to return to. --Nicknack009 (talk) 15:46, 1 July 2014 (UTC)[reply]
In most browsers, clicking your mouse's wheel also opens items in a new tab if your mouse supports it. -- 140.202.10.134 (talk) 16:22, 1 July 2014 (UTC)[reply]
Shift-click, too.
On 3-button mice, middle button. - ¡Ouch! (hurt me / more pain)
Oops... "middle button" is not different from IP 140's advice. The mouse wheel is merely a button that doesn't look like one. - ¡Ouch! (hurt me / more pain) 08:46, 2 July 2014 (UTC)[reply]
In general, asking a web service provider not to change the content they deliver based on their prior history with you is ineffective. All you can do is pleasantly request... but if the service provider wants to deliver specific content, tailored to your viewing habits, they can, and there's nothing your browser(s) can do about it. Nimur (talk) 18:47, 1 July 2014 (UTC)[reply]
That's true if they're basing it on your overall history (to the extent they can identify you), but it's another matter if they're only basing it on how exactly you got to the current page. --50.100.189.160 (talk) 20:15, 1 July 2014 (UTC)[reply]
The "related files" are usually based on what's related to "this" file (the video on the page) and not on your most recent search. It can be an annoyance if you want to "stay", but it can be a feature, too. Depending on your definition of "feature."
Six degrees of separation will probably not work on YT videos, though. - ¡Ouch! (hurt me / more pain) 08:08, 2 July 2014 (UTC)[reply]

Xml parsing


Does it matter much how you parse marked-up text like XML? Does ambiguity arise? Does parsing take more time with different parsers? — Preceding unsigned comment added by Abaget (talkcontribs) 21:25, 1 July 2014 (UTC)[reply]

Time should definitely vary at least a little between different parsers. I'm not sure, but I think the point is that any XML that validates (XML validation) can be parsed unambiguously. However, that assumes there are no errors in the parser: a badly-written parser could give spurious output compared to a "correct" one. Here's a short blurb about parsers from w3schools [1], and see a few descriptions of different parsing methods in our XML article. SemanticMantis (talk) 20:23, 2 July 2014 (UTC)[reply]
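To make the timing point concrete, here is a minimal sketch (my own illustration, using Python's standard library; the sample document and repeat count are invented) that parses the same document with two different parsers and compares how long each takes:

    # Parse one document with two standard-library parsers and time them.
    # Both produce an equivalent tree; only the speed differs.
    import timeit
    import xml.dom.minidom
    import xml.etree.ElementTree as ET

    DOC = "<dogs>" + "<dog breed='Irish Setter'/>" * 10000 + "</dogs>"

    et_time = timeit.timeit(lambda: ET.fromstring(DOC), number=10)
    dom_time = timeit.timeit(lambda: xml.dom.minidom.parseString(DOC), number=10)

    print("ElementTree: %.3fs, minidom: %.3fs" % (et_time, dom_time))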
I will answer your three questions for XML. On ambiguity: XML was designed to be unambiguous; for other markup languages it depends on the specific language. Timing will change between parsers, depending on the type of parser and what it is used for; parsing an XML file with Java could take a different amount of time than with C++. If something has already been validated, it won't have to be validated again, which can cut down on time. As for the number of times you parse, it does not matter as long as there are no major errors. XML parsers are designed so that if one runs into a problem or an error it will just "choke and die", which means the XML document is bad. Here is where I got my information: http://oreilly.com/catalog/perlxml/chapter/ch03.html, http://docs.oracle.com/cd/B12037_01/appdev.101/b10794/adx04paj.htm, https://en.wikipedia.org/wiki/XML, https://en.wikipedia.org/wiki/Markup_language. Ladi224 (talk) 02:41, 4 July 2014 (UTC)[reply]
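To illustrate the "choke and die" behaviour, here is a small sketch (my own example in Python; the sample documents are invented). A conforming XML parser accepts the well-formed document and raises a fatal error on the one with a mismatched tag:

    # Draconian error handling: the well-formed document parses,
    # the malformed one raises an error instead of producing a tree.
    import xml.etree.ElementTree as ET

    good = "<note><to>Tove</to></note>"
    bad = "<note><to>Tove</note>"  # mismatched tag: <to> is never closed

    print(ET.fromstring(good).find("to").text)  # prints "Tove"

    try:
        ET.fromstring(bad)
    except ET.ParseError as err:
        print("parser choked:", err)  # e.g. "mismatched tag: line 1, ..."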

Acrobat Pro file sizes


Hi, I've just been using Acrobat Pro to turn a Word document with lots of embedded images into a PDF. I've been experimenting with several of the Acrobat Pro options and am confused about how the file size varies so enormously depending on whether one uses 'redaction', 'remove hidden information' and 'sanitize'.

Original: 88 MB
Redaction: 17 MB
Remove hidden: 5 MB
Sanitize: 4 MB

What is changing? In particular, how does redaction affect the file size so much? Does it reduce image quality even if I'm not redacting anything from images? Thanks — Preceding unsigned comment added by 146.90.206.207 (talk) 23:18, 1 July 2014 (UTC)[reply]

I'm guessing, but a PDF that big (4 MB sanitized) is probably not built in one piece, and Acrobat keeps a revision history, like a wiki does. So you'd have N versions, (page 1), (pages 1-2), (pages 1-3), ..., (pages 1-N), all stored end to end within the same PDF. A PDF is compressed, but that will not save much space at multi-megabyte sizes. You end up with basically N copies of page 1, N-1 copies of page 2, and so on.
If you take the ORIGINAL and sanitize it, is it 4 MB too? If you remove hidden information from the original, will it be 5 MB?
217.255.173.35 (talk) 11:47, 4 July 2014 (UTC)[reply]
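One rough way to test the revision-history guess above (my own sketch in Python, not an Acrobat feature; the file name is hypothetical): each incremental save appends a new trailer ending in a "%%EOF" marker, so counting those markers approximates the number of saved revisions. Linearized PDFs carry one extra marker, so treat the count as approximate.

    # Count "%%EOF" markers in a PDF. Each incremental save appends a new
    # trailer ending in "%%EOF", so the count roughly equals the number of
    # saved revisions. (Linearized PDFs add one extra marker.)
    def count_pdf_revisions(path):
        with open(path, "rb") as f:
            return f.read().count(b"%%EOF")

    print(count_pdf_revisions("document.pdf"))  # "document.pdf" is a made-up name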