User talk:Nbarth/Archive 2015



Invert sugar

Hi, boiling green tea with sugar is unlikely to produce invert sugar to the best of my knowledge. Green tea is basic, not acidic, while inversion requires the presence of acid (or enzymes). Regards, kashmiri TALK 22:09, 4 January 2015 (UTC)

Thanks Kashmiri!
I was just copying/rearranging existing materials (specifically as of this revision at the Maghrebi mint tea article, which I’ve also fixed); given your reply I looked into this further.
To conclude: I couldn’t find any reliable sources saying hydrolysis happens here, and based on my understanding of the chemistry there’s likely to be some hydrolysis, but very little, so I’ve removed mention of it.
AFAICT, green tea is weakly acidic, not basic: this chemistry textbook gives a pH of 5.8, and this page states that they measured a pH of 6.18 for Celestial Seasonings Green Tea.
Apparently on digestion green tea has an alkalizing effect though.
Also the invert sugar page, as of this revision, states that acid isn’t necessary (it just catalyzes the reaction):
Invert sugar syrup may also be produced without the use of acids or enzymes by thermal means alone: two parts granulated sucrose and one part water simmered for five to seven minutes will convert a modest portion to invert sugar.
However, this is likely quite minimal, and any perceptible effect swamped by the brewing of the tea and mint; as some beer brewers discuss at Does table sugar invert in boiling wort?, boiling sugar water for a few minutes isn’t going to have much effect.
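(For reference, the inversion in question is just the hydrolysis of sucrose into equal parts glucose and fructose – acid, enzymes, or heat only speed it up:)
    C₁₂H₂₂O₁₁ (sucrose) + H₂O → C₆H₁₂O₆ (glucose) + C₆H₁₂O₆ (fructose)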
Thanks for catching this!
—Nils von Barth (nbarth) (talk) 05:01, 5 January 2015 (UTC)
Hi Nils, thanks for clarifying. My school memories of chemistry classes are rather basic, and I perhaps shouldn't have reverted before discussing. Interesting to know about green tea's acidity as well; I always thought tannins made all teas basic, but thanks for the sources :) Regards, kashmiri TALK 09:43, 5 January 2015 (UTC)
No problem kashmiri – the claim lacked references anyway, and it’s a good outcome!
—Nils von Barth (nbarth) (talk) 00:00, 12 January 2015 (UTC)

Rollback granted


Hi Nbarth. After reviewing your request for rollback, I have enabled rollback on your account. Keep in mind these things when going to use rollback:

  • Getting rollback is no more momentous than installing Twinkle.
  • Rollback should be used to revert clear cases of vandalism only, and not good faith edits.
  • Rollback should never be used to edit war.
  • If abused, rollback rights can be revoked.
  • Use common sense.

If you no longer want rollback, contact me and I'll remove it. Also, for some more information on how to use rollback, see Wikipedia:New admin school/Rollback (even though you're not an admin). I'm sure you'll do great with rollback, but feel free to leave me a message on my talk page if you run into troubles or have any questions about appropriate/inappropriate use of rollback. Thank you for helping to reduce vandalism. Happy editing! — MusikAnimal talk 20:09, 18 January 2015 (UTC)

Thanks MusikAnimal – I’ll use my new powers responsibly!
—Nils von Barth (nbarth) (talk) 04:12, 19 January 2015 (UTC)

Re open and closed classes in Japanese

Do you know of any sources that address this matter? I don't know the language, but it seems to me that if new verbs are sometimes (even if we say "rarely") introduced by adding "-ru", then it can't be said conclusively that verbs are a closed class. Is it really significantly more common for a new pronoun to be introduced than for a "-ru" verb to gain currency? And are pronouns even a word class in Japanese? W. P. Uzer (talk) 08:31, 22 February 2015 (UTC)

Hi W.P.,
Thanks for bringing this up; I’ve added lots of sources and clarified as of this edit.
You’re correct, pronouns are a disputed class in Japanese; I’ve added a reference for the analogous situation in Thai and Lao (where they form an open class).
Verbing by adding -ru is a recent and marginal innovation, so the closedness of verbs has weakened somewhat, but verbs (and even more so adjectives) as closed classes is an accurate description of Japanese word categories for over 1,000 years: Chinese words were imported as nouns and used as (inflected) verbs or adjectives extremely rarely – at a ratio of at least 1,000:1, probably 10,000:1.
—Nils von Barth (nbarth) (talk) 01:01, 23 February 2015 (UTC)
Great, thanks for the reply and your edits, that's clarified it a lot. W. P. Uzer (talk) 10:47, 23 February 2015 (UTC)

Ideophones

But I do have another objection - ideophones are surely not a part of speech or word class in the way the notion is normally understood? These are words with a certain type of derivation, not with a particular type of grammatical behavior. W. P. Uzer (talk) 10:55, 23 February 2015 (UTC)

(>.<) You have a sharp eye.
You’re right, the classification of ideophones is debatable, and strictly speaking it’s a phonosemantic word class (a term used by some), based on derivation. However, even grammatically, “in the vast majority of cases” they’re a category of adverbials (Japanese is typical in this respect, and many African languages are similar). I’ve added a note and refs elaborating and qualifying at this edit, and over at Ideophone in this edit. See especially the Childs ref, which discusses their classification and open status.
Does this improve matters?
Thanks for your (quite constructive) criticism!
—Nils von Barth (nbarth) (talk) 03:55, 24 February 2015 (UTC)
Again, thanks, yes, that certainly clears things up. W. P. Uzer (talk) 07:50, 24 February 2015 (UTC)

Closure (computer programming)

I'm not too happy with this edit. This discussion really only applies to lexically scoped languages and not to dynamically scoped ones (dynamic scoping is what you get if you try to implement first-class functions without using closures, as early LISPs did), so this would be an important distinction to make. —Ruud 12:33, 26 February 2015 (UTC)
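(An illustrative aside: a minimal sketch of the lexical-vs-dynamic distinction in Python, which is lexically scoped; the names make_reader and caller are invented for the example.)
    # Python resolves the free variable x against the environment where
    # read() was defined (lexical scoping), which is exactly what a closure records.
    def make_reader():
        x = "enclosing"          # the free variable below is bound here
        def read():
            return x
        return read

    def caller():
        x = "caller's local"     # a dynamically scoped language (e.g. early LISPs)
        return make_reader()()   # would resolve x to this binding instead

    print(caller())  # prints "enclosing": the closure carried its defining environment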

Thanks for bringing this up, and good point.
I was just being picky about the term “scope” (strictly, “a region of code where a binding is valid”), but the distinction you point out gets at the core of what a closure is: it’s about name binding, and about accessing variables outside their scope (through a reference). (I avoid the term “dynamic scoping” in favor of “dynamic binding” to emphasize that it’s the binding rule that differs.)
As of this edit I’ve rewritten it to emphasize that it’s the static binding that’s at the root of a closure. The result is a bit longer, but it puts the essence of closures up front (and they’re hard enough to explain anyway!).
How is it?
—Nils von Barth (nbarth) (talk) 05:53, 27 February 2015 (UTC)
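(An illustrative aside: a minimal Python sketch of the “accessing variables outside their scope (through a reference)” point; make_accumulator is an invented name, and this is just one way to show it.)
    # Both inner functions capture the *binding* of total (a shared cell),
    # not a copy of its value; the name is resolved against that captured
    # environment each time the closures are called.
    def make_accumulator():
        total = 0
        def add(amount):
            nonlocal total       # rebind the captured variable, not a new local
            total += amount
            return total
        def current():
            return total         # sees the updates made through add()
        return add, current

    add, current = make_accumulator()
    add(3)
    add(4)
    print(current())  # 7 -- total outlived make_accumulator's activation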
I had a try at rewriting the introduction myself; I hope it is agreeable to you. As I removed several of the paragraphs you introduced, or otherwise undid some of the changes you made, I think I should explain why:
  • Yes, closures are all about name binding; more specifically, they are an implementation technique for achieving lexically scoped name binding. I made this into the first sentence of the article. (It's at least as important to explain why a closure is needed as what a closure is.)
  • What I realized after writing most of the text below is that, when talking about name binding, it is important to distinguish between the static binding structure of a program and the dynamic binding structure (if a recursive function refers to one of its local variables, then it will refer to one particular variable (definition, binder) in its static binding structure, but to different (substitution) instances of that variable in its dynamic binding structure). As closures only show up in the dynamic semantics of a language, I think they are about the dynamic binding structure, and what I wrote below should be consistent with that. Our article on Name binding currently does not seem to address this issue at all...
  • Terminology: note that there is a distinction between when a name binding is resolved (at compile time or at run time; let's call these early binding/static dispatch and late binding/dynamic dispatch respectively, even though that's not exactly how these terms are used in practice) and where a name gets bound (to the named thing closest to you in the environment of the static semantics of the language, or closest to you in the environment of the dynamic semantics of the language; let's call these lexical scoping and dynamic scoping respectively). Now, early binding/static dispatch usually implies lexical scoping, and dynamic scoping implies late binding/dynamic dispatch (because resolving dynamic scoping at compile time is in general an undecidable problem). This means people will use the term static scoping to refer to either early binding, static dispatch or lexical scoping, and dynamic scoping to refer to either dynamic scoping (in the sense I defined earlier), late binding or dynamic dispatch. (Programming language theorists are a bit more pedantic and, because of this, as well as disregarding early and late binding as uninteresting implementation details, will indeed mean lexical scoping when they talk about static scoping, but most programmers tend to be a bit less precise.) This is quite problematic, as it is perfectly possible to have lexical scoping and late binding; this is exactly what closures do: the scoping is lexical, but binding is resolved at run time using the environment in the closure (see the sketch below). For this reason I think it's best to use the unambiguous term "lexical scoping" over the ambiguous "static scoping", at least in the introduction. (This is also the reason why I removed your footnote "Static binding is usually but not always done at compile time. ..." As explained, this is not correct when using closures. The line between early and late binding is arguably a bit blurry, though: is a binding of a name referring to a function parameter resolved at compile time, because the parameter can be found at a constant offset from the mark pointer, or is it resolved at run time, because the mark pointer itself is only known at run time?)
  • Regarding the example: as code fragments are the programmer's equivalent of "a picture is worth a thousand words", I think it's important to give one in the lede (it was previously in a separate section, and after checking some StackOverflow questions referring to this article, I think it's safe to say that most people don't bother scrolling down past the table of contents). The only reason I believe programmers find closures difficult to grasp is that they are unfamiliar with nested functions and thus with free variables. An example should make them understand this aspect more quickly than a theoretical explanation. I also think it's important to give the example in pseudocode, mostly for the reasons given at MOS:PSEUDOCODE. I think the current pseudocode also strikes a good balance between the two languages with first-class functions that readers are most likely familiar with (Python and Javascript). Neither language is ideal for discussing issues related to scoping: Python because it has implicit variable declaration, occasionally requiring nonlocal annotations, and Javascript because its scoping rules are a bit odd in general.
  • Some nitpicks: "in some cases (such as ML) it contains the values of each of the names" – I think most ML compilers store references to immutable values in closures instead of the values themselves; Java and C++11 are two of the few languages I know of that store actual copies of the captured variables. "or passed as arguments to other function calls" – closures are used for this in most functional languages, but they are not necessary, as access links in the activation frames suffice. "do not support calling nested functions after the enclosing function has exited" – Pascal achieves this not by disallowing the call or segfaulting (as GNU C does), but by disallowing functions to return other functions (taking them as arguments is fine) and disallowing functions to be stored in data structures, preventing them from escaping the scope of their free variables and thus avoiding the upwards funarg problem.
Cheers, —Ruud 12:00, 28 February 2015 (UTC)
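(The sketch referred to above: a rough Python illustration – not the article's pseudocode – of lexical scoping combined with run-time binding resolution, and of closures escaping their defining scope, the "upwards funarg" case Pascal rules out. The name make_printers is invented for the example.)
    # Each lambda keeps a reference to the single loop variable i in
    # make_printers' environment; the name is resolved when the closure is
    # called, after the loop has finished, so all three see i == 2.
    def make_printers():
        printers = []
        for i in range(3):
            printers.append(lambda: i)   # free variable i, bound lexically
        return printers                  # the closures escape their defining scope
                                         # (stored in a list and returned -- the
                                         # upwards funarg case Pascal disallows)

    print([p() for p in make_printers()])  # [2, 2, 2], not [0, 1, 2]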
Thanks Ruud for your very thoughtful edit and response; I agree it makes the article much clearer!
In particular, thanks for tightening up the intro – agreed that leading with an example is important. (Frankly the practice is simpler than the theory, though implementation…)
I didn’t know about MOS:PSEUDOCODE; it makes sense, much as one might think Python is basically pseudo-code ;) – thanks!
AFAICT the main items you cut were rather wordy discussions of what was binding to what; agreed that these obscure the main point, though perhaps they can usefully be incorporated at name binding or non-local variable.
I slightly tweaked the wording in this edit, and made some minor edits in the body of the article, but otherwise don’t plan to make any further significant changes to the closure article at this time.
I’ll see about working on Name binding, Nested function, Non-local variable, and Funarg problem though, in particular incorporating your notes above. Cheers!
—Nils von Barth (nbarth) (talk) 23:55, 28 February 2015 (UTC)

Sytem virtual machine

This article was created by you and the title contains a misspelling. Should it be moved? There is already a page named System virtual machine that redirects to Virtual machine.

Aisteco (talk) 20:19, 16 October 2015 (UTC)

OMG, *embarrassed* (sorry, embarassed [sic] ;)
Thanks so much!
This was from a split that I haven’t gotten around to finishing; see Talk:Virtual machine#Split into separate pages for systems and process.3F.
I’ll split it properly and see about deleting the useless misspelling, then update here – thanks again!
—Nils von Barth (nbarth) (talk) 02:16, 19 October 2015 (UTC)
:) I figured that was the case.
Aisteco (talk) 02:43, 19 October 2015 (UTC)