Talk:Formal grammar/Archive 1

Latest comment: 18 years ago by Dreftymac in topic request clarification

Lindenmayer systems

Interesting remark about the Lindenmayer systems! I didn't know those. Is there really no distinction between terminals and non-terminals? I saw that there is a distinction between variables and constants. I also got the impression that L-systems were always context-free. Is that correct? Finally, it should probably be explained somewhere that L-systems are not about defining a formal language but about defining (infinite) lists (or trees?) of strings, where each successive string is defined by applying all the rules to all the variables once. -- Jan Hidders 15:20 Aug 8, 2002 (PDT)


No, L-systems aren't always context-free. Some are context-free and some are context-sensitive, just like formal grammars (cf. [1]). The terms constant and variable in an L-system don't really affect its behavior. In a given L-system, if you take a constant and reclassify it as a "variable", that doesn't change the output of the system. But for a grammar, if you take a terminal and reclassify it as a "nonterminal", that usually changes the language that is produced. L-systems also differ in the order in which the rules are applied: a given rule is applied at every possible place in the generation-n string before being applied to any part of the generation-n+1 string. Both grammars and L-systems define a sequence of strings. For grammars, we usually don't care about the sequence, but sometimes we do (e.g. when compilers parse). The same is true for L-systems. Usually we just care about the final string (for a grammar) or the limit (for an L-system). The only important difference between the two that isn't mentioned is that L-systems usually produce an infinite sequence. I'll add that. --LC 19:16 Aug 8, 2002 (PDT)
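The parallel rewriting described above (every symbol rewritten simultaneously at each step) can be sketched in a few lines of Python. This is an illustrative sketch, not code from the article; the rules are Lindenmayer's original "algae" system, and the function names are made up for the example:

```python
# A D0L-system: every symbol is rewritten simultaneously at each step.
# Rules from Lindenmayer's "algae" example: A -> AB, B -> A.
RULES = {"A": "AB", "B": "A"}

def step(s):
    # Apply every applicable rule in parallel; symbols with no rule
    # (the L-system "constants") are copied through unchanged.
    return "".join(RULES.get(c, c) for c in s)

def generate(axiom, n):
    # Yield the sequence of strings; an L-system can, in principle,
    # keep producing new generations forever.
    s = axiom
    for _ in range(n + 1):
        yield s
        s = step(s)

print(list(generate("A", 4)))
# -> ['A', 'AB', 'ABA', 'ABAAB', 'ABAABABA']
```

Note how, unlike a grammar derivation, there is no choice point here: every generation is determined by rewriting all symbols at once, which is the ordering restriction mentioned above.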

I'm very confused about the first part of this page (and so everything after as well): 1. S -> aSb, 2. S -> ba. How does it know which rule to use, 1 or 2? The example shows it using rule 1 a few times, then suddenly using rule 2 and being done. I would think it would keep using rule 1 forever. Or, if it is cycling through the rules, it would use rule 1 one time, then rule 2 the second, and stop. But the example shows rule 1 being used at least twice. Nothing in rule 1 seems to tell it to use rule 2 after X number of times, that I can tell. (JLJ)

One has at each step a free choice of which rule to use, as long as it is applicable. If it were specified which rule to use in every situation, you could get only one word out of a grammar, but a grammar should yield a whole language, that is, a large class of words. Andre Engels 13:20, 26 Sep 2003 (UTC)
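The free choice Andre describes can be made concrete with a short Python sketch (illustrative code, not from the article): a derivation is just a sequence of rule choices, and the derivation terminates once no nonterminal remains. Choosing rule 1 some number of times and then rule 2 once is what the confusing example was doing:

```python
# Grammar: rule 1 is S -> aSb, rule 2 is S -> ba.
# A derivation freely picks which rule to apply at each step;
# it terminates once no nonterminal ("S") remains.
RULES = {1: "aSb", 2: "ba"}

def derive(choices, start="S"):
    s = start
    for r in choices:
        s = s.replace("S", RULES[r], 1)  # rewrite the (leftmost) S
    return s

print(derive([2]))        # rule 2 immediately: "ba"
print(derive([1, 2]))     # rule 1 once, then 2: "abab"
print(derive([1, 1, 2]))  # rule 1 twice, then 2: "aababb"
```

Different choice sequences yield different words, and the language of the grammar is the set of all words reachable by some choice sequence.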

It would be nice if someone 'verticalised' the examples. I tried but I think it doesn't work with wiki markup only. So I guess it needs to be put in a table. Guaka 17:51, 29 Nov 2003 (UTC)

Expert attention needed

Cleanup from May 2005

This article needs some additional context and information to help make it accessible to more readers. For example, students of linguistics and computer science should find this article a workable entry point and a helpful guide to this area of knowledge.

In general:

  • The article uses lots of abstract examples. It would greatly benefit from some concrete examples using real computer and natural languages.

In the introduction:

  • One important consideration is to explain the application of formal grammars. How are they used in computer science? The article Generative grammar explains that formal grammars are used in the study of natural languages, which is an important point not mentioned in this article.
  • It seems that there should be some mathematical relationship between generative and analytical grammars. If so, what is it? (This is made clearer later on, in the introduction to analytic grammars.)
  • What types of languages are described by and studied using these types of grammars - computer languages? Natural languages? (The answer is both.)

In the section on generative grammars:

  • There is a lot of very readable material in Generative grammar, and a lot of mathematical material in the section of this article on generative grammars. A comprehensive article could use both. It seems that the other article should be merged with the appropriate section in this article, or vice versa.
  • The production example in the introduction would be clearer if presented in a vertical format.
  • In accordance with Wikipedia style, the subsection "formal definition" should be pushed to the end of the section, to move less technical content (accessible to a wider audience) up front.
  • In the section explaining the different types of generative grammar, it's not made very clear that the later types of grammars are proper subsets of earlier types.
  • Can we get some examples of theories of grammar of various types?

In the section on regular grammars:

  • It should be noted that epsilon represents the empty string.

The section on analytic grammars is very short.

--Beland 02:31, 3 May 2005 (UTC)

Comment moved from article.

Dear author: please note: this is very unclear, because you have not explained how to make sense of the symbols that summarize the new grammar you are talking about, {a^n b^n | n > 0}. So what happens is that we, the beginners, who are looking to Wikipedia for help, are lost from here on out. Same with the next section. Don't be afraid to over-explain: if someone already knows part of the explanation, they will just read that part faster. 24.184.208.26 16:39 September 18, 2005 (UTC)
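For readers puzzled by the same notation: {a^n b^n | n > 0} is set-builder notation for "all strings made of n copies of a followed by n copies of b, for every n greater than zero". A minimal Python sketch (illustrative only, not from the article) enumerating the first few members:

```python
# {a^n b^n | n > 0}: n copies of "a" followed by n copies of "b".
def anbn(n):
    return "a" * n + "b" * n

members = [anbn(n) for n in range(1, 5)]
print(members)  # -> ['ab', 'aabb', 'aaabbb', 'aaaabbbb']
```

The language is the infinite set of all such strings, one for each n.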

L-systems

I've removed the following statement from the article, replacing it with a much weaker one:

Generative formal grammars are identical to Lindenmayer systems (L-systems), except that L-systems are not affected by a distinction between terminals and nonterminals, L-systems have restrictions on the order in which the rules are applied, and L-systems can run forever, generating an infinite sequence of strings. Typically, each string is associated with a set of points in space, and the "output" of the L-system is defined to be the limit of those sets.

This doesn't really tie in with the description of L systems (which I'll admit to never having encountered before) on the L-system article. If I'm reading that article correctly, what an L-system calls a "variable" is a nonterminal and what it calls a "constant" is a terminal. There doesn't seem to be any difference except one of terminology. Perhaps I'm wrong here.

Also, the article doesn't mention any restriction on the order in which rules may be applied, and a generative grammar may also generate infinite strings. And the last sentence could also be applied to a generative grammar; it's just a different way of interpreting what a language means. JulesH 17:30, 31 August 2006 (UTC)

On re-reading, I think perhaps the distinction is that an L-system allows nonterminals to appear in the resulting string...? Can anybody confirm this? JulesH 17:31, 31 August 2006 (UTC)

By my reading, that's part of it, but there's also the difference that in an L-system, you apply all applicable production rules at each turn (except in the case of conflicting rules, where you must choose randomly). Whereas a generative grammar produces a tree of strings, where the leaves are elements of the language, an L-system seems to produce a list of strings, each of which is an element in the language -- except that an L-system needn't work on strings. And whereas a generative grammar always produces the same tree, a given L-system can produce a different list each time (because the generative grammar identifies all possibilities at each turn, whereas an L-system chooses randomly between the various possibilities). Does that make sense?

Intention to Revise/Edit Boldly

This article has been flagged with an {{expert}} tag and indeed has some issues. Since it falls within my area of expertise, I intend to edit boldly. Not before letting others here have an opportunity to discuss this, however. Please let's discuss any concerns here, first. One thing I certainly will attempt is a standardization of the notation to that of the literature, as in → rather than the long arrow used as it stands. Also, a lot of citations to the relevant literature, et cetera. OK ... thoughts? -- QTJ 15:34, 23 October 2006 (UTC)

Going to try this the slow way first and see how it fares. :-) -- QTJ 00:17, 24 October 2006 (UTC)
PS: If you want to play edit tag, we can take down the {{expert}} tag for the time being? -- QTJ 00:19, 24 October 2006 (UTC)
Sure. Ruakh 14:39, 24 October 2006 (UTC)
OK -- done. Let the games begin. -- QTJ 18:21, 24 October 2006 (UTC)
Tag -- you're it. -- QTJ 05:51, 25 October 2006 (UTC)
After the next wave, time to put in some reliable sources. -- QTJ 05:54, 25 October 2006 (UTC)
I put in some first references. -- QTJ 17:56, 26 October 2006 (UTC)

request clarification

The following construction appears a few times under generative grammar:

   All languages generated by a _____ can be recognized in ______ time by _______

(with appropriate values for the blanks). The request for clarification is this: just what is meant by "can be recognized"? The article could benefit from a bit more explanation of what it means for a parser to 'recognize' a language, or what parsing and recognizing actually is ... to someone familiar with the linguistics, but not the computing. This need for clarification does not apply to the notion of parser itself, nor the notion of Big O notation, since those already have articles, but just to the notion of can be recognized by. drefty.mac 10:31, 28 October 2006 (UTC)

How about:
... all strings in any context-free language generated by a context-free 
grammar can be recognized as belonging to that language (or rejected 
as not belonging to that language) in at least ...

-- QTJ 15:37, 28 October 2006 (UTC)

This works well IMHO. The primary thought is that it may be (stylistically) cumbersome to replace all the constructions with the longer version, so just add a little to the introduction,

such as:

   ... The language of the grammar is the set of all the strings
   that can be generated using this process. ... (diagram) ...
   Any string is said to be recognized if it can be
   conclusively rejected or accepted as a member of the set.
Rationale: 1) this presents 'recognized string' as a technical property; 2) this introduces the fact that not every string is 'recognized' (and thus not necessarily even 'recognizable'; cf. halting, incompleteness); 3) this allows us to talk about how this property applies to languages (is it just for context-free languages?); 4) this allows us to talk about whether this property applies to strings of both generative and analytic grammars. Points 3) and 4) may admittedly be irrelevant or obvious to an expert, but the English connotations of the words "parse", "generate", "recognize", and "analyze" (analytic) have high potential for confusion, especially when contrasted with the technical definitions. This kind of clarification, I think, dovetails with the (more substantial) improvements in clarity and accessibility already done for this article. drefty.mac 17:28, 29 October 2006 (UTC)
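The accept/reject sense of "recognized" discussed above can be illustrated with a hypothetical Python sketch (not code from the article): a recognizer for a language is a total decision procedure that conclusively accepts members and rejects everything else.

```python
# A recognizer for the context-free language {a^n b^n | n > 0}:
# it conclusively accepts members and conclusively rejects all
# other strings, which is what "recognized" means here.
def recognize_anbn(s):
    n = len(s) // 2
    return len(s) >= 2 and len(s) % 2 == 0 and s == "a" * n + "b" * n

print(recognize_anbn("aabb"))  # accepted: in the language
print(recognize_anbn("abab"))  # rejected: not in the language
print(recognize_anbn(""))      # rejected: n > 0 is required
```

For this simple language the procedure always halts, so every string is recognized in the technical sense; the subtlety raised in point 2) only bites for more powerful grammar classes.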
An article like this (core concept: formal grammars, as opposed to specializations of the concept such as context-free grammar) really might benefit from a "Terminology" section. The language of grammar theory is very precise and quirky: "closed under", and even the word language itself: "This grammar generates the language", "This grammar generates strings", "This parser accepts strings in the language." How far one wishes to get into these subtle (but important) uses of words, which carry different connotations outside of grammar and formal language theory, is really a function of the intended readership one wishes to target. Educated layperson with set theory already understood? Et cetera. To my eye, for instance:

{a^n b^n c^n | n ≥ 1}

is absolutely the best way to define the canonical context-sensitive language. Words such as "iff" rather than "if and only if", ad nauseam. But I do remember when such constructions were utterly intimidating, so I respect that such things must be neutered quite a bit. -- QTJ 18:16, 29 October 2006 (UTC)
This is neither here nor there, but I don't think {a^n b^n c^n | n ≥ 1} is a context-sensitive language, actually. (I know "context-sensitive" sounds like the opposite of "context-free", but actually it's a weaker term: all context-free languages are context-sensitive, but not all context-sensitive languages are context-free.) Ruakh 22:04, 29 October 2006 (UTC)
Actually it is not context-free, per proof by the pumping lemma. I know that such statements need Reliable Sources, so see: Sipser, Michael, Introduction to the Theory of Computation, PWS Publishing Company, pp. 117-118, 1997.

{a^n b^n c^n | n ≥ 1} is not context-free. (See Pumping lemma for context-free languages.)

Sipser uses the classical pumping-lemma proof mechanism there (on those pages). This particular language has been used throughout the literature as the canonical non-context-free language that is not unrestricted (thus, it is the canonical Type 1, i.e. context-sensitive, language). Its use as the canonical Type 1 language is nearly ubiquitous. Others have classed it into a family known as "mildly" context-sensitive, because while it (and two or so other small languages) are "context-sensitive" by definition, inclusion of them into the weaker family leads to other interesting results. (I could cite chapter and verse on this, but it's not part of this article -- it would be something like mildly context-sensitive languages, and is more specialized.) Cheers. -- QTJ 22:14, 29 October 2006 (UTC)
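Although {a^n b^n c^n | n ≥ 1} is not context-free, membership in it is still trivially decidable (as it is for every context-sensitive language). A hypothetical Python sketch, purely illustrative, makes the point that "beyond context-free" does not mean "hard to recognize":

```python
# {a^n b^n c^n | n >= 1} is not context-free (by the pumping lemma),
# but it is context-sensitive, and membership is easy to decide:
def recognize_anbncn(s):
    n = len(s) // 3
    return n >= 1 and s == "a" * n + "b" * n + "c" * n

print(recognize_anbncn("abc"))     # accepted
print(recognize_anbncn("aabbcc"))  # accepted
print(recognize_anbncn("aabbc"))   # rejected: counts don't match
```

What the pumping lemma rules out is generating this language with context-free productions, not deciding membership in it.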
For more on "mildly context-sensitive" see Tree adjoining grammar. -- QTJ 22:28, 29 October 2006 (UTC)

The "mildly context-sensitive languages" go towards this statement in the main article:

Many extensions and variations on Chomsky's original hierarchy of formal grammars have been developed more recently, both by linguists and by computer scientists, usually either in order to increase their expressive power or in order to make them easier to analyze or parse.

The above notion is citable to a reliable source (I'll remember which eventually). The concept, however, has led to many formalism offshoots. The general reasoning behind it is to come up with some subset of context-sensitivity that is not Turing complete (and thus remains formally decidable), while encasing enough observably "interesting" languages to merit the study involved. One shortcoming of pure Type 1 classification is that it lumps some esoteric languages in with some interesting but manageable contenders such as {a^n b^n c^n | n ≥ 1} (or genetic pseudoknots). The ideal is to get really, really close to covering all the languages needed to do interesting research on formal language, without entering the space of the halting problem. Examples of study in this direction are found in Boolean grammars, for instance. Moreover, it's not just about the power of a formalism, but the tractability of known open problems using formal systems. Some problems can be expressed more concisely with certain formalisms, and manipulated with more agility. For instance, the use of syntactic predicates often makes it easier to manipulate the grammars generating {a^n b^n c^n | n ≥ 1}: that is, it involves fewer steps or reduces the problem to the intersection of two type 3 (regular) languages. While many refuse to go all the way to Type 0 Turing completeness, considering accidentally stepping into that a disaster, others attempt to reach Type 0, but in an orderly and predictable fashion. Some of the adaptive grammar formalisms, for instance, are Turing powerful, but don't run away from it. That's enough lecturing. I don't know how much of the above applies to this article. Some may (and I can supply citations from reliable sources for that), some might not (so why bother)? -- QTJ 22:47, 29 October 2006 (UTC)

The item QTJ cites in blockquote is an example of the astute balance I think this article maintains between rigor, depth, and accessibility to non-technically-expert readers of the Wikipedia on this subject. It is a concise, sufficiently glib overview of an (IMHO) abstruse sub-concept that is subject to debate. It minimizes potential for confusion, but guides the more nuanced or expert reader to a point of departure for further independent research; in a way that maintains the credibility and authoritativeness of the article as a whole. This is precisely the type of balance that distinguishes adequate 'technical' articles from the truly excellent ones.
Therefore, if I may boldly advocate for the 'educated layman' demographic, please continue the astute development of this article with an introductory Terminology section, reflective of the 'core' focus of the material. Hats off to you contributing authors. drefty.mac 04:13, 30 October 2006 (UTC)