This page concerns a formal development of 1-D lattice models. Moved here from extensive discussions at User talk:Linas. The content here is not original research; it is a reformatted record of a conversation that I had with a younger Wikipedia editor, wherein I tried to explain certain concepts and ideas that are 'well-known' in the literature and were pertinent to a certain set of articles that were under discussion.

Intro


Consider the one-dimensional, two-state lattice model, with the lattice having a countable infinity of lattice positions. The set of all possible states of the lattice is the set of all possible (infinitely-long) strings in two letters. Please note that this set of strings is a very large set: it has $2^{\aleph_0}$ elements. The cardinality of this set is the cardinality of the continuum. The (classical) Hamiltonian is a function that assigns a real value to each possible element of this set. The Hamiltonian is not the only interesting function; one may consider  , and so on.

A minor problem arises: how should one label the points in this set? Each point is associated with an infinite string in two letters. This set is not enumerable, so one cannot label them with integers. Operations with respect to a label, such as summation or integration, are problematic: summation is defined only for countable sets, while integration has no rigorous definition for naive point sets.

In order to do statistical mechanics on this lattice, we will need to integrate over subsets of this set, or possibly integrate over all of the set. The problem now arises: how does one define an integral on this space? Since the points in the space are not countable, one cannot just say "oh, perform a sum", since a "sum" is, by definition, something that is defined only on countable sets. We need an analog of a sum that works on uncountable spaces. I am aware of only one such analog, and that is the machinery of measure theory. If there are other analogs, I do not know what they are.

However, the machinery of measure theory forces a radical shift in the language, notation and concepts used to discuss the problem. First of all, one can no longer employ the idea that the space consists of a set of points. There is a reason for this: not every subset of an uncountably-infinite set is "measurable". In fact, almost all subsets of an uncountably-infinite set are not measurable. So the very first step of measure theory is to throw away almost all subsets of the total space. One keeps only the measurable sets.

How does one do this? One needs to define a sigma algebra for this space. I am aware of several possible sigma-algebras for this space. I claim, without proof, that the algebra generated by the cylinder sets is the most "natural" for this problem. The reason for this will hopefully emerge later in the discussion. A homework problem might be to list as many different possible sigma algebras as possible for this space. A real bonus would be to prove that this list is a complete list.

In this new language, the Hamiltonian takes on a new and very different appearance. It is no longer a function from points to the reals, but a function from elements of the sigma algebra to the reals. Intuitively, this "new" Hamiltonian can be visualized or intuited as the "average" of the old, classical Hamiltonian, the average taken over a certain subset of points. However, this intuition is dangerous, and can lead to errors: in particular, one has the chicken-and-egg problem of how to define the "average" of the old, classical Hamiltonian. The new Hamiltonian is different, and cannot be constructed from the old Hamiltonian. However, there is a way to prove that the new Hamiltonian is a faithful representation of the old one. If one has chosen one's sigma algebra wisely, then for every possible point in the old set, there is a sequence or filter or net of elements of the sigma algebra that contain the old point, and the measure of the elements in this sequence decreases to zero.

Suppose that the elements of this net are denoted by   for integer n. The net is ordered so that   whenever  . The "new" Hamiltonian is some function that assigns a real value to each  . We can then say that the new and the old Hamiltonians are equal or equivalent when, for the "classical" point p,

 

and

 

and

 

where   is the classical energy of the point p to which the net is converging. Again, to be clear: the point p is a possible configuration of the lattice model; each and every point p corresponds to an infinite string in two letters, and vice-versa, in one-to-one correspondence. If a net can be found for all points p, and the above relationship holds for all points p, and it holds for all possible nets to the point p, then we can honestly and truthfully insist that the "new" and the "old" Hamiltonians "are the same".
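For concreteness, here is a minimal Python sketch of such a net (the symmetric Bernoulli weight of one-half per letter is an assumption, anticipating the measure constructed further below): nested cylinder sets that pin down more and more letters of a fixed configuration p, each containing p, with measures shrinking to zero.

```python
from itertools import islice

def point_p():
    """An example "classical point": the infinite string ABABAB..., as a generator."""
    while True:
        yield "A"
        yield "B"

def measure(prefix, x=0.5):
    """Measure of the cylinder set that pins down the first len(prefix) letters,
    under an (assumed) Bernoulli measure: weight x per letter A, 1-x per letter B."""
    nA = prefix.count("A")
    return x**nA * (1.0 - x)**(len(prefix) - nA)

# The net C_1 > C_2 > C_3 > ...: each C_n pins down one more letter of p,
# so each element contains p, and the measures decrease to zero.
p_letters = "".join(islice(point_p(), 10))
for n in range(1, 11):
    prefix = p_letters[:n]
    print(n, prefix, measure(prefix))   # 0.5, 0.25, 0.125, ... -> 0
```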

The above sets up a rigorous and mathematically precise vocabulary for the further discussion of the problem. Let me know if you have any questions, or if anything was unclear. In the meanwhile, I will contemplate how to present the next steps. linas 21:32, 3 September 2006 (UTC)

Dear Linas. Yes, I think I am with you so far---very clear. (I was just going to add that the above should be true for all such nets to the point p, but then you got there first.)

The sigma algebra


OK, next installment: the explicit construction of the sigma algebra. Let the positions on the lattice be labelled by an integer n. The elements of the subbase of the sigma algebra can be completely enumerated by ordered pairs (n,s) where s is a string in two letters of finite length k, for integer  . Visualize the pair (n,s) as the set of all states where the lattice values between location n and n+k-1 are equal to the string s. Use   to denote an element of this sub-base.

The subbase of a topology is a collection of sets, which, by intersection and union, generate the rest of the topology. A brief review of intersection and union is in order. The intersection   is the set of all configurations that match s at n AND match t at m. The union   is as above, with OR taking the place of AND.

Let the two letters be "A" and "B". Then for example,

 

and

 

and

 

with   representing the entire space. One noteworthy aspect of this topology is the somewhat "backwards" relation between the string labels and the sets, in that when two strings overlap, it is likely that the intersection of the corresponding sets will be empty, whereas when the strings don't overlap, the intersection will never be empty. Thus, for example,

 

whenever  . (I believe that this property will be responsible for some of the "fractalishness", to be defined and discussed later.)
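A small Python sketch (the helper names here are mine, not notation from above) of the subbase elements (n, s) viewed as finite constraints on configurations; it illustrates the "backwards" behaviour just described: overlapping windows that disagree give the empty set, while non-overlapping windows never do.

```python
def cylinder(n, s):
    """The subbase element (n, s): all configurations whose letters at positions
    n, n+1, ..., n+len(s)-1 equal the string s. Represented as {position: letter}."""
    return {n + i: c for i, c in enumerate(s)}

def intersect(c1, c2):
    """Intersection of two cylinders: None if they force conflicting letters
    at some shared position (the empty set), else the merged constraint."""
    merged = dict(c1)
    for pos, letter in c2.items():
        if pos in merged and merged[pos] != letter:
            return None          # overlapping windows disagree -> empty set
        merged[pos] = letter
    return merged

# Overlapping windows usually clash:
print(intersect(cylinder(0, "AB"), cylinder(1, "AA")))   # None: position 1 must be both B and A
# Non-overlapping windows never clash:
print(intersect(cylinder(0, "AB"), cylinder(5, "BA")))   # merged constraint, non-empty
```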

OK, with you so far. This topology seems pretty natural.
At some point, we should list all other possible topologies.
Agreed.
The dyadic map maps these to the real numbers. Thus, the natural topology (the topology of open sets) on the reals can be pulled back. There are other topologies on the reals, e.g. all closed sets, or the set of half-open, half-closed sets. Both of these last two are finer than the natural topology, and thus make just about anything discontinuous, and have other things that make them unhappy, on the reals. Pulled back, they could be interesting.
Hmm. Insane thought: filter -> dual is ideal (order theory) -> is there such a thing as Zariski topology for order theory?

The measure


The measure is a function $\mu$ that assigns to each element of the sigma algebra a real, non-negative value. We'll normalize the measure such that

$\mu(\Omega) = 1$

and

$\mu(\varnothing) = 0$

The requirement of sigma additivity implies that all other values will be less than one. A measure will be said to be translation invariant if

$\mu(n,s) = \mu(m,s)$

for all integers m and n. We can use sigma additivity to construct a collection of translation invariant measures as follows. Let

$\mu(n,\mathrm{A}) = x$

for some $0 \le x \le 1$. Then sigma-additivity requires that

$\mu(n,\mathrm{B}) = 1 - x$

which follows from the fact that the union of these two disjoint sets is the entire space. One may readily deduce that

$\mu(n,s) = x^{\#A}\,(1-x)^{\#B}$

where $\#A$ is the number of times the letter 'A' occurs in the string s, and likewise for $\#B$. In most physics applications, the canonical and symmetric choice would be $x = 1/2$, but it should be clear that this is not mathematically constrained. One might even be able to make physical arguments to have x be something other than one-half, if, for example (and maybe a bad example), some external force is causing there to be more spins pointing in one direction than the other, on average.
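A short Python sketch of this family of measures; note that the value depends only on the letter counts in s and not on the position n, which is exactly the translation invariance.

```python
def mu(s, x=0.5):
    """Measure of the cylinder labelled (n, s): x to the power (#A in s)
    times (1-x) to the power (#B in s). It does not depend on n,
    so this measure is translation invariant."""
    nA = s.count("A")
    nB = len(s) - nA
    return x**nA * (1.0 - x)**nB

print(mu("A"), mu("B"))            # x and 1-x; they sum to 1
print(mu("AAB", x=0.3))            # 0.3**2 * 0.7
```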

Let's pursue this idea just a bit further. Let

 

be the set of all strings of length k. Then,   one has  , that is, this set of strings defines sets that are pair-wise disjoint. One has

 

and, from sigma additivity, we deduce

$\sum_{j=0}^{k} \binom{k}{j}\, x^{j} (1-x)^{k-j} = 1.$

The appearance of the binomial coefficient is another hint that there might be something fractal around the corner. The binomial coefficient has many relations to fractals; for example, if one considers their values modulo a prime p, then Pascal's triangle takes the form of the finite-difference version of Sierpinski's gasket. Sums over rows, i.e.  , generate the Batrachion curve, which is the finite-difference version of the Takagi curve -- which has a fractal, dyadic symmetry. Yes, this is a stretch at this point in the game, but the powerful p-adic fractal nature of the binomial coefficient should not be overlooked or under-estimated in its importance.
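A quick numerical check of the sigma-additivity statement, sketched in Python: the 2^k cylinders of length k are pairwise disjoint, their measures sum to one, and grouping them by the number of A's produces exactly the binomial coefficients.

```python
from itertools import product
from math import comb

def mu(s, x=0.3):
    nA = s.count("A")
    return x**nA * (1.0 - x)**(len(s) - nA)

k, x = 5, 0.3
strings = ["".join(t) for t in product("AB", repeat=k)]
print(sum(mu(s, x) for s in strings))        # 1.0: the length-k cylinders cover the whole space
for j in range(k + 1):
    count = sum(1 for s in strings if s.count("A") == j)
    print(j, count, comb(k, j))              # the number of strings with j A's is C(k, j)
```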

OK. Well we will see if it crops up later.

Other translation-invariant measures


The partition function, to be constructed later, will be seen to be another translation-invariant measure, not taking the form above.

Non-translation-invariant measures


For completeness, here's an example of a non-translation-invariant measure. One may assign

 

for some arbitrary sequence  . One then has

 

To get the measure on the remaining elements of the subbase, one uses a multiplicative construction, requiring that

 

and

 

I believe this measure is fully self-consistent. I think more general measures are possible, combining the above with partition-function-like measures.
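Here is one concrete instance of such a multiplicative construction, sketched in Python; the particular site-dependent sequence is a hypothetical choice, made only to break translation invariance, and is not meant to reproduce the exact formulas discussed above.

```python
def site_prob(n):
    """A site-dependent probability of seeing the letter A at position n
    (a hypothetical choice, used only to break translation invariance)."""
    return 0.5 + 0.4 * ((-1) ** n) / (abs(n) + 2)

def mu(n, s):
    """Multiplicative measure of the cylinder (n, s): each position contributes
    its own factor, so the value depends on where the window sits.
    Sigma-additivity holds since the two weights at each site sum to 1."""
    value = 1.0
    for i, letter in enumerate(s):
        p = site_prob(n + i)
        value *= p if letter == "A" else (1.0 - p)
    return value

print(mu(0, "AB"), mu(7, "AB"))    # different values: not translation invariant
```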

Some general remarks


It should be noted that, up to this point, the theory developed so far is more or less isomorphic to that of a certain class of Markov chains, and also to that of subshifts of finite type, and, to a lesser degree but more generally, to that of measure-preserving dynamical systems. I don't want to pursue these relations just right now.

FWIW, having a measure is more or less sufficient for defining entropy, for example, the Kolmogorov-Sinai entropy. If one has a metric, one may also have a topological entropy. These are all related to the entropies that can be given on Markov chains and in information theory and of course stat mech in general; the problem is to a large degree a problem of wildly varying notation. Perhaps we'll explore these later.

We'll also have to do the partition function.

Entropy


Let $Q = \{Q_1, \ldots, Q_k\}$ be a partition of $\Omega$ into k measurable pair-wise disjoint pieces. The information entropy of a partition Q is defined as

$H(Q) = -\sum_{i=1}^{k} \mu(Q_i)\,\log \mu(Q_i)$

I believe that the correct definition of the "measure-theoretic entropy" is then

 

where the supremum is taken over all finite measurable partitions. I'm not sure, I'm guessing that

 

or something like that, for the translation-invariant measures given above, although I'd have to do some thinking to verify that this is correct. Anyway, it gives a general idea.
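A sketch of the partition entropy for the concrete partition of the space into the 2^k cylinders of length k, under the Bernoulli-type measure above; for x = 1/2 it comes out to k log 2, i.e. log 2 per site.

```python
from itertools import product
from math import log

def mu(s, x=0.5):
    nA = s.count("A")
    return x**nA * (1.0 - x)**(len(s) - nA)

def partition_entropy(k, x=0.5):
    """H(Q) = -sum_i mu(Q_i) log mu(Q_i), where Q is the partition of the
    space into the 2**k pairwise-disjoint cylinders of length k."""
    total = 0.0
    for t in product("AB", repeat=k):
        m = mu("".join(t), x)
        if m > 0.0:
            total -= m * log(m)
    return total

for k in (1, 2, 3, 4):
    print(k, partition_entropy(k))          # log 2, 2 log 2, 3 log 2, ...
```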

The metric


Punting for now, until we need it.

OK. I don't really want to preempt your next installment, but I do have two comments so far.
(1) I guess you will explain precisely where the metric comes into the definition of the ising model, so I won't preempt that discussion at all.
(2) You haven't yet defined the p-adic metric, so I should probably wait. But...suppose I have two strings, s^1 and s^2, defined by  , where  , and i=1,2. I think by the dyadic map we both mean an assignment of the numbers
  to these strings. Will the dyadic metric be something like
 ?
If not, please do ignore me and continue! If so, my comment is that this is not quite the same as looking at how many letters match up, and how many letters differ, which goes back to what I said way above: {...001} is near to {...000}, but {100...} is *not* near to {000...}. Moreover, all the states like {011110} will be nearer to {000000}, the reason for this discrepancy being the weighting of   in the definition of  .
I suppose in the finite N case, one might prefer the metric
 , or even
 , if one doesn't care about the ordering of the spins. I can see there might be problems with this in the limit  , so perhaps you are about to convince me that the p-adic metric is the best one can do (as far as looking at how many letters match up), in the infinite limit. Or perhaps I have jumped in too early, and the above is not where you are aiming for.

UPDATE Ah, I can see I may have conflated some different ideas above---at least I've reminded myself what the p-adic norm is. Well, if the comments turn out to be relevant to what you want to say, all very well. Otherwise, I will let you carry on with the exposition before trying to second-guess you. --Jpod2 16:38, 4 September 2006 (UTC)

The Hamiltonian


Next step: write down the value of the Ising model Hamiltonian for the subbase elements  . The classical Hamiltonian for the Ising model is

 

Here, the sum over i is a sum over all lattice positions i. At each lattice position, there is a spin   having a value of +1 or -1. The nearest neighbor interaction energy is J, with sign such that when spins are aligned, the energy is lower. The magnetic field is B. The argument s is understood to be a bi-infinitely long string in two letters. Let letter A be   and B be  .
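As a finite-N illustration (a sketch only; the identification A ↦ +1, B ↦ −1 and the open boundary are assumptions, since only the prose description above is given), here is a direct transcription of the classical energy for a finite chain of spins.

```python
def spins(s):
    """Map the letters to spins; assumed convention A -> +1, B -> -1."""
    return [+1 if c == "A" else -1 for c in s]

def classical_H(s, J=1.0, B=0.0):
    """Ising energy of a finite open chain written as the string s:
    -J * (sum of nearest-neighbour products) - B * (sum of spins).
    With J > 0, aligned neighbours lower the energy, as stated above."""
    sigma = spins(s)
    pair_term = sum(sigma[i] * sigma[i + 1] for i in range(len(sigma) - 1))
    field_term = sum(sigma)
    return -J * pair_term - B * field_term

print(classical_H("AAAA"), classical_H("ABAB"))   # -3.0 vs +3.0 for J=1, B=0
```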

The quantum Hamiltonian will be a set of values defined for the subbase elements  . I'm calling this the "quantum Hamiltonian" as it is meant to be a mathematically rigorous formulation of a functional integral or Feynman path integral. (It differs from the Feynman path integral by not being weighted by a factor of $e^{-\beta H}$. Later on it will be seen that this weighting factor is essentially the partition function, and can be folded into the measure (because it is sigma-additive and obeys the other axioms of a measure)).

For notational convenience, let

 

to avoid typing so many parentheses.

The first important property of the quantum Hamiltonian will be translation invariance. That is, one will have

$H(n,s) = H(m,s)$

for all integers m, n. It may sound trivial, but this is important: the translational invariance already fulfills one aspect of what it means to be "fractal" or "self-similar": when a function "over here" looks like the function "over there". Now, an objection might be that a sine wave is self-similar in this sense, and no-one calls sine-waves fractal. To get the remaining aspects of self-similarity, we also need scaling as length scales are changed.

Without further ado:

 

The total space is normalized to zero energy.

 
 

and

 
 
 
 

and

 
 
 
 
 
 
 
 

The rules for creating this list are simple enough. Define

  = number of aligned pairs minus number of opposite pairs

where, by "pairs" I mean "pairs of nearest neighbors". More formally, let V be the function that looks only at the letter in position zero, and the letter in position one, and returns +1 if they are the same, and -1 if they are different. Let   be the shift operator on the lattice, which takes (n,s) as an argument, and returns (n-1,s). Then

 

Let #A and #B be the number of letters A and B in the string, as before. Then one has

 

It is presumably not hard to see that as the string gets longer and longer, this approaches the classical Hamiltonian, thus fulfilling one of the requirements that the classical and quantum Hamiltonians correspond.
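One plausible transcription of the rule just stated, sketched in Python (an assumption on my part: the normalization constant used above is not reproduced here, and the letter-to-spin convention is the same assumed one as before). It is translation invariant, since it depends only on the string s and not on the position n.

```python
def pair_count(s):
    """Number of aligned nearest-neighbour pairs minus number of opposite pairs
    within the finite window s."""
    aligned = sum(1 for a, b in zip(s, s[1:]) if a == b)
    return aligned - (len(s) - 1 - aligned)

def cylinder_H(s, J=1.0, B=0.0):
    """One plausible reading of the Hamiltonian assigned to the subbase element
    (n, s): the pair count plus the magnetic-field term built from #A and #B."""
    nA = s.count("A")
    nB = len(s) - nA
    return -J * pair_count(s) - B * (nA - nB)

for s in ("A", "AA", "AAB", "AABB", "AABBA"):
    print(s, cylinder_H(s))
```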

The above uses a normalization of

 

so that the Hamiltonian here is consistent with what's in the intro above. It has the nice property of making the energy smaller when the set is smaller.

(I notice here that the classical energy will be infinite unless there are equal numbers of A's and B's in the infinite string, and equal numbers of transitions. That's bad: and worse, there are conceptual difficulties with the question: what does it mean to have an "equal number of transitions"? I think a strong case could be made that, for the infinite lattice, the "classical Hamiltonian" cannot be rigorously defined, except as the limit of the quantum Hamiltonian.)

Anyway, I note that, by increasing the length of the string by one letter, one has the identity:

 

I notice that this relation accounts for some of what I've been thinking of as being "fractal" in the back of my mind; I won't go into why it seems this way just yet.

Functional integrals


The "whole point" of going through this exercise in measure theory and what not is to place the physicists notion of a functional integral or Feynman path integral on firm footing. The correct way to think of this "quantum Hamiltonian" is as (for the example of the string s=ABB):

 

There is a problem with posing such infinite sums, though, and the problem arises precisely because such sums are ill-defined. Sometimes, one can get lucky and work with them directly, but more usually, one can get caught in a trap. The whole point of taking the route of measure theory is to avoid having to deal with such constructions.

To make it clear that this is indeed the familiar path integral of QFT, imagine replacing the two-state Ising model with an arbitrary-valued scalar field. Then the above infinite product of sums becomes the familiar-looking

 

where   is some non-quantized function, for example,

 

which should be familiar from QFT. There are several mathematical problems with the physicist's notation. First, it gives the appearance of integrating over nonsense sets, such as the Vitali set, and the kinds of sets that allow the Banach-Tarski paradox to happen (which you should review, as it's a marvelously simple game with two letters.) The second problem is that physicists sometimes write crap like

 

implying that F is some operator whose trace can be taken, when what they're really trying to say is that the volume form is like a determinant. The yucko-ness of this is that operators imply that there is some Hilbert space, and some basis for that space, which has, uh-oh, the cardinality of the continuum, and thus cannot even be labelled by an integer index. Oops. Never mind that in order to take the trace of something, it must be trace-class, which the operators typically found in QFT almost never are. Don't get me wrong, physicists achieve marvelous things, and most of what they do can be made rigorous, but sometimes it's nice to actually know what's going on instead of running on fumes and intuition.

I shouldn't be too strident: the trick that is employed by physicists to keep this stuff sane, at least for stat mech, is to multiply the integrand by $e^{-\beta H}$ which helps kill some of the nasty infinite energy states. We can do that here too, but don't need to do so a priori. It can be shown, in a rigorous fashion, that the $e^{-\beta H}$ factor can be understood to be a measure, in that it obeys the axioms of a measure, and thus using it to integrate with helps keep things finite, and helps keep some of the nasty business on a short leash. The   can play a similar role, but it is more ambiguous.
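A finite-N sketch of the trick being described: weighting each configuration by e^{-βH} and normalizing by the partition function yields non-negative weights that sum to one, i.e. they behave like a measure on the length-N cylinders (the finite-chain energy is the same assumed transcription as before).

```python
from itertools import product
from math import exp

def classical_H(s, J=1.0, B=0.0):
    sigma = [+1 if c == "A" else -1 for c in s]
    return -J * sum(a * b for a, b in zip(sigma, sigma[1:])) - B * sum(sigma)

def gibbs_weights(N, beta=1.0, J=1.0, B=0.0):
    """Partition function Z and normalized Boltzmann weights for a finite chain:
    the weights are non-negative and sum to 1, so they define a probability
    measure on the 2**N cylinder sets of length N."""
    configs = ["".join(t) for t in product("AB", repeat=N)]
    w = {s: exp(-beta * classical_H(s, J, B)) for s in configs}
    Z = sum(w.values())
    return Z, {s: v / Z for s, v in w.items()}

Z, p = gibbs_weights(4, beta=0.7)
print(Z, sum(p.values()))          # the normalized weights sum to 1
```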

Fractalness


Anyway, (1) I have to go to work, and (2) I'll have to think a bit on how to best present a case for scaling in this example. I'll hand-wave a bit now: as one changes scale on the lattice, one wants to compare energies to those of similar-but-averaged lattice configs. I've never attempted to do this before, and have only the vaguest memory of having heard of such a thing in some lecture -- and so may have to stumble about a bit. I hope it works out :-) I also encourage you to try making a scaling argument on your own.

Commentary

"I have to go to work,"
I'm glad one of us does something useful! :)
Heh.


I'm not going to write much here, but I've read and I think understood what you are doing so far. I'm not sure where the p-adic metric will come in, yet, but I will bite my tongue until you've completed that part of the programme. All the best--Jpod2 19:55, 4 September 2006 (UTC)
Well, there may be irrelevancies :-) I'm not sure why I wrote about the measure. Other than to illustrate that, on general principles, this topology is measurable. It may still come in handy.
Hi. Well, it all seems quite clear so far, it's a good exposition. I need to think a bit more about it, but for now...
(1) I think the terminology `quantum hamiltonian' is maybe a bit confusing. For me, your hamiltonian is best thought of as the appropriate classical hamiltonian in the infinite-N limit. There may be some subtleties, but isn't it quite analogous to field theory? Where one has the hamiltonian
 
formed from the hamiltonian density  . I realise I'm not formulating that too rigorously, but it seems like an appropriate analogy, with your `classical hamiltonian'   and your `quantum hamiltonian' H(n,s) being something like  . (Hmmm, i've just edited this, and the analogy maybe isn't quite right, but anyway.)
Conversely, I would use the term quantum hamiltonian either to mean the operator on some appropriate hilbert space in the quantum theory, or else the vacuum expectation value of this operator i.e. something like
 ,
where the integral is over the space of states.

Oh gosh, it's supposed to be a mathematically rigorous version of a Feynman path integral or functional integral; that's the whole point. I added a section above to make this clear.

No, no, you're right. I did realise that the right way to think about your `quantum hamiltonian' was to integrate over all the non-fixed elements of the infinite string. But I guess physicists would call H(\Omega) the vacuum expectation value of the quantum hamiltonian. But does your H(n,s) correspond to an expectation value of the quantum hamiltonian in some state? I have to think about it, perhaps it does.
But let's not get hung up on this, as I think it's terminological. More interesting discussion below...--Jpod2 08:06, 6 September 2006 (UTC)
Anyway, it might be just an issue of terminology---but I thought it better to be explicit in case I am misunderstanding something in what you mean.
(2) on the translation inv/scaling/self-similarity. I agree that requiring translation invariance is appropriate, but i'm not sure this will lead us to fractal-ness. Remember the discussion we had over on the scale invariance pages. We were considering functions satisfying:
 
either for all dilatations,   (scale-invariant functions), or else a discrete subset (self-similar functions). In both cases there are as many continuous functions as you like satisfying this kind of condition, but which are *not* fractal.
I guess I'm saying I wouldn't be surprised if there is some form of self-similarity/scaling, for sure (e.g. the blocking transformation of the real space RG), but whether actual fractals (e.g. the ?-mark function) appear would be the interesting point. We will see...one other comment is that I think the sections above where you comment on self-similarity, the arguments all hold in some form in the finite-N case, don't they? But perhaps the conclusions will be different in the infinite-N case, I'm not sure.

Yes, by scaling, I do mean functions that satisfy  . However, as you may recall, the difference between a parabola, which scales like this for  , and a "true fractal" is that the fractal scales like this "in many places", and not just the origin. For the ?-mark function, pick a rational, any rational, and I can show you a section of the curve, starting at that rational, that is an exact rescaling of the whole thing (in fact, I can show you infinitely many such intervals).

Yes, what I had in mind was explicitly to demonstrate the blocking transformation, I was hoping this wouldn't be too hard, and then use that to make the claim "ah ha, that means it must be a fractal". Given your comment, I suspect the conversation may need to resolve the question of "what is a fractal", especially when the topology is not the natural topology of Euclidean spaces.

Well, I hope it's not just my comment that has made you suspect this! I guess it is clear that self-similarity \nRightarrow fractal-ness.
Anyway, making the blocking transformation more rigorous is worthwhile, but if the RG flow of the Ising model demonstrates what you were thinking about scaling, then maybe we can think about also about familiar results. Do the known properties of the Ising model RG flow coincide with the points you want to make about scaling/self-similarity?
I don't know. I'm doing this "cold", I don't have some ref I'm peeking at, so I'm not sure what the outcome will be.
There is certainly scaling and scale invariance (in certain senses), but (as always, it seems!) I am not sure there is fractal-ness. Perhaps the main statement there is that the partition function is invariant under the blocking transformation/rescaling, with an appropriate rescaling of couplings. At a fixed point there are further statements one can make. However, I am not quite sure of the precise idea you want to develop, so I will not try to second-guess you.
I don't exactly have a "precise idea"; rather I'm following my nose. By intuition, yes, things would be most "fractal" at the fixed point. What I'm looking for is not just randomness, but an overall symmetry pattern. Thus, for example, the list of H values above has a readily appearant pattern. What is that pattern? How can that pattern be characterized? Is there an underlying monoid that describes the pattern? Can we use something analogous to Noether's theorem to deduce something like: "pattern moniod"-->"quantity X scales like Y?" or "quantity X is a conserved current?" I know an answer to the first few questions, but I don't know the answers to the last few questions. I really think there is some conservation-law like thing, but I don't know what it is.linas 14:38, 6 September 2006 (UTC)
Where does the p-adic norm come into all of this? and the original discussion on your userpage, about the dyadic mapping? It was the latter I thought was artificial, I was always expecting scaling of some sort to appear. I did rewrite the scale invariance page, after all:) --Jpod2 08:38, 6 September 2006 (UTC)
I assumed that the metric would be needed to argue that two states are "near" to each other. May still need it, but clearly haven't yet. I've attempted to stay away from the dyadic mapping, since it seems slightly "poisonous". However, if you are willing to keep an open mind with a grain of salt, then try graphing the sequence of H's given above. Alternately, I'll try to prepare some graphs next week, since I know what I'm looking for there, and what the relevant features are.linas 14:38, 6 September 2006 (UTC)
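For concreteness, one way of carrying out the graphing suggested here (a sketch only; the dyadic convention A ↦ 0, B ↦ 1 and the finite-chain energy are assumptions made for illustration): map each length-k string to a point of [0,1] by its dyadic expansion and plot the energy against that coordinate.

```python
from itertools import product

def dyadic(s):
    """Dyadic map: read the string as a binary fraction, A -> 0, B -> 1."""
    return sum((1 if c == "B" else 0) * 2.0**(-(i + 1)) for i, c in enumerate(s))

def H(s, J=1.0, B=0.0):
    sigma = [+1 if c == "A" else -1 for c in s]
    return -J * sum(a * b for a, b in zip(sigma, sigma[1:])) - B * sum(sigma)

k = 10
pts = sorted((dyadic("".join(t)), H("".join(t))) for t in product("AB", repeat=k))
print(pts[:4])
# Plotting (x, H) for these 2**k points is the graph under discussion; e.g. with matplotlib:
# import matplotlib.pyplot as plt; plt.plot(*zip(*pts), "."); plt.show()
```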
I am willing to keep an open mind. But graph them as a function of what? Any graphing would seem to depend on a choice of ordering of the states---i.e. what states are near each other. Which brings us back to choosing a mapping from the lattice to R, or (pretty much) equivalently choosing a metric on the states. I don't think the p-adic norm is even the most natural one (in that it doesn't quite just compare number of spins up and down). But, as you know, I have always seen this step as artificial, *whichever* mapping one chooses---simply because it is a choice that the ising model doesn't dictate.
Anyway, we've said all that before. Back to the blocking. I realise you are developing this exposition as we go along, but I guess you have seen a (physicist's) derivation of the RG flow of the 1d Ising model? I.e. the blocking transformation and the necessary rescaling of B, T, J. But these couplings are not fractals as a function of lattice spacing, AFAIK.
Maybe 20 years ago, a dim memory at best. I do not expect any physical observable to be a fractal function of any other physical observable. Even when a system has a phase transition, the parameter dependences are smooth. Even critical opalescence is visually more or less smooth. You only see something "random" if you examine a very small region of physical space, and even then, it's "random", not fractal.
Right. I just think that *some* of the scaling properties you are getting at above (relating to blocking transformations) may be already well-known, and not fractal.
Of course, whether treating this rigorously brings something else to light is a good question. But, I'm still not sure what your intuition is about what will turn out to be fractal.
(Did you read my further comments Re: quantum/classical hamiltonian above? It's getting a little confusing on this thread. Anyway, I'm still not sure that `quantum hamiltonian' would be the usual terminology for H(n,s), but it's probably not important.)
I don't see any "further comments"... Perhaps it should be called the "second-quantized hamiltonian"? If you just want some hilbert space, any hilbert space at all, may I remind you that the   make excellent basis vectors. linas 23:56, 6 September 2006 (UTC)
I just meant the above: "I did realise that the right way to think about your `quantum hamiltonian' was to integrate over all the non-fixed elements of the infinite string. Physicists would call H(\Omega) the vacuum expectation value of the quantum hamiltonian. But does your H(n,s) correspond to an expectation value of the quantum hamiltonian in some state?"
I just mean that given the classical ising model, there is usually a set of operators physicists (perhaps unrigorously) have in mind when they think about the `quantum' theory. I'm not sure what quantity your H(n,s) might correspond in the quantum theory of the ising model. But anyway, whether we call it the quantum hamiltonian isn't important. I don't think the terminology is important to your scaling arguments.
Which brings me back to...I'm still not sure with the graphing of the sequence of H(s) above how one can avoid a choice of ordering of states, and so it seems (unless I am completely missing your point) that we are back to a problem very similar to the point I first mentioned.
I completely agree that seeing fractal behaviour in H(s) which depends on the choice of this ordering is interesting in its own right, but I'm still unclear what it can ever tell us about the ising model (or more generally quantum field theory).
Did you mean on my user page that you want to continue the discussion over email? All the best--Jpod2 08:56, 7 September 2006 (UTC)

More commentary


No, I wanted to say that I was out of town and wouldn't be able to respond, and did not want to say this publicly. Anyway, I'm back. FWIW, I am now reviewing/re-reading the book I'd mentioned previously, and yes, it's chapter 7, by Dieter H Mayer, which contains the good stuff. It's stated far more elegantly than what I say above, and the focus is quite different (it's less topological), but the claims are far stronger than what I've been making. He credits Ruelle, Sinai, Bowen, others for taking the transfer-matrix method for solving 1-D lattice spin models w/ finite-range interactions and extending it to infinite-range forces, and the transfer operator. He suggests a better understanding of the 1D case may lead to results on 2D and 3D lattice spin systems. He then develops the isomorphism between the 1-D spin systems in full generality and subshifts of finite type in full generality. Claims the transfer operator is generally applicable to Axiom A systems. Claims that Selberg's trace formula can be used to give a complete solution to the quantum mechanics of free particles on any surface of constant negative curvature (e.g. Hadamard's billiards), i.e. the transfer operator tells you how to get the spectra of the laplacian on the surface. (But he does not develop any of these last claims; he just gives refs to them.) He notes that for dynamics on the Poincare disk, the transfer operator is the Gauss-Kuzmin-Wirsing operator, which is the transfer matrix for the Kac model. The Kac model is just the Ising model with an inf-range force   where   is the spin at lattice location n. The point being that this all has an exact solution, dating back to Gauss, who discovered the thing while studying continued fractions. Anyway, his treatment is fairly abstract, but very educational, and more elegant than what I can say.

Anyway, what I get out of it is a glimpse of some deep connection between number theory, chaos, K-theory and the spectral theory of automorphic forms, a connection which can be explicitly demonstrated for the very special case of Riemann surfaces of const negative curvature. Unfortunately, I can at best plod along ("...what I lack in ability, I make up for in denial ... and all the girlies say I'm pretty fly (for a white guy)...") linas 01:10, 17 September 2006 (UTC)

Hi. Thanks for the reference and links. It sounds like there're some interesting connections, there. Is it possible to summarise precisely which objects in the lattice model look fractal, or are related to fractals? Say the Ising model?
In particular, does it have anything to do with graphing the hamiltonian, as I think you were suggesting on your userpage, and throughout this discussion? E.g. just above: "try graphing the sequence of H's given above".
If so, I don't really understand how your recent comments address my very last question above, on the ordering of the states, s, in this graph H(s). Which is in essence the same as my very first question, (3). But perhaps I am just missing your point. All the best--Jpod2 09:02, 17 September 2006 (UTC)
I can't answer your question until we first agree on a definition of "fractal". Hadamard's billiards is a good example: here is a smooth, infinitely-differentiable manifold, with smooth, infinitely differentiable geodesics. Yet these geodesics are clearly chaotic, and furthermore, "fractal". There are a variety of graphical ways of seeing this, whereupon it is visually apparent. The non-graphical way of seeing chaos is to use some abstract definition, e.g. to show something is ergodic or is mixing (mathematics). To show that a chaotic system is also fractal, one must also show that it is self-similar, with a self-similarity monoid that is the dyadic monoid (or more generally, any free monoid). For me, that seems enough. Does one need to show something more to demonstrate something is fractal?
I believe any one-D spin system can be easily shown to be ergodic, by recasting it as the corresponding subshift of finite type (i.e. more or less a discrete-time Markov chain), and noting that Markov chains and related ideas are generally at least ergodic, if not weak-mixing or strong mixing. As to the system having self-similarities, I think we can embed the action of the dyadic monoid in there. For me, the dyadic monoid is very special, since it's a subset of the modular group, and it is exactly this embedding (and nothing more/nothing else) that accounts for all the various overlaps between number theory and chaos.
I have a hypothesis (I have no idea if it's been clearly stated somewhere, or proved; perhaps by Gromov?) which I suspect is rigid (in the sense of rigidity theory): Any non-trivial action of any free monoid on any smooth finite-dimensional manifold is necessarily dense in some measurable subset of the manifold (with respect to the "natural" topology of the manifold). linas 15:29, 17 September 2006 (UTC)
Dear Linas. I thought it was clear from your userpage what sense of fractal you expected to be intrinsic to these lattice models; you claimed that the ?-mark function (or similar) would naturally appear. You still seemed to think that just above when you asked me to "graph the sequence of H's".
I have created the various graphs, and posted them at [1] (This pdf is poorly structured and more sophomoric than our discussion; I'm not sure who I'm writing for).
But AFAICT nothing in the lattice model chooses the choice of ordering of the states for you in this graph. Did you realise before I mentioned it that your construction had this apparent arbitrariness? (I.e. the choice of the dyadic mapping).
Since you believe that this mapping is "arbitrary" (it is not) I have attempted to stay away from the mapping so as to avoid confrontation. That is why I attempted to focus on the question of "what is a fractal" instead. At the same time, it is becoming clear to me that *any* mapping from any free monoid to any smooth finite-dimensional manifold will be inherently fractal, no matter how you map.
Anyway, I paid close attention to your rigorous construction above (which I think is quite nicely done) in an attempt to understand if this dyadic mapping could be seen to occur naturally in the infinite N limit. I haven't understood that yet, and I find it difficult to carry on the conversation, because I think either you don't understand my question, or I don't understand your answers.
I don't understand your question. The dyadic mapping occurs whenever one has a string in two letters. I sense that what you want to say is something along the lines of "but ah-ha, it's not *natural*, it's artificially imposed, it wouldn't occur in the model if I didn't force it into there".
Hi. Yes, I would say (and have said) exactly that. But perhaps without the `ah-ha' :)
But that would be wrong. It is wrong for two reasons: first, it is well known that strings in two letters occur generally in anything hyperbolic or chaotic (more generally, free groups and free monoids occur in anything chaotic). Secondly, the one-D lattice models are provably isomorphic to ergodic measure-preserving dynamical systems (and this can be shown without much effort). Ergodicity at its core is about the dyadic mapping, or more generally Diophantine-type equations. Perhaps your questions revolve about the notion of "naturalness" in mathematics? Or perhaps about why strings are "hyperbolic"? Or should we review ergodicity?
Not to labour the point, but however natural/useful/related-to-fractals-in-other-ways the dyadic mapping is, the fact remains that the ising model didn't dictate that I have to use it in this specific instance. I could have used some other mapping from the states to the real line, and graphed H as a function of that instead. If there was any robust prediction about a property of the ising model, it would surely have to be independent of this choice of mapping. What part of this paragraph do you disagree with?
Argh. I don't disagree with any of it. And that is why I stopped talking about the mapping: because you don't like it. Well, actually, there is something to disagree with: I'm not sure that there exists "some other mapping from the states to the real number line" that isn't isomorphic to the dyadic mapping. I challenge you to find such a mapping. In particular, I believe a mapping that sorts the states by increasing energy is isomorphic to the dyadic mapping. By "isomorphic" I mean "can be reached through rotations and reflections of the binary tree". Be forewarned: the set of rotations and reflections is very large. This is an off-the-cuff remark, and I haven't thought it through, but intuitively, I just cannot think of any other possible mappings. I'll try to think of how to prove this assertion, since I think it's true.
To me, the dyadic map seems clever the first time you see it---that states in the ising model can be easily mapped to binary numbers. But then you realise that you have made a choice in this particular mapping, and the choice doesn't seem intrinsically related to the ising model itself. It's clever because you don't realise you're making the choice as you do it, precisely because it *seems* so natural.
If you could give me a clear answer---do you still think the fractal functions you mentioned on your userpage are intrinsic to these lattice models? All the best--Jpod2 15:55, 17 September 2006 (UTC)
Yes, utterly. And I now realize, more deeply and importantly than I first imagined. And, upon review of the literature, it's clear that this is hardly something I've dreamed up; there are Fields medalists who have written on this. linas 22:52, 17 September 2006 (UTC)
Fair enough. I've tried, but I'm afraid I don't see this particular connection as anything intrinsic to these models, and therefore I am skeptical of styling this `secret fractal life of QFT'. But it's up to you, of course. You've alluded to many other connections with fractals, and I have no doubt that there are connections. But look---the key thing about something like SLE is that the fractal dimensions there are directly related to anomalous dimensions, i.e. properties of the lattice model. This tells me there is something inherently fractal there in a specific and quantifiable sense. In contrast, I doubt that the fractal function H(s) we have discussed can do the same thing---see my comments just above. I'd be very interested in your future investigations if you find such a connection, but until then can't think of anything else to say--Jpod2 23:16, 17 September 2006 (UTC)
OK. I frankly don't understand why you remain skeptical and resistant. I've tried to make it clear that there have been decades of work on this topic by thousands of people, and it has shaped everything from the renormalization group to materials science to string theory. I tried to make a clear presentation of textbook materials, which I think you understood. But still there is something that you don't like; but you can't verbalize it, and you'd rather imply that I'm somehow wrong and you are somehow right. I find this attitude quite frustrating. linas 03:15, 18 September 2006 (UTC)
Dear Linas. I believe I have verbalised it, and I have little doubt that other physicists/mathematicians understand the arbitrariness and artificiality to which I refer. However, I will try again.
I understand that you have at times tried to avoid mention of the dyadic mapping. But you didn't really "stop talking about it", because whenever I ask you to show me something specifically fractal, you always need to go back to that mapping, viz the graphs in your pdf above. Do you accept this? Other than that you have just fallen back to generalities about lattice models being fractal (which I don't doubt---it's always been the specifics of your claim about H(s) I've been questioning).
"I believe a mapping that sorts the states by increasing energy is isomorphic to the dyadic mapping."
Perhaps that's true, but then the graph of H(s) *no longer looks fractal*! Do we agree on that, at least? But if it's not important that H(s) even looks like a fractal, then what exactly *is* its important property? It clearly can't be a fractal dimension or anything similar. I.e. what property of the function H(s) actually tells you anything about the Ising model? And is what it tells you related to any specific fractal curve?
AFAICT from your original responses, you didn't realise you were implicitly using the dyadic mapping to define your fractal curve. But for some reason you won't acknowledge this, and as a result you have elevated the importance of the dyadic map to an axiom! Final questions from me:
(1) If I use a dyadic mapping from states to R, and then graph H(s), then which property of the curve H(s) yields which property of the original lattice model?
(2) If i use a mapping sorting the states by increasing energy, and then graph H(s), then which property of the curve H(s) yields which property of the original lattice model?
If there is an answer to (1) or (2) then I believe it must be the same, because the lattice model does not dictate the mapping. Do we agree on this?
Please no more responses about decades of work on fractals etc. Resorting to telling me that there is decades of work on fractals and lattice models is a straw man argument, and somewhat insulting. We both know I am aware of this, but we also know that this does not mean there is decades of work on and corroboration of your specific idea. This blustering frustrates me, but doesn't convince me, as it doesn't support your particular assertion about H(s) and its relation to properties of the lattice model. all the best --Jpod2 09:14, 18 September 2006 (UTC)

Yet more commentary


(UNINDENT) Hi. I've thought about this a little more. Let me try to summarise what we agree on, so that at least we don't finish on a sour note. I hope to collaborate on future articles.

(1) Fractals are important in a number of different ways in the study of lattice models. A recent example I know about is SLE, where one can relate certain fractal dimensions to specific quantities in the lattice model (anomalous dimensions). As you've pointed out, there is also a long history of looking at fractals and lattice models before that, which I know less about than you.

(2) When one considers the RG flow of a lattice model via a blocking transformation, certain objects display self-similarity as a function of the blocking scale---but this self-similarity is not thought to be fractal.

(3) You have pointed a number of times to the literature mentioned above, but I believe your motivation is that you want to bring to light something that is under-discussed, or not discussed at all---therefore something not well-known in the extensive literature.

(4) In particular, the new thing you want to bring to the discussion of fractals in lattice models is related to the graphs, H(s).

(5) If one maps to the real numbers with the dyadic mapping, there may be a sense in which H(s) is a fairly well-known fractal curve.

(6) One could have used an alternative mapping to the real numbers, such that H(s) is not obviously fractal---moreover, no one choice of mapping is dictated by the construction of the original lattice model.

  • This is a question, rather than something I think we agree on. From (4), can one relate any properties of the fractal curve to specific objects in the lattice model? My doubt that any of the fractal properties of (5) are relevant is due to the apparent arbitrariness, viz (6). Perhaps you can extract some property which is independent of the choice of mapping. But what is it? And can it be said to be fractal, since some of the mappings do not result in obviously fractal curves H(s)?

Given that I've asked these questions before, I'm not sure I expect you to give me a concrete answer. But I would suggest they would be useful things to think about, in addition to wherever else you want to take your investigation. --Jpod2 21:38, 18 September 2006 (UTC)

It's late, I'll try to reply later. In the meanwhile, I spent some time polishing http://www.linas.org/math/fdist.pdf Ignore the beginning, start reading at section 4, "spin lattice models". Please make a close reading of section 4.2.2 "Physical interpretation" and 4.2.3 "Gibbs states and self similarity". In particular, section 4.2.3 makes an argument that any mapping from the configuration space of any one-D lattice to the reals will be fractal, provided that the mapping preserves a notion of translation invariance. Since translation invariance is an "essential" physical symmetry (ignoring localized Anderson states), it is something we want to preserve in a map, and this urge leads inevitably to the fractals. I believe the argument is bullet-proof, have a shot.
I'm not sure of what to make of this result. The topologies of 1D lattices and the topology of the reals are in a certain sense incommensurate, and so forcing a mapping between the two forces fractals to arise. So perhaps this is no surprise. And yet, there are fractals: why? Why should such a map of topologies cause this to happen?
My plan for the next few months is to reverse the process: take the fractals I'm familiar with, and map them back to a 1D lattice, and then see if the physics of lattices can provide any insight to the structure of the fractals. I have no idea what to expect; there may be something, there may be a dead end. I'm not working deeply here, just taking a shallow survey. linas 04:09, 20 September 2006 (UTC)
What happens if the maps r and g are the identity? It's not obvious to me that hf^{-1} will then be a fractal. I think my questions above remain pertinent, so I'll stop for a bit and give you a chance to think and reply. All the best--Jpod2 08:39, 20 September 2006 (UTC)
Trivial maps should be understood to be trivially excluded, but perhaps I should explicitly exclude them. I added a section 4.2.4 this morning that attempts to define "what it means to be fractal". Your point (6) above is rebutted by section 4.2.3 which contains a proof that no such map can exist, if one also wants the map to be translationally invariant and be unchanged by inversion of the lattice.
Hi. Well I guess depends on what you mean by trivial. So now you are excluding the mapping where the states are ordered in terms of energy, whereas just above you were including it?
After thinking about it some more, I want to exclude that map, on the grounds that it does not preserve translation invariance.
But surely it does preserve translation invariance! The energy of a state is invariant under translation of a spin chain.
Doesn't that have g and r being the identity? Why do you exclude g and r being the identity?
Because I want g and r to generate a hyperbolic group, or more precisely a mere monoid subset of a hyperbolic group would suffice. If g and r don't generate something hyperbolic, then one cannot get a fractal. In the paper I wrote "a free monoid" (a monoid subset of a free group): free groups are inherently hyperbolic: I realize now they don't need to be free, just hyperbolic.
It just begins to sound like your definition of which mappings are `allowed' is precisely those which will give you the behaviour you wanted to prove exists in the first place. It seems tautological, and I'm frankly confused about what the assumptions are, now.
I thought that asking for a map that preserves the notion of "translation invariance" would be non-controversial. Translation invariance is a popular idea in physics. Symmetry in general is pretty pop in physics. I want the maps to preserve the symmetries. When I force the maps to preserve the symmetries, then yes, the result is a fractal. Symmetry-preserving maps in mathematics are called "homomorphisms" because they preserve the morphology of the thing being studied. If that sounds tautological, then I've succeeded in defining a category (mathematics) Fr of fractals! :-)
I don't understand. Whether maintaining the symmetries of the Hamiltonian is important is a moot point, since the whole procedure is so arbitrary, but even if I play the game by your rules then maps with g and r trivial are most certainly translation invariant. For example the Hamiltonian itself! Sure, you are preserving the symmetry for f, but also imposing that its action must be in some sense non-trivial. That is the tautological point in some sense. See more commentary below.
The point (3) about not being well-known: "well-known" is a relative term. I think it is "well known" that essentially anything and everything that is "hyperbolic" is somehow fractal.
My point in (3) is that you seem to want it both ways. On the one hand you claim that you want to explain something that is under-discussed or not discussed at all, but on the other hand frequently refer back to the existing literature over 40 years as a means to bolster your argument. (Just read back above, that's what you have said at various points.) So frankly I'm confused about what you want to show that's new, and what is supported by existing literature. Has the fractal nature of the graph H(s) (when using the dyadic mapping) been discussed in the literature, for example the textbook you were using? My impression was that this was something new, perhaps that is wrong. Is there anything new?
I doubt there is anything new in what I discussed. I'm mostly regurgitating semi-digested memories of things I've read and seminars I've attended. Yet, at the same time, I'm synthesizing things from these memories: there is a circle of ideas I'm interested in, and some things that I'm trying to grasp. So my trying to understand these things is certainly new, to me.
But is H(s), and its significance as a fractal, specifically mentioned in these things?
This is deep and broad, and not particularly a secret. What perhaps is not well-known is that hyperbolicity can be seen in lattice theories, and I am not sure why this is the case. It may be a failure of the established physics community to keep track of what is going on in quantum chaos, or a failure of the string theorists or geometers to explain to the layman that the hot topic of the day is the pervasive nature of dynamics on hyperbolic structures. The comment (2) is strange, Phil Anderson et al (Philip Warren Anderson) clearly established that the RG is nothing more and nothing less than a fractal phenomenon in the 1970's and the 1980's and I thought he got the Nobel prize for that, so I have no idea why people wouldn't know this now. linas 15:22, 20 September 2006 (UTC)
By (2), I referred to something in the discussion above, where at some point you wanted to perform a blocking transformation and then deduce that certain quantities are self-similar. Which is true, but they are not thought to be fractal in the usual understanding---I can recommend some notes on the RG flow for the 1D or 2d Ising model, which I think you said above you were unfamiliar with.
I thought I'd need the blocking transform for the core argument, but it seems I don't. Which is not to say it's not interesting or not important. For example, the blocking transform was one of the main tools of Douady and Hubbard when they performed their analysis of the Mandelbrot set. Viz, solve the laplacian outside of the Mandelbrot set (holding the interior of the set at "constant voltage"). The flow-lines or rays that are perpendicular to the equipotential have a natural labelling in two letters. Applying block substitutions gives the self-symmetries of the M-set. (No, I don't think Douady/Hubbard even hinted at anything RG-like, so that's different, but the point is that block substitution works when one is dealing with hyperbolic groups, and possibly works only?? because one is working with hyperbolic groups ??).
(This idea I believe is quite distinct from your ideas about H(s), but I wanted to include it in my description of what we agreed on, because you had brought it up earlier on. Does that make the comment less strange?)
I believe my main question above is still outstanding. If you restrict the kind of functions f to be such that all graphs H(f^{-1}(s)) are fractal, well they will certainly be fractal. But what property of these graphs H(s) tells you anything about the original ising model? --Jpod2 16:49, 20 September 2006 (UTC)
Don't know about you, but I have learned that under extremely lax requirements, 1D lattice models are invariant under the action of a highly hyperbolic free group, which is something I did not know a week ago. Is this new? I suspect if this was mentioned to Barry McCoy or Jacques Perk you'd get a demented smile and be written off as a blithering idiot, so no, it's not new. What can I do with my new-gained knowledge? Well, I have plans.... linas 04:05, 21 September 2006 (UTC)
(I mention Jacques because I am one of his failed students; he was still mad at me last time I saw him. Barry walked the hallways as well.)
I'm confused. Why isn't the ordering of states by energy translation-invariant? Translating a spin chain state doesn't change its energy. (See also my comments above.)
My question on the literature was: does the literature *specifically* talk about these fractal maps H(f^{-1}(s)), and their significance?
I don't know why you want to rule out r and g being the identity (as I believe they would be in the case of states ordered by energy), but I can believe that if you do make them nontrivial then you will end up with a fractal (or something related to a fractal) H(f^{-1}(s)). I'm just not sure what making this restriction has achieved.
I think perhaps ultimately you are interested in comparing two translation and inversion-invariant maps from the lattice to the real line. One is the Hamiltonian, which has r and g the identity. The other is f, for which you are imposing r and g non-trivial. I guess it is interesting that under these conditions H(f^{-1}(s)) is a fractal. But I don't think it will translate into any specific property of the lattice model. I can't say much more than this, and I don't think you are able to respond since I have asked many times what fractal property of these maps gives you any information about the lattice model.
Of course, there may be many connections between fractals and properties of 1D lattice models. I'm just talking about this specific map, H(s).--Jpod2 07:28, 21 September 2006 (UTC)

Yes, there is an error in the presentation, or at least it's gone off in a misleading direction. I'll try to fix it shortly; the fully invariant x-form is a Baker's map, I believe. Please stop being so combative.

I find statements such as "I don't think you are able to respond since I have asked many times what fractal property of these maps gives you any information about the lattice model." rude and offensive. I have gone far out of my way to try to answer your questions. I have attempted to answer your every question, so don't say that you have asked many times and I haven't answered.

To be clear, I am not interested in lattice models per se, and am not trying to discover some new property of lattice models. I am interested in the structure of the collection of strings in two letters. For example, the collection of all strings in the two letters x,p modulo the non-abelian ideal (mathematics) xp-px-1=0 is a non-abelian algebraic variety called the Heisenberg group. There are lots of fun things one can do with this variety, such as contemplate Moyal products or Weyl quantization or non-commutative geometry in general. For me, the lattice models give a potentially useful tool for working with strings in two letters.

I want to rule out g and r being the identity because that's boring. I also want to rule-out the commutative case of gr=rg since that generates a simple Euclidean lattice. Not that lattices aren't interesting: The rationals p/q can be taken as the lattice (p,q), and so can the Gaussian integers p+iq e.g complex multiplication is interesting. So there are certainly interesting games one can play with lattices: for example, the number of points underneath a hyperbola on a 2D lattice is the divisor function. The game I'm playing is to pick through a salad of related ideas, and see what happens when I mash them together. I tend to return to strings of two letters, since these tend to provide the simplest examples for all of the related ideas.

For example, here's another utterly crazy idea: (infinite dimensional) Grassmannians can be mapped to the infinite binary tree, although there is no unique or "natural" mapping. This means that a supermanifold can be visualized as a fiber bundle with base space that is the bosonic component, and the fermionic fibers having a structure of a binary tree. Can I use this to gain some new insight into supersymmetry? probably not. Can I have fun doing highly unorthodox thinking on applying symmetries of the binary tree (i.e. strings in two letters) to Grassmann algebras or their classifying spaces? Sure I can have fun that way. And why not?

Sorry to throw this salad of ideas at you, but you are pushing on lattice models, and I am pursuing ergodicity and strings. linas 17:40, 21 September 2006 (UTC)

Sorry for being combative, but I've found it frustrating sometimes in this discussion. Isn't it fair to think that at various times you have implied you are interested in (the physics of) lattice models? And there *are* some questions I feel I've asked a number of times. It would be far less frustrating if sometimes you said `I don't know', you seem to be quite stubborn at times---sorry if that seems harsh.
Anyway, let's forget about the unpleasantness if possible. I think we agree that it is important that you are imposing that g and r are non-trivial. Given this requirement, is my summary above fairly reasonable?
I.e.: "I think perhaps ultimately you are interested in comparing two translation and inversion-invariant maps from the lattice to the real line. One is the Hamiltonian, which has r and g the identity. The other is f, for which you are imposing r and g non-trivial. I guess it is interesting that under these conditions H(f^{-1}(s)) is a fractal."
That *is* interesting if one can pin down precisely how it works.
I have at various points been interested in supermanifolds---the intricacies of which are very interesting. Their physical relevance is also rather interesting and subtle. However, that takes us a little far from the discussion here, and I must get on with the thesis. Perhaps something to pick up on another time? All the best--Jpod2 21:19, 21 September 2006 (UTC)

I've tried to be stubborn where it's important and I tried to say I don't know when I didn't know. I need to do some additional work before I agree to any summary. I think it's silly to look at g and r the identity-- of course something is translation invariant/covariant if you disallow translations-- the point is to describe the symmetry, not insist on a trivial symmetry. A problem with the current paper is that it fails to distinguish the single-sided lattice (extending from 0 to the right) from the lattice that extends in both directions. Truncating (or adding) a lattice position from a single-sided lattice isn't an invariant op -- the notion of translation has to be more carefully defined. To map the double-sided lattice, I need two numbers in the range of [0,1], and not one number (one number for the binary fraction to the left, the other for that to the right). Translating the lattice by one is equivalent to halving the one number and doubling the other: the Baker's transform. From the point of view of dynamical systems, the Baker's transform has a bunch of interesting new properties that the 1D transforms don't have. It's time-reversal-invariant, since one is not throwing away bits by chopping them off. But this time-reversal symmetry is "spontaneously broken" in a certain way, in that backwards-time and forwards-time eigenvalues are not the same for its transfer operator -- this is what got me started on this track in the first place. Please be patient while I try to write all of this down using more precise notation and definitions and etc. Please realize this is an evening/weekend activity for me; my day job expects me to make progress on a very different set of topics. linas 14:11, 22 September 2006 (UTC)
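A sketch of the bookkeeping just described for the two-sided lattice (finite truncations only; the bit conventions are my own assumptions): the configuration is encoded by two numbers in [0,1], and shifting the lattice by one site moves the leading bit of one number onto the other, which is the Baker's transform.

```python
def encode(left, right):
    """Encode a two-sided configuration by two dyadic fractions:
    'right' holds positions 0, 1, 2, ... and 'left' holds positions -1, -2, ...
    (A -> 0, B -> 1; finite truncations only)."""
    x = sum((1 if c == "B" else 0) * 2.0**(-(i + 1)) for i, c in enumerate(right))
    y = sum((1 if c == "B" else 0) * 2.0**(-(i + 1)) for i, c in enumerate(left))
    return x, y

def shift(x, y):
    """Shift the lattice by one site: the leading bit of x moves over to y,
    so x is doubled (mod 1) and y is halved. This is the Baker's transform on
    the unit square (up to the usual caveats at dyadic rationals)."""
    bit = int(x >= 0.5)
    return 2.0 * x - bit, (y + bit) / 2.0

x, y = encode(left="BA", right="ABB")
print((x, y), shift(x, y))
```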

I've updated the paper at that URL to explicitly work out the details of the one-sided and the two-sided lattice. In the process, I've gotten distracted by a related computation that is fascinating but seems to be difficult; I'll be preoccupied for a while. linas 14:21, 25 September 2006 (UTC)