I alluded in one of my very first posts here to a calculus class that I was teaching using the ring $\mathbb{R}[dx]/(dx^2)$. The class was a six-week, 5-day-per-week intensive course covering the usual material of a college Calculus I course. I’ve been promising Greg and Jim that I’d write up some of my experiences with the course. I think I have had enough time to process the experience — let’s talk non-nonstandard calculus.

Maybe a word about the name before we begin: I think that a lot of my philosophy on mathematics came from taking several analysis classes at Smith College from logician Jim Henle. I learned nonstandard analysis from him one summer, but also learned what he called “non-nonstandard analysis”. The idea was to get infinitesimals into calculus without the big ultrafilter-style logic machinery of standard nonstandard analysis. I’d like to say more about this version of non-nonstandard analysis sometime in the future, but not just this moment. But the main idea is this: we want infinitesimals, but we don’t want to bring in the heavy machinery if we can avoid it. The (serious) tradeoff is that we lose the transfer principle when we use non-nonstandard analysis.

Now, a bit more about the class I taught. Most students had not seen much beyond precalculus. Maybe they had been told how to formally take derivatives of polynomials, but not much more. So I had a fantastically clean slate to work with. This may have seriously affected the outcome of the class.

In the first serious lecture, I introduced $dx$ as a positive number so small that $dx^2 = 0$. They thought this was a little funny, but came around to it after I drew an analogy to $i$ and $\sqrt{-1}$: even if they had philosophical objections to thinking about $i$ or $\sqrt{-1}$ as “numbers”, their recent experience with complex numbers in high school put them in a good place for accepting $dx$ as a useful formal symbol, if not an honest-to-goodness number. We could then quickly move on to computing the derivatives of some elementary functions.
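For $f(x) = x^2$, say, the whole computation is one line:

$$f(x+dx) = (x+dx)^2 = x^2 + 2x\,dx + dx^2 = f(x) + 2x\,dx,$$

and the derivative $2x$ can be read off as the coefficient of $dx$.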

I also pointed out that computers use a number system which is closer to this than it is to $\mathbb{R}$: since floating-point processors have finite precision, there is a smallest positive denormal number, and it must square to zero. So if you judge mathematical truth by “is relevant to things I know in the real world”, it seems that nilsquare infinitesimals aren’t such a strange idea.
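You can check this in any IEEE-754 double-precision environment (Python shown here; the literal 5e-324 is the smallest positive denormal double):

```python
# 5e-324 is the smallest positive (denormal) IEEE-754 double.
tiny = 5e-324
print(tiny > 0)     # True: a genuine positive number
print(tiny * tiny)  # 0.0: it squares to zero, just like dx
```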

On the second day, I gave them a few useful tidbits: as an axiom, I said $e^{dx} = 1 + dx$. Then by looking at a right triangle of height $dx$, we decided that $\sin(dx) = dx$ and $\cos(dx) = 1$. From here, it is easy to compute derivatives of relatively complex functions by hand. By the end of the third day, there were homework problems (generally solved correctly) like “use the definition of the derivative to find the derivative”: expand $f(x+dx)$, and the coefficient of $dx$ is the derivative.
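A typical computation of this kind, using the classroom rule $e^{dx} = 1 + dx$:

$$e^{x+dx} = e^x\,e^{dx} = e^x(1+dx) = e^x + e^x\,dx,$$

so the derivative of $e^x$ is $e^x$.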

By asking them to compute derivatives of trigonometric and exponential functions by hand, they rapidly re-learned (and remembered!) the angle-sum formulas for sine and cosine and the algebraic rules involving exponents. I never allowed formula sheets or calculators or anything like that in the class or on the tests — the kids were genuinely remembering the formulas and the derivations. By the end of the course, even the weaker students could carry out these derivative computations routinely; better still, they remembered the identities themselves, because these manipulations had become common to them. They used them every day in nearly every problem, and they remembered. No formula-sheet hell involved.
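For instance, the angle-sum formula together with $\cos(dx) = 1$ and $\sin(dx) = dx$ gives

$$\sin(x+dx) = \sin x\,\cos(dx) + \cos x\,\sin(dx) = \sin x + \cos x\,dx,$$

so the derivative of $\sin x$ is $\cos x$.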

When a number like $1 + dx$ appeared in the denominator, the students quickly figured out that they could multiply on the top and bottom by $1 - dx$ to “realify” the denominator. I was surprised that they saw this trick so quickly, until I realized that they had been doing the exact same thing with complex numbers for several years already. The derivative of $1/x$, for instance, with no more than two steps left out:

$$\frac{1}{x+dx} = \frac{1}{x+dx}\cdot\frac{x-dx}{x-dx} = \frac{x-dx}{x^2 - dx^2} = \frac{1}{x} - \frac{1}{x^2}\,dx.$$

On the final, I asked them to compute the derivatives of several functions from the definition. Almost uniformly, the students could produce correct computations of the derivative. The next question on the final was “prove the quotient rule”. Nearly everybody in the class, even the C-students, did this derivation properly. They were not told beforehand that anything like this would be on the test.
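A quotient-rule derivation in this style (a sketch of the kind of argument the students gave, using the “realify” trick; all functions evaluated at $x$):

$$\frac{f(x+dx)}{g(x+dx)} = \frac{f + f'\,dx}{g + g'\,dx}\cdot\frac{g - g'\,dx}{g - g'\,dx} = \frac{fg + (f'g - fg')\,dx}{g^2 - (g')^2\,dx^2} = \frac{f}{g} + \frac{f'g - fg'}{g^2}\,dx.$$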

Another thing which they were fairly good at was using differentials, since the differential was not some mysterious formal symbol to them but an actual infinitesimal value. From this perspective, it is easy to see how to use differentials to get good approximations. This also made it as easy to work with implicit differentiation as with explicit differentiation, which in turn makes computations of related rates much cleaner than usual. From my perspective, it also makes the usual geometric ways of demonstrating the product rule (and similar facts) more rigorous: the area added really is just some lengths times the width of my chalk. By treating infinitesimals on the same footing as finite numbers, the approximation schemes of calculus become more intuitive. I think every mathematician has discovered this on their own, in their own private language. Why not make the language commensurable with the computations that we do?

I’ll write plenty more about the other topics of the course in the future, but this should give you some idea of what the class was like and why I felt that nilsquare infinitesimals were a productive way to teach calculus. We really didn’t use limits at all: they were all replaced with infinitesimals and approximation schemes. It is true that limits, infinitesimals and approximation schemes are all equivalent ideas, but I believe the latter two more closely model our internal, geometric understanding of calculus. So: what do you readers think?

August 29, 2007 at 12:46 am |

Actually, it reminds me of something Louis Kauffman has on his door. It’s all of integral and differential calculus on two pages, and the only difference is he uses a lowercase delta instead of dx.

My qualm here is that you manage to completely bypass limits. What effects do you think this will have when the students hit series, if they ever do?

August 29, 2007 at 4:15 am |

Very nice. I would be really interested in what you think the disadvantages of this approach are. There must be many, as some sort of balance to the many advantages. Probably the best way to think of them is to teach a course and see what happens.

August 29, 2007 at 5:42 am |

I am very intrigued by the nonstandard approach to calculus. Could you point us to some good introductory books/lecture notes?

August 29, 2007 at 6:23 am |

Is this just a rephrasing of automatic differentiation with dual numbers? It seems that your students may run into trouble later because they don’t understand enough abstract algebra to know what’s going on, and may just resort to manipulating symbols without any real understanding when things get tough.
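For concreteness, here is a minimal forward-mode sketch of that correspondence (the class and function names are mine, purely illustrative):

```python
class Dual:
    """A dual number a + b*dx, where dx*dx == 0."""

    def __init__(self, a, b=0.0):
        self.a = a  # real part
        self.b = b  # coefficient of dx

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        # (a + b dx)(c + d dx) = ac + (ad + bc) dx, since dx^2 = 0
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate f at x + dx and read off the coefficient of dx."""
    return f(Dual(x, 1.0)).b


print(derivative(lambda x: x * x * x + 2 * x, 2.0))  # 14.0, i.e. 3*2**2 + 2
```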

I’ve never been much of a calculus guy (barely learned enough to get through diffeq and real analysis), so please excuse any ignorance on my part.

Thanks

August 29, 2007 at 6:47 am |

And that’s not what they do now?

August 29, 2007 at 7:21 am |

So does non-nonstandard analysis have anything to say about the integral? It seems nice for differentiation, but I don’t immediately see how to use the dual number approach to integrate.

August 29, 2007 at 9:52 am |

This looks very nice. I’ve been curious for a while whether I could teach calculus using big O notation as the fundamental object instead of limits, and this looks like a similar but easier approach. Question — what do you plan to do when you get to Taylor series? When I do calculus computations, that’s when I start really hauling out the big O’s. (For example, try computing the first three nonzero terms of the Taylor series for sqrt(cos(x)) by repeated differentiation, and then do it again by expanding (1-x^2/2+x^4/24+O(x^6))^{1/2} by the binomial theorem.)
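For what it’s worth, the commenter’s exercise works out to

$$\sqrt{\cos x} = \Big(1 - \tfrac{x^2}{2} + \tfrac{x^4}{24} + O(x^6)\Big)^{1/2} = 1 - \tfrac{x^2}{4} - \tfrac{x^4}{96} + O(x^6),$$

using $(1+u)^{1/2} = 1 + \tfrac{u}{2} - \tfrac{u^2}{8} + O(u^3)$ with $u = -\tfrac{x^2}{2} + \tfrac{x^4}{24}$.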

August 29, 2007 at 10:51 am |

This is indeed a nice way to do differentiation, but it may run into some trouble once one leaves the category of smooth or analytic functions. For instance: is $|x|$ differentiable at $x=0$? What about $x\sin(1/x)$? etc. More generally, it is hard to detect non-differentiability in this setting. (This is the usual phenomenon in algebra that it is easy to prove two things are equal, but difficult to prove that two things are unequal. Analysis, of course, has the opposite problem 🙂.)

Also, one may encounter some difficulty with second derivatives. One wants to assert a formula like $f''(x) = \big(f(x+2\,dx) - 2f(x+dx) + f(x)\big)/dx^2$, but you can’t do that within the ring – division by zero error! For similar reasons, it is a little tricky to justify any Taylor expansion beyond first order.

Finally, it’s not particularly easy to show in this framework that a function with everywhere-zero derivative is constant, which will lead to some problems when one gets to the indefinite integral. Nevertheless, one can at least get the fundamental theorems of calculus by postulating the existence of a definite integral operation which obeys some reasonable axioms (linearity, concatenation, translation-invariance, and most importantly that $\int_x^{x+dx} f(t)\,dt = f(x)\,dx$).

Things also get rather interesting in several-variable calculus: what ring would you use, for instance, to prove Clairaut’s theorem $\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x}$? (And how do you deal with the fact that this theorem can in fact fail for certain twice-differentiable functions?) And Stokes’ theorem is going to be particularly challenging without using much more of the machinery of infinitesimals than just $dx^2 = 0$.

In short, it is a nice way to get students quickly started on single-variable differential calculus, but one should also be prepared to move beyond this approach when one wants to tackle the rest of the calculus.

August 29, 2007 at 10:58 am |

p.s. This approach is excellent for introducing the concept of a tangent bundle, and interpreting the derivative of a map between manifolds as a linear map between two tangent bundles. Indeed, one can argue that this approach is basically the algebraic way of viewing R as a manifold. This also clarifies a bit the difficulties mentioned above; second derivatives on manifolds are indeed a real pain, one needs the machinery of connections. Also, now that one has some non-trivial global cohomology, it’s clearer as to why things like Stokes’ theorem are not going to be trivial.

August 29, 2007 at 11:25 am |

Re: John’s question about bypassing limits. As we got further into the course, and the students got more used to approximation schemes (differentials, Newton’s method, Riemann sums for area and Riemann sums for solving a differential equation), I think that their ability to think about limits developed on its own. We talked about limits long enough to discuss L’Hospital’s rule near the end of the course. During this class, we developed a translation dictionary on the board between limits and infinitesimals. Those who had seen the derivative defined before had an “oh, I see where that h thing came from” moment. It was heart-warming. I have trouble believing that there is enough content in the idea of “limit” that a decent student couldn’t pick it up rapidly when they begin to deal with series in Calc 2. This use of “limit” is just “this sequence of approximations is as good as you would like”, after all.

I am planning on emailing the students after another several weeks have passed to see how they are doing in the next calculus course. I think this will be the real test of the idea, and I will be sure to pass the results along.

August 29, 2007 at 11:33 am |

Shaneal, you are right that this is the same idea that is exploited with automatic differentiation. But I don’t think the students saw it as purely formal — I talked about infinitesimals on equal footing with real numbers, we drew chalk-sized infinitesimal things in our diagrams, and most importantly the students began to correctly use the language: “A and B are infinitesimally close” and so forth.

I’m not sure, but I suspect that by using honest equations rather than limits in (for example) the definition of the derivative, the students are more inclined to understand what is going on. The definition of the derivative via dx is, after all, just an equation; the one with limits carries an implicit “for all / there exists” in front of it. The more quantifier alternations, the harder it is to understand the underlying idea.

August 29, 2007 at 11:40 am |

p.p.s. There is another slight issue, which is to decide as to whether dx is “positive”, “negative”, or “neither” (the latter is what happens of course with i = sqrt(-1)). This becomes relevant when one wants to show that a function has derivative zero at local extrema. One can work around this issue by having both positive and negative infinitesimals (and thus deal with right and left derivatives), but this approach does not extend well to several variables.

It’s also worth noting that Ito calculus is traditionally presented using this algebraic infinitesimal approach, by adjoining an additional relation $(dB)^2 = dt$.
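For instance, with $(dB)^2 = dt$ and $dt^2 = dB\,dt = 0$, the product rule already picks up the Itô correction term:

$$d(B^2) = (B + dB)^2 - B^2 = 2B\,dB + (dB)^2 = 2B\,dB + dt.$$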

One final advantage (or disadvantage, depending on your point of view) is that this approach does not really use the structure of the reals and so also applies over arbitrary fields. As such it is highly compatible with algebraic geometry, schemes, etc. (This also explains why it is not so terribly compatible with the definite integral, or the detection of local extrema.)

August 29, 2007 at 11:42 am |

This approach seems to me problematic for three distinct reasons.

First, there is little benefit. Actually computing derivatives can be taught in half-a-day with a few formulas. You need only the product, quotient, chain, and power rules, as well as a couple of memorized formulas (derivatives of exp, ln, cos and sin). Since this is not a problem, there is no reason to modify the way to teach the simplest part of calculus. The hard parts of calculus are limits, series, integrals, extrema, n-dimensionality, discontinuities, differential equations, and word problems. Derivative computing is just not hard for anyone who will be able to master the rest of calculus (those for whom it is hard won’t learn calculus anyway).

Second, the approach does not prepare students to read and understand works written by the remainder of the mathematics, physics and engineering community.

Third and most important, the approach misconstrues the purpose of teaching calculus. This purpose is not solely to teach students to manipulate derivatives. Instead, it is to teach them to understand concepts of rigor, proof and meaning. Calculus is the course where teachers should not lie to students about the true meaning of what they do. This method, even if it works to the extent of being useful mnemonically, shortchanges students because they are deceived as to why it works. I don’t see that students have any real understanding of derivatives using this method, even if they can compute them.

That said, the differential notation is nonetheless powerful and useful in its own right, but should only be introduced after students have understood the limit formalism. Thus, I would introduce limits, and then spend some time on the differential notation.

August 29, 2007 at 11:51 am |

Terry: one sharp kid pointed out exactly your point about second derivatives in one of the first classes: “but don’t we write $\frac{d^2y}{dx^2}$ for second derivatives?” The problem is really that when you need to do things like take second derivatives, you need to keep track of two potentially incommensurate infinitesimals at once. Some of our early posts on the blog mentioned how even if dx and dy are nilsquare, unless you know that dy/dx is finite, the sum (dx + dy) is only nilcube. (dx + dy) becomes nilsquare again if you propose that dx dy = -dy dx, and suddenly it seems that we are about to start a conversation on exterior algebra with freshmen. This is certainly a big drawback to the approach: infinitesimals on different orders of magnitude are too messy, so you have to stick close to situations involving only one infinitesimal scale. For a calculus 1 course, this is not so strict a requirement. For a calculus 3 or analysis course, it is probably not worth the effort.

You can, of course, use nilcube, nil(4th), … infinitesimals to compute second and third derivatives directly, using exactly the formula that you have written. This is also one way to tackle Taylor series in the same spirit.
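Concretely, in $\mathbb{R}[dx]/(dx^3)$ one has

$$f(x+dx) = f(x) + f'(x)\,dx + \tfrac{1}{2}f''(x)\,dx^2,$$

so, for example, $(x+dx)^3 = x^3 + 3x^2\,dx + 3x\,dx^2$ exhibits both $f'(x) = 3x^2$ and $f''(x) = 6x$ at once.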

Detecting non-differentiability is not so bad after all. In fact, it is remarkably easy to do on the class of “high school functions”. On one test, they had to show that $|x|$ was continuous but not differentiable at $x = 0$. To see continuity, you only need to note that the three (and only three!) numbers $|{-dx}|$, $|0|$ and $|dx|$ are infinitesimally close. To see that the derivative doesn’t exist, you only need to compute the left and the right derivatives (using $-dx$ and $+dx$) to find out if they agree. I want to write much more about this aspect of the class soon, because it was one of the most philosophically confusing aspects of the course to me.
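For $|x|$ at $0$, for instance, the two one-sided quotients disagree:

$$\frac{|0+dx| - |0|}{dx} = \frac{dx}{dx} = 1, \qquad \frac{|0-dx| - |0|}{-dx} = \frac{dx}{-dx} = -1.$$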

Your description of how we have to approach integration is pretty dead-on. More on that in a future installment 😉

August 29, 2007 at 12:30 pm |

fjfjj: Your criticisms are reasonable, but I think misguided. I overemphasized computing derivatives from first principles in this post, but the computation of derivatives was of course only a very small portion of the class. I will be writing about the other aspects of the class in the near future. I believe that for most students, the hard part of calculus is modeling: they do not know how to translate ideas about the world to equations which they can manipulate, and they don’t know how to take an equation and create a mental model of what it represents. This is manifested in how strongly they avoid word problems of the optimization or related-rates types, even though these problems are almost trivial once an actual equation has been written down. The idea of this course was not really “look at this funny trick for computing derivatives” but rather “can I teach calculus so that the symbolic methods we use more closely model the intuitive ones?” It is not clear that these students were better than my previous limit-ed ones at word problems, but they certainly weren’t any worse.

As for your second reason, I simply can’t agree. Even while I was still teaching the class, one student commented that her physics instructor was impressed that she was learning calculus “the right way” [with infinitesimals]. In his work, the infinitesimal displacements were real quantities to be manipulated.

As for your third point, I don’t understand how it deceives the students. It works for the exact same reason limits or O-notation or nonstandard analysis or any other formulation of calculus works — it is a good model of the language of calculus. In at least this one case, the students came away with a better understanding of derivatives than usual. They can understand the derivative directly, getting their hands right on it through computation or seeing the tangent line between two infinitesimally close points. I’d like to better understand why this approach seems deceitful to you, since I have received similar reactions from several other people.

August 31, 2007 at 9:56 am |

[…] Non-nonstandard Calculus, I […]

September 3, 2007 at 2:09 pm |

As a physics major with an undergrad math focus, I had a chip on my shoulder about infinitesimals versus limits. In physics classes, we talked about infinitesimal quantities all the time, but I always mentally translated them into limits, believing that this was the only correct way to think about it. I ran into calculational difficulties with this mental model in grad school, finding that my peers could solve problems I couldn’t even get my head around, and, after some discussion, we found that it was because they truly believed in infinitesimals. When I decided to admit them as a useful fiction, statistical mechanics got a lot easier. It would have been great to have seen both tools, and to have known the limitations of each (though that can be hard to teach to students who are looking for The Right Way to Think).

Thanks for the great post, Matt! By the way, what kinds of students were you teaching: engineering, physics, or math majors? Were they honors students or general?

September 10, 2007 at 1:53 pm |

Cos(dx/2) = 1;

Sin(dx/2) = dx/2;

Sin^2(dx) = dx^2=0;

from here:

0 = Sin^2(dx) = 2*Sin(dx/2)*Cos(dx/2) = 2 * dx/2 * 1 = dx > 0

0>0, congratulations.

June 18, 2011 at 1:03 pm |

sin(dx/2) = sqrt((1 - cos dx)/2) = 0, not dx/2

June 30, 2011 at 11:41 am

The correction above which negates the Sept 10 conclusion of “anonymous” was actually done by zenrin, not the original “anonymous.”

September 10, 2007 at 6:01 pm |

Is “Anonymous” somehow managing to confuse $\sin^2(dx)$ with $\sin(dx)$?

September 27, 2007 at 1:31 am |

dx is a zero-divisor in the ring R[dx]/(dx^2), so if you invert it it becomes zero in the localization. How then can you algebraically justify dividing by dx?

September 27, 2007 at 8:18 am |

“How then can you algebraically justify dividing by dx?”

You don’t. You instead write f(x + dx) = f(x) + f'(x)dx for a unique scalar f'(x). This points to the difference between infinitesimal analysis using invertible infinitesimals (as in Robinson’s Nonstandard Analysis) and analysis which uses nilpotent infinitesimals (as in “Synthetic Differential Geometry”; see e.g. Mike Shulman’s paper here).
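Concretely, the uniqueness comes from comparing coefficients rather than dividing: in $\mathbb{R}[dx]/(dx^2)$,

$$f(x) + a\,dx = f(x) + b\,dx \;\Longrightarrow\; (a-b)\,dx = 0 \;\Longrightarrow\; a = b,$$

since $\{1, dx\}$ is a basis of $\mathbb{R}[dx]/(dx^2)$ as a real vector space.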

October 1, 2007 at 5:46 am |

[…] Non-nonstandard Calculus, I « The Everything Seminar […]

January 13, 2008 at 9:02 am |

No, I didn’t get it mixed up – you got your . . . face . . . mixed up with . . . something stupid . . . I . . . . . . . .am a secret genius that the world has yet to discover! And I prove it by leaving obviously incorrect proofs misusing high school math on message boards without having the courage to leave my name hoping to score points from a safe distance that only I will keep track of on my mental scorecard (of course erasing this instance and most likely many others like it out of memory so as to maintain my delusion of superior intelligence . . . .(shhhhhhh!))

March 2, 2008 at 11:03 am |

Congratulations. You seem to have discovered the main features of smooth infinitesimal analysis (SIA). This is a version of analysis based on the repudiation of the general applicability of the law of excluded middle (LEM) and the principle of microstraightness of smooth functions. Nilsquare infinitesimals emerge naturally from these principles. SIA is entirely rigorous (its foundations are in Category theory) and it has at least four major advantages over Limit theory and non-standard analysis (NSA – which is just Limit theory in disguise). These are:

1. In SIA the differential calculus is reduced to simple algebra.

2. SIA does not lead to contradictions such as the Banach-Tarski paradox as does Limit theory/NSA.

3. The method of microadditivity found in physical derivations is a natural application of SIA but not of Limit theory/NSA.

4. The ‘taking the standard part’ fraud of NSA is unnecessary in SIA because the infinitesimals cancel each other out.

The best book on SIA is A Primer of Infinitesimal Analysis by J L Bell. Many of the criticisms of your approach given above are addressed in his book. I personally found it useful to compare Bell’s book with The Foundations of Mathematics by Stewart and Tall. In their book Stewart and Tall use the quest to explain calculus to justify the Completeness axiom and classical Real analysis – a justification which falls apart with SIA. They also use the Axiom of Choice in their coverage of Cardinal numbers; but this axiom also implies the LEM, thereby disallowing nilsquare (that is, genuine) infinitesimals. Consequently, you can believe in infinite numbers or infinitesimals but not both, or at least not both at the same time. This may explain Cantor’s objection to infinitesimals! It is interesting to note that Fuzzy Logic also depends on the repudiation of the LEM. Calculus and Fuzzy Logic are perhaps the two branches of mathematics which are the most useful in modelling reality. Perhaps Cardinals, ‘Real’ analysis, and the LEM should be banished to the fringes of philosophy.

May 15, 2008 at 11:28 pm |

I recently wrote up some notes on Smooth Infinitesimal Analysis, the system for which the immediately preceding commenter is an aggressive advocate.

They’re at:

http://www.math.cornell.edu/~oconnor/sia.pdf

The biggest contrast between Smooth Infinitesimal Analysis and Matt’s system (and non-standard analysis as well) is that while in both Matt’s system and non-standard analysis the object representing reals+infinitesimals is explicitly constructed and can be manipulated directly with the usual classical logic that mathematicians are used to, in Smooth Infinitesimal Analysis, the object representing reals+infinitesimals is presented axiomatically, and you must reason about it using intuitionistic logic in order to make the existence of infinitesimals consistent.

It sounds forbidding, but you can do a lot of pretty neat things with it.

May 23, 2008 at 6:00 pm |

I fear that some of the above comments may be slightly misleading. The objects that SIA deals with do not reside merely at the level of axiomatic description; they may be embodied in models which admit explicit constructions by taking categories of sheaves on appropriate sites. Externally speaking, one uses ordinary logic to describe these sites. It is of course true that the “internal logic” in such sheaf toposes is intuitionistic (meaning that lattices of subobjects of objects are not Boolean algebras; they are Heyting algebras), and it is in that sense that the inclination to manipulate the relevant objects directly as “smooth sets” must take intuitionistic logic into account.

May 23, 2008 at 10:36 pm |

You’re right, of course. I should have mentioned the models in my summary.

My reasoning for not doing so is that if one were to teach calculus via SIA, one would do so axiomatically and not via the models, just like we teach math majors to reason axiomatically in ZFC before we teach them (if we ever do) how to construct models of ZFC with set-theoretic forcing.

However, for people who are not learning calculus for the first time, learning how SIA works by going through the construction of the models may be the most enlightening way to do it. It could certainly help a classical mathematician who may be suspicious of or not fully understand intuitionistic logic to figure out what exactly is going on.

July 8, 2008 at 6:41 am |

Hi folks, here is the “truth” about the way practical calculus is done: for years when I was at university, the teachers told me “you cannot simplify” the derivatives, but in every kind of calculation they use the derivative as a fraction 😉

October 30, 2009 at 9:12 pm |

I have the book by Bell, but I do not like SIA because it denies the Law of the Excluded Middle without making it clear just when you can make a number both equal to, and not equal to, 0. On the other hand, those ultrafilters require the heavy logical machinery of the Axiom of Choice.

I prefer to use a proper subset of Robinson’s hyperreals (his R*) which only requires the cofinite (Fréchet) filter on N. Call my subset R†: the set of ratios of real polynomials (the familiar “rational functions”) of the index variable, where two pairs of polynomials have the same ratio (belong to the same equivalence class) if the set of values of the index for which the ratios are not equal is finite.

This is the classic example of a non-Archimedean ordered set, and, if statements about polynomial ratios are considered true only when the same conditions for equality hold, the transfer principle holds for all first-order statements. The proof requires only the properties of filters in general, and not the property of an ultrafilter that EVERY set of natural numbers or its complement has to be a member of the ultrafilter.

I am writing an appendix for Thompson’s _Calculus_made_Easy_ that uses my R† to justify the infinitesimals.

The great triumph of standard (real number) math is Weierstrass’s epsilon-delta definition of a limit, which tells one when one has found a limit (using only the real numbers), but leaves no clue as to how to find it.

In effect, nonstandard methods allow one to calculate beyond the infinite decimal precision of standard reals, and round off directly to the nearest standard real, the limit sought, Robinson’s “standard part”, without all that tedious mucking about in shrinking deltas and epsilons.

May 27, 2011 at 1:10 pm |

I recently stumbled across “dual numbers” on Wikipedia and wondered if you could use them to build calculus for first-year students without using limits. Polynomial derivatives, the product rule, chain rule, quotient rule, and fractional power rule all appeared quite trivial compared to the limit approach.

You could also define continuity by saying that f is continuous if f(x + y·dx) is infinitesimally close to f(x) for every value of y, and then show that you only need to consider y = -1 and y = +1. You can also easily show that every function with a derivative is continuous, but the reverse is not necessarily true: |x| is an easy counterexample.

But then I ran into trouble. How do you show that:

– every continuous function has the intermediate value property

– every continuous function on a closed and bounded interval achieves its maximum

– Rolle’s Theorem, the Mean Value Theorem

– if a function’s derivative is everywhere zero, then that function must be a constant function

– every continuous function has an antiderivative

– how do you build Riemann Sums and integrals

– for every function with a derivative, it is nondecreasing on an interval if and only if its derivative is greater than or equal to zero on that interval.

I ran into these problems when trying to define ln(x) by saying it is the antiderivative of 1/x with ln(1) = 0, and then defining exp(x) as the inverse of ln(x).


October 4, 2012 at 11:58 pm |

The new axiom system

The view of the mainstream in the mathematical world, from K. Weierstrass up to just before A. Robinson, on quantities of infinity (large or small) was that they are only a process, denoted and established in the form of a limit. Infinite quantities were not allowed to exist as free, independent variables. As $x\rightarrow 0$, $x$ is always a variable of a function $f(x)$, and as $n\rightarrow\infty$, $n$ must be an index of a sequence $(a_n)$. Paul du Bois-Reymond made an effort to treat infinite quantities by giving them the form of functions having the same limit of one unique variable. This approach has natural constrictions, and its acceptance by other mathematicians was very limited. We will discuss a theorem of his which is true and, according to Hardy, very important, but which, for us, leads to a misconception of the substance of infinity. Since Robinson, the existence of infinite quantities is assured, but in the form of new numbers, the so-called hyperreal numbers. The infinite quantities are now free and independent numbers, and it works very well, as it must. But by his influence and that of Nelson, we must see the world as two parts: the standard one and the extended non-standard one. One fact is not changed in any approach: the infinite quantity is a problem of conception. We will say more: infinite quantity can be used to study matters of perception.

We now introduce a new approach to non-standard analysis; the basic concept is to introduce not new numbers beyond the real numbers, but a new order alongside the basic order $<$ on the field of the real numbers.

The new order is a binary relation, called the infinity order or infinity relation, denoted $\sqsubset$. $a \sqsubset b$ means $a$ is infinitely small relative to $b$. This order is defined by 6 axioms.

(U1) $\sqcup(a\sqsubset b, a\sim b, a\sqsupset b)$

(U2) $a\sqsubset b\wedge c\neq 0\;\rightarrow\; ac\sqsubset bc$

(U3) $\sqsubset$ is arbitrarily transitive.

(U4) $\sim$ is finitely transitive.

(U5) $a_1,a_2\sim b\;\rightarrow \;|a_1|+|a_2|\sim b$

(U6) $a\sqsubset b\;\rightarrow \;a<|b|$

Notes and explanations to the axioms:

– (U1) is the trichotomy law of the infinity order, analogous to that of the order $ latex <$ :

$ latex \sqcup(a<b,\;a=b,\;a>b)$

I.e. exactly one of $ latex a<b$, $ latex a=b$, $ latex a>b$ is true. The relations $ latex =,\ge,\le$ are only secondary relations of the basic relation $ latex <$. Equality $ latex =$ is logically defined by this trichotomy law of $ latex <$.

In this spirit we can define some secondary relations of the infinity order as follows.

$ latex a$ is infinitely large to $ latex b$ :

$ latex a\sqsupset b:\Leftrightarrow b\sqsubset a$

$ latex a$ is equivalent or comparable with $ latex b$ :

$ latex a\sim b:\Leftrightarrow \neg(a\sqsubset b)\wedge\neg(a\sqsupset b)$

$ latex a$ is not infinitely small to $ latex b$ :

$ latex a\sqsupseteq b:\Leftrightarrow\neg (a\sqsubset b)$

$ latex a$ is not infinitely large to $ latex b$ :

$ latex a\sqsubseteq b:\Leftrightarrow\neg (a\sqsupset b)$

$ latex a$ is approximate or infinitely close to $ latex b$ :

$ latex a\approx b:\Leftrightarrow a-b\sqsubset 1$

If $ latex a\approx 0$ i.e. $ latex a\sqsubset 1$, we say, $ latex a$ is infinitely small, or infinitesimal.

If $ latex a \sqsupset 1$, we say, $ latex a$ is infinitely large.

If $ latex a\sim 1$, we say, $ latex a$ is finite.
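These classifications can be tried out in a simple model. Below is a rough numerical sketch of my own (an illustration, not part of the comment's axiom system; the names `sqsubset`, `classify`, and the cutoff `N` are all assumptions made for this sketch): positive sequences stand in for numbers, $ latex a\sqsubset b$ is approximated by $ latex a(n)/b(n)$ being near 0 for large $ latex n$, and comparison against the constant 1 classifies a sequence as infinitesimal, infinitely large, or finite, as in the definitions above.

```python
# Sequence model of the infinity order (assumed names, for illustration only).
N = 10**6  # stand-in for "large n"

def sqsubset(a, b, n=N, tol=1e-3):
    """Approximate 'a is infinitely small relative to b' as a(n)/b(n) near 0."""
    return abs(a(n) / b(n)) < tol

def classify(a):
    """Compare a sequence against the constant 1, per the definitions above."""
    one = lambda n: 1.0
    if sqsubset(a, one):
        return "infinitesimal"     # a ⊏ 1, i.e. a ≈ 0
    if sqsubset(one, a):
        return "infinitely large"  # a ⊐ 1
    return "finite"                # a ~ 1

print(classify(lambda n: 1.0 / n))    # infinitesimal
print(classify(lambda n: n))          # infinitely large
print(classify(lambda n: 2 + 1.0/n))  # finite
```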

The order $ latex \sqsubset$ is not really new. In physics we know it as the relation $ latex \ll$, and on the other hand we know it from the Landau notation:

$ latex a\sqsubset b\leftrightarrow a=o(b),$

$ latex a\sqsubseteq b\leftrightarrow a=O(b).$

But now we introduce it as a complete \textbf{\textit{basic relation among the free real numbers}} with six axioms. (The Landau notation is usually defined as a relation between two functions.)
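The correspondence with Landau notation suggests a concrete way to experiment with the axioms. Here is a rough numerical sketch of my own (an illustration, not the comment's construction; the helper `sqsubset` and the cutoff `N` are assumptions): positive sequences play the role of numbers, $ latex a\sqsubset b$ is approximated as $ latex a=o(b)$, and axioms (U2) and (U6) are spot-checked on sample growth rates.

```python
# Spot-check (U2) and (U6) in an o(.)-style sequence model (assumed names).
N = 10**6  # stand-in for "large n"

def sqsubset(a, b, n=N, tol=1e-3):
    """Approximate the Landau relation a = o(b): a(n)/b(n) near 0."""
    return abs(a(n) / b(n)) < tol

a = lambda n: n        # grows like n
b = lambda n: n ** 2   # grows like n^2
c = lambda n: 3.0      # a nonzero "finite" factor

print(sqsubset(a, b))  # n = o(n^2): True
print(sqsubset(b, a))  # n^2 is not o(n): False
# (U2): a ⊏ b and c ≠ 0 imply ac ⊏ bc
print(sqsubset(lambda n: a(n) * c(n), lambda n: b(n) * c(n)))  # True
# (U6): a ⊏ b implies a < |b| (eventually, in this model)
print(a(N) < abs(b(N)))  # True
```

Of course a finite cutoff `N` only approximates the limit; the model is meant as a sanity check on the axioms, not a construction of the order.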

The half-order $ latex \sqsubseteq$ is not antisymmetric. The field of the real numbers in the sense of standard analysis is

$ latex \mathcal{R}=(\mathbb{R},+,\cdot,<),$

where $ latex \mathbb{R}$ is the set of the real numbers.

The field of the real numbers in the new sense is

$ latex \mathcal{R^*}=(\mathbb{R},+,\cdot,<,\sqsubset).$

– (U2) means the infinity order is invariant under multiplication.

– (U3) means

$ latex a_1\sqsubset a_2\sqsubset \dots \sqsubset a_n\quad \rightarrow\quad a_1\sqsubset a_n,$

where $ latex n$ may be finite or even infinite.

– (U4) means

$ latex a_1\sim a_2\sim \dots \sim a_n\wedge n\sim 1\quad \rightarrow\quad a_1\sim a_n.$

– (U5) is equivalent to

$ latex a_1,a_2,\dots,a_n\sim b\wedge n\sim 1\quad \rightarrow\quad|a_1|+|a_2|+\dots+|a_n|\sim b.$

– (U6) implies $ latex 1\sim 1$, and (U5) assures us that $ latex 2,3,1000,10^{32},\dots$ are finite natural numbers.

In the next post we will show a small part of the extensive arithmetic of the infinity order.


July 10, 2013 at 5:07 pm |

Hi, I just read almost the entire blog in one weekend, and I liked it very much. I was wondering if there is any way to get notes or some slide-shows of your calculus lectures?


October 30, 2016 at 3:40 pm |

Democritus of Abdera knew about non-standard analysis thousands of years ago. If a cone or pyramid is mathematically cut in a horizontal plane parallel to its base, what are we to make of the surfaces it has produced? Are they equal in surface area or not? If they be unequal, then the side of a cone or pyramid is jagged like a series of steps (think of the Great Pyramid in Giza). However, if they be equal, that would imply that two adjacent intersecting planes are equal, which would mean that the cone or pyramid, being made up of equal rather than unequal circles or squares respectively, must have the same appearance as a circular cylinder or square cylinder; which is utterly absurd.