Re: Axiom musings...
From: Tim Daly
Subject: Re: Axiom musings...
Date: Thu, 3 Dec 2020 02:06:11 -0500
I just ran across an interesting example about the difference
between testing and proving (not that I need to be convinced).
http://www.lix.polytechnique.fr/Labo/Samuel.Mimram/teaching/INF551/course.pdf
(page 10):
On day one, our mathematician finds out, using formal
computation software, that

  \int_0^\infty \frac{\sin(t)}{t} \, dt = \pi/2

On day two, he plays around a bit with such formulas and
finds out that

  \int_0^\infty \frac{\sin(t)}{t} \frac{\sin(t/101)}{t/101} \, dt = \pi/2

On day three, he thinks maybe a pattern could emerge and
discovers that

  \int_0^\infty \frac{\sin(t)}{t} \frac{\sin(t/101)}{t/101}
                \frac{\sin(t/201)}{t/201} \, dt = \pi/2

On day four, he gets quite confident and conjectures that,
for every n \in \mathbb{N},

  \int_0^\infty \prod_{k=0}^{n} \frac{\sin(t/(100k+1))}{t/(100k+1)} \, dt = \pi/2

In fact, the conjecture breaks starting at

  n = 15 341 178 777 673 149 429 167 740 440 969 249 338 310 889

and none of the usual tests would have found this out.
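The standard explanation, due to the Borweins, is that such a product
of sinc factors integrates to exactly \pi/2 only while the sum of the
inner frequencies (here 1/101 + 1/201 + ...) stays at or below 1. A
minimal Python sketch of that criterion, using the classic
odd-denominator variant of the same phenomenon (where the pattern
famously first breaks at denominator 15), with exact rational
arithmetic:

```python
from fractions import Fraction

def first_breaking_denominator():
    """Find the first odd denominator m such that
    1/3 + 1/5 + ... + 1/m exceeds 1 -- the Borwein criterion
    for the sinc-product integral to drop below pi/2.
    Exact rationals avoid any floating-point doubt."""
    total = Fraction(0)
    m = 3
    while True:
        total += Fraction(1, m)
        if total > 1:
            return m
        m += 2

print(first_breaking_denominator())  # -> 15
```

For denominators 101, 201, 301, ... the same sum grows like
(ln n)/100, so it only exceeds 1 after roughly 10^43 terms (the n
quoted above), which is why no feasible amount of testing finds the
failure.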
Tim
On 11/29/20, Tim Daly <axiomcas@gmail.com> wrote:
> Axiom, as it was released to me long ago, is being
> overtaken by current technology.
>
> I'm taking an MIT course on "Underactuated Robots" [0]
> (I spent a large portion of my life doing robotics).
>
> The instructor is using Jupyter notebooks to illustrate
> control algorithms. He is doing computation and
> showing the graphics in a single notebook. Axiom
> could "almost" do that in the 1980s.
>
> It is clear that, measured against a sufficiently rich set
> of algorithms, such as one finds in CoCalc, Axiom is
> showing its age.
>
> This is the primary reason why I consider the SANE
> research effort vital. An Axiom system that has
> proven algorithms, follows knowledge from leading
> mathematics (e.g. Category Theory), and pushes the
> state of the art will keep Axiom alive and relevant.
>
> Axiom was, and is, research software. Just doing
> "more of the same" goes nowhere slowly. Following
> that path leads to the dustbin of history.
>
> Tim
>
> [0]
> https://www.youtube.com/watch?v=GPvw92IKO44&list=UUhfUOAhz7ynELF-s_1LPpWg&index=28
>
>
>
> On 11/27/20, Tim Daly <axiomcas@gmail.com> wrote:
>> The current mental struggle involves properly re-constructing
>> Axiom so Category Theory (e.g. objects, morphisms, identity,
>> and composition) is respected between the domains. Done
>> properly, this should make resolving type coercion and
>> conversion less ad-hoc. This is especially important when
>> dealing with first-class dependent types.
>>
>> Axiom categories and domains, in the SANE model, compile
>> to lisp classes. These classes are "objects" in the Category
>> Theory sense. Within a given category or domain, the
>> representation (aka, the "carrier" in type theory) is an
>> "object" (in Category Theory) and most functions are
>> morphisms from Rep to Rep. Coercions are functors,
>> which need to conform to the Category Theory properties.
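The functor laws turn "coercions are functors" into a concrete,
checkable property. A minimal Python sketch (all names hypothetical,
not Axiom's actual API) of the kind of law a coercion must satisfy,
here that embedding integers as constant polynomials preserves
addition:

```python
# Hypothetical sketch: a coercion Integer -> Polynomial viewed as a
# functor must preserve structure; for ring-like domains that means
# it acts as a homomorphism on the operations.

def coerce_int_to_poly(n):
    """Embed an integer as a constant polynomial, represented here
    as a coefficient list [c0, c1, ...] (an assumed dense Rep)."""
    return [n]

def poly_add(p, q):
    """Add two dense coefficient lists."""
    out = [0] * max(len(p), len(q))
    for i, c in enumerate(p):
        out[i] += c
    for i, c in enumerate(q):
        out[i] += c
    return out

# The law the compiler should be able to check:
#   coerce(a + b) == coerce(a) + coerce(b)
a, b = 3, 4
assert coerce_int_to_poly(a + b) == poly_add(coerce_int_to_poly(a),
                                             coerce_int_to_poly(b))
print("coercion preserves addition")
```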
>>
>> It also raises the interesting question of making Rep
>> be its own class. This allows, for example, constructing
>> the polynomial domain with the Rep as a parameter.
>> Thus you get sparse, recursive, or dense polynomials
>> by specifying different Rep classes. This is even more
>> interesting when playing with Rep for things like
>> Clifford algebras.
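A language-neutral Python sketch of "Rep as a parameter" (all class
and function names assumed for illustration, not SANE's actual
design): the same polynomial interface is built over interchangeable
representation classes, so sparse versus dense is decided by the
constructor argument.

```python
# Hypothetical sketch: the domain constructor takes the Rep class as
# a parameter, so different representations share one interface.

class DenseRep:
    """Coefficient list: index = exponent."""
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)
    def coefficient(self, k):
        return self.coeffs[k] if k < len(self.coeffs) else 0
    def degree(self):
        return len(self.coeffs) - 1

class SparseRep:
    """Dict of exponent -> nonzero coefficient."""
    def __init__(self, coeffs):
        self.terms = {k: c for k, c in enumerate(coeffs) if c != 0}
    def coefficient(self, k):
        return self.terms.get(k, 0)
    def degree(self):
        return max(self.terms, default=0)

def make_polynomial(rep_class, coeffs):
    """Choose the representation at construction time."""
    return rep_class(coeffs)

p = make_polynomial(DenseRep, [1, 0, 3])   # 1 + 3*x^2, dense
q = make_polynomial(SparseRep, [1, 0, 3])  # same value, sparse
assert p.coefficient(2) == q.coefficient(2) == 3
assert p.degree() == q.degree() == 2
```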
>>
>> It is quite a struggle to unify Category Theory, Type
>> Theory, Programming, Computer Algebra, and Proof
>> Theory so "it all just works as expected". Progress is
>> being made, although not quickly.
>>
>> For those who want to play along I can recommend the
>> MIT course in Category Theory [0] and the paper on
>> The Type Theory of Lean [1]. For the programming
>> aspects, look at The Art of the Metaobject Protocol [2].
>>
>> [0] https://www.youtube.com/user/youdsp/playlists
>> [1]
>> https://github.com/digama0/lean-type-theory/releases/download/v1.0/main.pdf
>> [2] https://mitpress.mit.edu/books/art-metaobject-protocol
>>
>> Tim
>>
>>
>> On 11/10/20, Tim Daly <axiomcas@gmail.com> wrote:
>>> Another area that is taking a great deal of time and effort
>>> is choosing a code architecture that has a solid background
>>> in research.
>>>
>>> Axiom used CLU and Group Theory as two scaffolds to
>>> guide design choices, making it possible to argue
>>> how a domain should be structured and where it
>>> should occur in the system.
>>>
>>> Axiom uses a Pratt-parser scheme. This makes it
>>> possible to define and change syntax by adjusting the
>>> LED and NUD binding-power values. It works, but it is
>>> not very clear or easy to maintain.
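A minimal Pratt-parser sketch in Python (hypothetical, far smaller
than Axiom's actual parser) shows the mechanism: how a token binds to
the expression on its left is driven by a numeric binding power, so a
syntax change reduces to editing table entries.

```python
# Hypothetical Pratt-parser sketch: NUD handles a token starting an
# expression, LED handles a token following an expression, and the
# binding-power table decides precedence.

import re

BINDING_POWER = {'+': 10, '-': 10, '*': 20, '^': 30}

def tokenize(src):
    return re.findall(r'\d+|[+\-*^()]', src)

def parse(tokens, rbp=0):
    # NUD: how a token behaves at the start of an expression
    tok = tokens.pop(0)
    if tok == '(':
        left = parse(tokens, 0)
        tokens.pop(0)            # consume ')'
    else:
        left = int(tok)
    # LED: how an operator combines with the expression to its left
    while tokens and tokens[0] in BINDING_POWER \
            and BINDING_POWER[tokens[0]] > rbp:
        op = tokens.pop(0)
        right = parse(tokens, BINDING_POWER[op])
        left = (op, left, right)
    return left

print(parse(tokenize("1+2*3")))   # -> ('+', 1, ('*', 2, 3))
```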
>>>
>>> An alternative under consideration is Hutton and Meijer's
>>> Monadic Parser Combinators. This approach constructs
>>> parsers as functions and combines them with higher-order
>>> functions. It is strongly typed. It provides a hierarchy of functions
>>> that would allow the parser to be abstracted at many
>>> levels, making explanations clearer.
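A small Python sketch in the Hutton and Meijer style (untyped here,
unlike their Haskell presentation): a parser is a function from input
to a list of (result, remaining-input) pairs, and bind/result are the
monadic combinators that sequence them.

```python
# Hypothetical parser-combinator sketch: small parsers composed into
# larger ones with the monadic bind.

def item(s):
    """Consume one character, failing (empty list) on empty input."""
    return [(s[0], s[1:])] if s else []

def satisfy(pred):
    """Consume one character only if it satisfies pred."""
    def p(s):
        return [(c, rest) for (c, rest) in item(s) if pred(c)]
    return p

def bind(p, f):
    """Monadic bind: run p, feed each result to f."""
    def q(s):
        return [pair for (a, rest) in p(s) for pair in f(a)(rest)]
    return q

def result(a):
    """Monadic return: succeed without consuming input."""
    return lambda s: [(a, s)]

digit = satisfy(str.isdigit)

# Parse two digits and combine them into a number.
two_digits = bind(digit, lambda a:
             bind(digit, lambda b:
             result(int(a + b))))

print(two_digits("42rest"))   # -> [(42, 'rest')]
```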
>>>
>>> Ideally the structure of the new parser should be clear,
>>> easy to maintain, and robust under changes. Since SANE
>>> is a research effort, maximum flexibility is ideal.
>>>
>>> Since SANE is intended to be easily maintained, it is
>>> important that the compiler / interpreter structure be
>>> clear and consistent. This is especially important
>>> since questions such as which proof language syntax
>>> and which specification language syntax to use are
>>> not yet decided.
>>>
>>> Tim
>>>
>>>
>>>
>>>
>>> On 10/9/20, Tim Daly <axiomcas@gmail.com> wrote:
>>>> A great deal of thought has gone into the representation
>>>> of functions in the SANE version.
>>>>
>>>> It is important that a function lives within an environment.
>>>> There are no "stand alone" functions. Given a function,
>>>> its environment contains the representation, which is
>>>> itself a type with accessor functions. Wrapped
>>>> around these is the domain type containing other
>>>> functions. The domain type lives within several
>>>> preorders of environments, some dictated by the
>>>> structure of group theory, some dictated by the structure
>>>> of the logic.
>>>>
>>>> Lisp has the ability to construct arbitrary structures
>>>> which can be nested or non-nested, and even potentially
>>>> circular.
>>>>
>>>> Even more interesting is that these structures can be
>>>> "self modified" so that one could have a domain (e.g.
>>>> genetic algorithms) that self-adapt, based on input.
>>>> Or "AI" domains, represented as weight matrices,
>>>> that self-adapt to input by changing weights.
>>>>
>>>> Ultimately the goal of the representation should be that,
>>>> given a function, it should be possible to traverse the
>>>> whole of the environment using a small, well-defined
>>>> set of well-typed structure walkers.
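A language-neutral Python sketch of such a walker (all structure and
field names assumed for illustration, not SANE's actual layout): one
generic traversal covers nested environments, and specific queries
are just visitors passed to it.

```python
# Hypothetical structure-walker sketch: a single depth-first
# traversal over nested environment structures, with behavior
# supplied by a visitor function.

def walk(node, visit):
    """Call visit on every node, recursing into dicts
    (environments) and lists (collections)."""
    visit(node)
    if isinstance(node, dict):
        for child in node.values():
            walk(child, visit)
    elif isinstance(node, list):
        for child in node:
            walk(child, visit)

# A toy "function within an environment": the record carries
# its Rep and its enclosing domain.
env = {"domain": "Polynomial",
       "rep": {"kind": "sparse", "carrier": "dict"},
       "functions": [{"name": "add"}, {"name": "mul"}]}

names = []
def collect(n):
    if isinstance(n, dict) and "name" in n:
        names.append(n["name"])

walk(env, collect)
print(names)   # -> ['add', 'mul']
```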
>>>>
>>>> All of this is easy to write down in Lisp.
>>>> The puzzle is how to reflect these ideas in Spad.
>>>>
>>>> Currently the compiler translates all of the Spad
>>>> code to Lisp objects so the Spad->Lisp conversion
>>>> is easy. Lisp->Spad is also easy for things that have
>>>> a surface representation in Spad. But, in general,
>>>> Lisp->Spad is only a partial function, and somewhat
>>>> limited at that.
>>>>
>>>> I suspect that the solution will allow some domains
>>>> to be pure Lisp and others will allow in-line Lisp.
>>>>
>>>> Most of these ideas are nearly impossible to even
>>>> think about in other languages or, if attempted,
>>>> fall afoul of Greenspun's Tenth Rule, to wit:
>>>>
>>>> "Any sufficiently complicated C or Fortran program
>>>> contains an ad hoc, informally-specified, bug-ridden,
>>>> slow implementation of half of Common Lisp."
>>>>
>>>> Fortunately Spad is only a domain specific language
>>>> on top of Lisp. Once past the syntax level it is Lisp
>>>> all the way down.
>>>>
>>>> Tim
>>>>
>>>
>>
>