Tau Developer Update (July 2021)

The development effort, as you all know, is mostly centered around enhancing the performance of TML. A couple of months ago, Ohad suggested we show how we can implement Sudoku in TML, and that has turned out to be a very useful exercise: it led us to make several improvements in the core TML engine.

So Tomas implemented the Sudoku solution in TML, but he’s still working on improving its performance. The best solvers out there take only a fraction of a second, so we want TML to be as performant as the best engines. He has a working solution, but we’re still looking for a faster one. Tomas has also been implementing a very useful feature for TML developers: syntax highlighting for TML in the VS Code and Vim editors, which makes the life of a TML coder much nicer.

He’s also added some improvements to the debugging capabilities of TML in the form of a CLI option, and he’s continued his work on state blocks as well. Lucca is working on another important optimization of TML, what we call 2-CNF extraction.

The objective of 2-CNF extraction is to minimize the size of BDDs and avoid exponential explosions: extracting as many terms as possible from the BDD allows a reduction in its size. Ohad has been able to guide him on the best way of implementing the CNF extraction, but he’s still working on that. Murisi is likewise working on some TML optimizations. The main one is the squaring transformation, another idea of Ohad’s that allows a TML program to execute more than one rule in a step.

The idea is that it could be applied recursively so that a TML program can execute many more rules in one step. Murisi has implemented the first version of that, but it still requires some further optimizations.
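The transformation can be sketched abstractly: if a program's step is a function from state to state, the squared program's step is that function composed with itself, and each further squaring doubles the work done per step. A minimal Python sketch, assuming a single hypothetical counter rule (this is only the idea, not the actual TML implementation):

```python
def square(step):
    """Return a program whose single step runs two steps of `step`."""
    return lambda state: step(step(state))

# Hypothetical one-rule "program": a counter incremented once per step.
inc = lambda n: n + 1

# Applying the transformation recursively doubles the work per step:
# k squarings give 2**k original steps in one step.
fast = square(square(square(inc)))
print(fast(0))  # prints 8
```

Recursive application like this is what lets one step of the transformed program stand in for many steps of the original.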

He’s also implemented what he calls a dead variable elimination algorithm, which is useful in certain TML programs, and he’s continued the integration with the Z3 engine for the CQC (conjunctive query containment) optimizations. Umar continued his universe-of-size-two program transformation, as well as another essential transformation that will probably be enabled by default because it’s so effective. He’s also been working on a type system, where, through a lot of code refactoring, he implemented the ability to rearrange the variables in the BDD. That allows us to try to find the best variable order for BDDs because, as we all know, BDDs are very sensitive to variable order.
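To illustrate how sensitive BDDs are to variable order, here is a small brute-force sketch that counts the internal nodes of a reduced ordered BDD under two different orders. The function, the orders, and the hash-consing builder are purely illustrative, not TML's engine:

```python
def obdd_size(f, order):
    """Count internal nodes of the reduced ordered BDD of boolean
    function f (a Python function over a dict of variable values)
    under the given variable order, via Shannon expansion with a
    unique table (hash-consing)."""
    unique = {}                          # (var, lo, hi) -> node id
    def build(assign, i):
        if i == len(order):
            return 1 if f(assign) else 0
        var = order[i]
        lo = build({**assign, var: 0}, i + 1)
        hi = build({**assign, var: 1}, i + 1)
        if lo == hi:                     # redundant node: skip it
            return lo
        return unique.setdefault((var, lo, hi), len(unique) + 2)
    build({}, 0)
    return len(unique)

# f = (a AND b) OR (c AND d): keeping each pair adjacent is cheap,
# interleaving the pairs costs extra nodes.
f = lambda v: (v['a'] and v['b']) or (v['c'] and v['d'])
good = obdd_size(f, ['a', 'b', 'c', 'd'])
bad = obdd_size(f, ['a', 'c', 'b', 'd'])
print(good, bad)  # prints 4 6
```

With more conjuncts the gap grows exponentially, which is why searching for a good order matters.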

That required a lot of internal refactoring, but it becomes very transparent using a function he calls the pause function. He’s also started implementing a special variable type that can count to infinity by increasing its bit size as needed, a huge addition to TML’s capabilities. Juan continued his work on second-order logic, but he’s also been instrumental in helping Tomas with the Sudoku implementation, where he was able to extend the first-order logic implementation in TML to handle the Sudoku problem robustly.
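The counting-to-infinity variable can be sketched as a counter whose bit vector grows only when a carry runs off the end. This is just an illustration of the idea in Python, assuming a little-endian bit list; the actual TML mechanism works at the BDD level:

```python
class GrowingCounter:
    """Sketch of a counter whose bit width grows on demand; the value
    is stored as a little-endian list of bits."""
    def __init__(self):
        self.bits = [0]

    def increment(self):
        for i, b in enumerate(self.bits):
            if b == 0:
                self.bits[i] = 1
                return
            self.bits[i] = 0        # carry
        self.bits.append(1)         # out of room: grow by one bit

    def value(self):
        return sum(b << i for i, b in enumerate(self.bits))

c = GrowingCounter()
for _ in range(9):
    c.increment()
print(c.value(), len(c.bits))  # prints 9 4
```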

On the Agoras Live platform, Andre again worked full time. He revamped the entire payment system in Agoras Live, made it more robust, and integrated it tightly with the progress of the video: he checks for several error conditions and also enables fractional token payments within the video, depending on its duration.

I’ve been focusing on developing the bit transformations in the size-two universe, which rely on the type system we developed earlier. That allows us to control the size of a variable down to a single bit, and that gets reflected in the BDDs and their corresponding sizes. So we already had a translation from the typed program to the bit-level program, down to the BDDs. Now we intend to develop the capability to permute the ordering of variables within the BDD, but doing it at a higher level first.

That required a generic, function-pointer-like capability that can take an arbitrary ordering and apply it seamlessly within the code, both in the direction from the original program to the bit universe

and in recovering from the BDDs back to the original program. So the same piece of code can be used in a way that allows different types of orderings. That capability was built this month; there was testing, some refactoring, and some bugs were also discovered, which took some time to analyze and fix. During the process we discovered some better ways of doing this, so some effort went into replacing the previous implementation with the improved one.
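The generic ordering capability can be illustrated as a pair of functions built from a permutation of bit positions: one applies the ordering on the way into the bit universe, and its inverse recovers the original view on the way back. A sketch with a made-up ordering (the names and representation are illustrative):

```python
def make_order(perm):
    """Given a permutation of bit positions, return encode/decode
    functions that can be passed around like a function pointer."""
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    encode = lambda bits: [bits[p] for p in perm]   # program -> bit universe
    decode = lambda bits: [bits[p] for p in inv]    # bit universe -> program
    return encode, decode

encode, decode = make_order([2, 0, 1])   # a made-up ordering
original = [1, 0, 1]
reordered = encode(original)             # [1, 1, 0]
assert decode(reordered) == original     # round trip recovers the original
```

Because the ordering is just a value passed in, the same code path serves any ordering, in both directions.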

On the roadmap, the next item is the ability to grow a type. That requires some thought about our design choices around the type system and also in the transformation, so we’ll be working on that.

Also on the roadmap is the ability to change the types of the program on the fly and to reorder the bits according to the new types, with the goal of better optimization.

The main breakthrough of the month was that, under the guidance of Ohad, I realized how to theoretically do the 2-CNF extraction and how it fits into the framework of BDDs. The first steps were to figure out what part of the implementation to start with and to have a rough roadmap of which features should be tackled one after the other, so that at the very end we have a very efficient CNF extraction.

And as Karim mentioned, the goal is to reduce the explosion of the BDDs in memory so that we get faster overall performance. This will be especially helpful in queries that use constraints.

So, for example, the Sudoku in a naive first-order implementation will run a lot faster under this optimization once it’s there.
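The idea of the extraction can be sketched by brute force: enumerate pairs of literals and keep every binary clause that all models of a boolean function satisfy; those clauses can then be handled outside the BDD. This is only a naive illustration of the concept, the real work is doing this directly on BDD structure:

```python
from itertools import product, combinations

def two_cnf_extract(f, n):
    """Collect every binary clause (li OR lj) that all models of the
    boolean function f over n variables satisfy. A literal (i, v)
    means "variable i has value v"."""
    models = [m for m in product([0, 1], repeat=n) if f(m)]
    literals = [(i, v) for i in range(n) for v in (0, 1)]
    clauses = []
    for (i, vi), (j, vj) in combinations(literals, 2):
        if i == j:
            continue                     # skip trivial same-variable pairs
        if all(m[i] == vi or m[j] == vj for m in models):
            clauses.append(((i, vi), (j, vj)))
    return clauses

# f = (x0 OR x1) AND (x2 OR x3): both pair constraints are recovered.
f = lambda v: (v[0] or v[1]) and (v[2] or v[3])
cs = two_cnf_extract(f, 4)
assert ((0, 1), (1, 1)) in cs and ((2, 1), (3, 1)) in cs
```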

The first steps then were to figure out how the basic algorithms will work, and I’ve now started to implement all of this. I’m looking forward to getting this feature ready as fast as possible. Thank you.

Last month I was finally able to finish our Agoras Live payment system. I made the necessary improvements and was finally able to fully integrate it inside the video conferencing platform.

Some of the additions are under the surface, so I can’t demonstrate them. But some are on the surface, so let me demonstrate how it works.

What’s left for next month? Testing first: I’ll send the website out for testing to catch any glitches or missing pieces I haven’t noticed. Next, I need to check the review system, as I haven’t tested it yet. Once that’s done, we can switch to production.

This month, I’ve continued looking for a faster way of solving Sudoku in TML; the most straightforward way, using first-order logic, makes the BDDs explode in size.

There are some TML optimizations in progress which should help. In the meantime, Juan and I started to look into combining the current approach with backtracking to effectively reduce the size required. I’ve also made progress on implementing the state blocks feature I referred to last month, and it will be finished soon. And I’ve added a new command-line option which helps with debugging TML programs.
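For reference, backtracking over a Sudoku grid is the classic depth-first search below: try a value in the first empty cell, recurse, and undo on failure. This is a plain Python sketch of the general technique, not our TML encoding:

```python
def ok(grid, r, c, v):
    """Check that value v can be placed at row r, column c."""
    if v in grid[r] or any(row[c] == v for row in grid):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Fill the first empty cell (0) with the first value that fits,
    recurse, and undo on failure. Solves in place; True on success."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if ok(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0   # backtrack
                return False             # no value fits here
    return True                          # no empty cell left: solved
```

The contrast with the naive first-order encoding is that this search commits to one partial assignment at a time instead of representing all candidate assignments at once.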

There’s already an option, print updates, which tells TML to print all newly added and deleted facts at each step. This debug output can be very long, so I’ve added an option called “print updates if”, which takes the name of a state as an argument.

This option limits the printing of updates to only those steps where the provided state exists in the program’s database. You can think of a state in TML as a zero-arity term, which is basically a table without any columns.
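The behavior can be mimicked in a few lines: treat the database as a set of fact tuples, a state as a fact with no arguments, and print the per-step delta only when the watched state is present. A sketch with hypothetical names (`run`, `watch_state`), not the actual CLI option:

```python
def run(step, db, n_steps, watch_state=None):
    """Run a fact database forward n_steps steps, printing each step's
    added/deleted facts only when the zero-arity fact `watch_state`
    is present (a stand-in for a "print updates if" option)."""
    for t in range(n_steps):
        new_db = step(db)
        if watch_state is None or (watch_state,) in db:
            print(f"step {t}: added {new_db - db}, deleted {db - new_db}")
        db = new_db
    return db

# Facts are tuples; a state is a fact with no arguments, here ("start",).
final = run(lambda db: db | {("tick", len(db))}, {("start",)}, 3,
            watch_state="start")
```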

Last but not least, I’ve finished initial versions of TML syntax highlighting support for the VS Code editor and also for the Vim editor.

Professor Benzmüller:
I’m working in three different directions. One is that I’m part of the academic panel, as everybody knows, and we have made some further efforts towards the knowledge representation language to be used in that context. There are concrete proposals now: a language set up for the law of changing the law that has decent expressive power but also sits in computationally well-behaved territory. We have also discussed in the academic panel what kind of additional challenges we might have to address. I’ve also pointed out some orthogonal issues that we might pay attention to; maybe the restriction in the language also means that we may eventually have problems with some proofs that require a bit more expressivity.

There’s an ongoing discussion on how to balance well between the two parts. That is one part of the story, and I think there’s more to say from the side of the other academics on board, i.e. Enrico Franconi, and from contributions by Ohad, which are very essential here, in particular to address the law of changing the law. There’s further work I’m involved in, as I gave a series of workshops. They focused partly on automated theorem proving, in particular a resolution prover; I showed the step from propositional reasoning to first-order reasoning and higher-order reasoning, and I pointed to the different challenges that come into play, but also the different new opportunities you get at the different expressivity levels. That was in the workshops over the last two weeks. Today, we also talked a bit about orthogonal aspects: the challenges for the project that have to do with user interaction.

On the implementation side, I’m working on the model finder. It is also intended to be a theorem prover, and I’ve actually made further progress in the latter direction, but also in the model finding sense.

What I can do now is hand it propositional or first-order problems, even partly higher-order by now, and ask the prover to find me a satisfying model for a given domain size.

So given constraints on a domain, I can even give it a list of such constraints, and it will then go through the list and find me the smallest one: the smallest domain size in which it can satisfy the formula.
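A toy version of that search: for growing domain sizes, enumerate every interpretation of a single binary relation and return the first domain size where some interpretation satisfies the constraints. A brute-force Python sketch, nothing like the real tableau-based prover:

```python
from itertools import combinations, product

def smallest_model(constraint, max_size=4):
    """For growing domain sizes, enumerate every binary relation R over
    the domain and return the first (size, R) satisfying `constraint`."""
    for n in range(1, max_size + 1):
        dom = range(n)
        pairs = list(product(dom, dom))
        for k in range(len(pairs) + 1):
            for rel in combinations(pairs, k):
                if constraint(dom, set(rel)):
                    return n, set(rel)
    return None

# Constraint: R is nonempty, irreflexive, symmetric -> needs 2 elements.
c = lambda dom, R: bool(R) and all(x != y for x, y in R) \
    and all((y, x) in R for x, y in R)
size, model = smallest_model(c)
print(size)  # prints 2, with a witnessing relation in `model`
```

The real system replaces this exponential enumeration with guided search, but the input/output contract is the same: constraints in, smallest satisfying domain size out.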

Right now, I’m extending that to infinite domains by using Skolemization and free variables. It’s a tableau approach, it runs partly concurrently, and it runs externally so far.

I think in the next month I should have everything together, so that it will be both a theorem prover and a countermodel finder or model finder at the same time. You give it the problem, and it will tell you whether it’s a theorem or counter-satisfiable, and it will output some model information.

My expectation is that next month, or perhaps a few weeks later, I’ll have the entire thing together so that you get relatively robust behavior.

Some design decisions were made quickly and can be improved on, and as usual when you start such a project, there are months of room for further implementation work aimed at gaining efficiency. The first thing, as usual, is to get an entire loop together.

This month, I extended the Z3 containment check to support queries containing first-order logic and arithmetic. Before, you could only check whether two conjunctive queries contain each other.

But now you can check a larger range of queries. This will help in some of the optimizations I have to do for the TML engine. Another thing I did was implement the minimization of queries containing logic, using the containment checks just mentioned.

So if you input a large query in TML, this algorithm uses Z3 to make it smaller by removing the redundant parts of the query. Hopefully, in the end, this will result in faster execution of the query because it is smaller.
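The underlying containment idea, for the purely conjunctive case, can be sketched without Z3: a boolean conjunctive query Q1 is contained in Q2 exactly when Q2's body maps homomorphically into Q1's body, and minimization repeatedly drops atoms whose removal preserves equivalence. A brute-force sketch (head variables are ignored for simplicity; the real implementation delegates to Z3 and handles arithmetic):

```python
from itertools import product

def contained_in(q1, q2):
    """Boolean conjunctive query containment: q1 ⊆ q2 iff q2's body maps
    homomorphically into q1's body (q1's variables frozen as constants).
    Atoms are (predicate, (terms...)) pairs."""
    vars2 = sorted({t for _, args in q2 for t in args})
    consts = sorted({t for _, args in q1 for t in args})
    db = set(q1)
    for image in product(consts, repeat=len(vars2)):
        h = dict(zip(vars2, image))
        if all((p, tuple(h[t] for t in args)) in db for p, args in q2):
            return True
    return False

def minimize(q):
    """Drop atoms whose removal leaves an equivalent query."""
    q = list(q)
    for atom in list(q):
        smaller = [a for a in q if a != atom]
        if smaller and contained_in(smaller, q):
            q = smaller
    return q

# "x has an edge to something" stated twice collapses to a single atom.
q = [("E", ("x", "y")), ("E", ("x", "z"))]
assert len(minimize(q)) == 1
```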

I also worked on what we call the squaring transformation, which essentially, as Karim mentioned, takes a TML program and makes another one which, for every step of its execution, executes two steps of the original program’s execution.

You could also call that in another programming language [...]. We’re hoping that it will create opportunities for us to minimize queries and to eliminate certain queries entirely. But it’s not yet performant, so I’ll have to continue working on it next month. Another optimization I implemented is the dead variable elimination algorithm. I saw the opportunity for it when I realized that certain variables weren’t being used and were making certain parts of the TML engine slow because they were creating big data structures. Once I implemented the dead variable elimination algorithm, I was able to make some programs that used to take forever to execute go down to 10 or 20 seconds. So that was a very effective transformation. I’ve also discovered some latent bugs in the back end, which Juan and some of the other programmers on the team have helped resolve.
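The dead variable idea can be illustrated simply: a body variable that is not exported to the head and occurs in only one atom imposes no join constraint, so it can be projected away before evaluation. A hypothetical sketch in Python (the atom and rule representations are made up, not TML's internals):

```python
from collections import Counter

def eliminate_dead_vars(head_vars, body):
    """Replace each body variable that is absent from the head and used
    in only one atom with a wildcard '_', so the corresponding column
    can be projected out early. Atoms are (predicate, (terms...))."""
    counts = Counter(t for _, args in body for t in args)
    live = set(head_vars)
    return [(pred, tuple(t if t in live or counts[t] > 1 else "_"
                         for t in args))
            for pred, args in body]

body = [("edge", ("x", "y")), ("color", ("y", "c"))]
print(eliminate_dead_vars(("x",), body))
# 'c' is dead: not in the head and used only once, so it becomes '_'
```

Dropping such columns keeps the intermediate relations (and hence the BDDs) smaller, which is where the observed speedups come from.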

This month, I continued working on first-order logic especially, because I’m looking to get a robust implementation in order to really wrap up the approach I’ve been mentioning for second-order logic.

So I completed the uniqueness quantifier, which was the remaining piece to do. I’m also doing more comprehensive testing of the feature in order to clear out any pending bugs inside first-order logic.

Regarding Sudoku, I joined forces with Tomas to try to figure out what was going on. Using Sudoku as a benchmark is not a very friendly test case, really a worst-case scenario for us, because Sudoku has a single solution, while in TML we aim to get every variable assignment that satisfies or solves a problem. In Sudoku, by the very statement of the problem, it’s about a single solution.

So metaphorically, it’s like fishing with a harpoon versus fishing with a net, which is what we do in TML. We are aware that we have some performance penalties, so we have been reviewing the first-order resolution algorithm we have there in order to improve our heuristics for searching the combinatorial space. We are not trying to beat any of the state-of-the-art solvers, but we will need to find a better-performing solution there. I’m looking forward to wrapping up this second-order feature I’ve been describing for some time already.

Last time I mentioned that I worked on solving Boolean equations, and that I had solved it for the atomless case and for the atomic case. I was stuck, but I managed to find a solution for the general case, for all Boolean algebras.

It was an open problem for many years, maybe 100 years, and it’s the first time that there is a solution that completely characterizes all possible solutions. “All possible solutions” is maybe not the best way to put it; rather, it completely characterizes the cases where the problem is solvable.
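For context, the classical one-variable case over the two-element algebra is easy to state: f(x) = 0 is solvable iff f(0) ∧ f(1) = 0, and the Löwenheim-style parametric formula x = f(0) ∨ (t ∧ ¬f(1)) produces a root for every choice of the parameter t. The sketch below checks this exhaustively; it does not capture the general result for arbitrary Boolean algebras described above:

```python
from itertools import product

def solvable(f):
    """Boole's classical condition: f(x) = 0 has a solution
    iff f(0) AND f(1) = 0."""
    return not (f(0) and f(1))

def solution(f, t):
    """Parametric solution: when solvable, every parameter t
    yields a root x = f(0) OR (t AND NOT f(1))."""
    return int(f(0) or (t and not f(1)))

# Exhaustive check over the two-element algebra.
for f0, f1 in product([0, 1], repeat=2):
    f = lambda x, f0=f0, f1=f1: f1 if x else f0
    if solvable(f):
        assert all(f(solution(f, t)) == 0 for t in (0, 1))
```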

Using this, I continued to develop my new logic that completely solves the problem of self-referential laws of changing the laws. It can be seen, to some limited extent, I must warn you, as a self-aware kind of AI.

And I started to write a paper that outlines this new logic, which is an extension of any logic: you can take any logic and enhance it with these constructions in order to allow it to include self-referential metalogical statements.

This paper is already 30 pages long, and I still have a lot of work. I’m not sure if and how I will publish it, but that’s indeed my main focus this month and for the next month as well.

Professor Franconi:
This month, the most relevant activity that we did, as a group of logicians and scientists discussing the foundation of Tau, was basically the proposal by Ohad to have a special formalization for stating laws for changing the laws.

He proposed a very interesting idea based on Boolean algebras, which basically means that we can define rules on how to change laws over arbitrary basic laws defined in any other logic, which is interesting.

Moreover, these rules for changing can apply to the rules of changing themselves, so they can be self-referential. That’s a major breakthrough, I believe, and the foundation is very solid, very interesting, and very new, I would say, applied in this particular field.

So while Ohad and the rest of us are progressing and finalizing this level, we’re already starting to think about the basic logic that would serve the purpose of expressing regular laws.

And we are focusing on probably using description logics in the form of the OWL standard from the W3C.

The advantage is that it is a standard and already comes with a lot of research, even though this context will probably require novel implementation techniques, as well as extensions to capture phenomena that would be interesting for the users of Tau.

These extensions would have to be studied and researched. It has been a very interesting month, with Ohad as the main actor.