Tau Developer Update (June 2021)

Andrei continued his work on the Agoras Live application, including support for multiple knowledge areas per user, which necessitated several new dialogues and queries. His big task this month was updating to the latest version of the Big Blue Button video conferencing platform, and he also fixed numerous issues that showed up in further testing. On the TML side, Lucca and Murisi continued their cooperation on integrating the Z3 engine for Conjunctive Query Containment and minimisation. Lucca implemented the feedback we gave him during our last code review and started on a new feature assigned by Ohad, the 2-CNF extraction optimisation. Ohad also recommended he look at solving systems of Boolean equations, so he has started on that as well. Murisi continued on the Z3 implementation, further advancing that optimisation. He also carried his Datalog-like backward chaining solver forward into TML, so we now have the beginnings of a backward chainer in TML, something Ohad has set out as a main goal for a long time. In doing so he identified some memory and performance issues, as usual, and worked on resolving those. Umar finished a major milestone this month: converting programs to a bit universe of size two. Next month he will work on the features that follow from that conversion, namely dynamic bit assignment on a per-variable basis and potentially even infinite variables, which would be a big enhancement for TML. Another big feature he finished was the static type system with local type inference. Juan continued to inch forward on second order logic, with some revisions of the SOL algorithm, and has started looking at bit universe conversion as well. On the IDE front, Tomas continued to implement the Sudoku demo.
He’s implementing the classic dancing links algorithm in TML, and he has made a syntax enhancement called state blocks which will greatly improve the readability of TML. Another exciting development is that he’s adding TML support to Microsoft Visual Studio Code, which will give you syntax highlighting for TML within the editor.

Murisi:

This month, within the TML codebase, I designed, implemented and optimised a backward chaining solver. I started in Python, moved it to C++, and optimised it to the point that it matches the performance of SWI-Prolog for programs like parsers; not a full comparison, just for the programs we are generally interested in. I also implemented a backward chaining TML interpreter which allows rules that delete facts, and a TML-to-Prolog translation to see how this interpreter stacks up against other Prolog interpreters in the industry. Both implementations of TML were slow, which I think has to do with the fact that retraction makes it harder to reason backwards in the context of TML; more investigation is needed to confirm this. I also extended Lucca’s work to support Conjunctive Query minimisation. It’s a dynamic optimisation: it takes a rule and keeps removing parts of its body, causing the program to change, and I did some work to let it support programs that change during the optimisation process. The last thing I did was contribute to the Tau research efforts. We discovered a certain self-referential logic and wanted to see if it was quick to implement. It’s going to be intractable but not impossible; we might have to do it in a slightly different way.
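To illustrate the idea of reasoning backwards from a goal, here is a miniature Python sketch of SLD-style backward chaining over Datalog-like Horn rules. This is not the TML solver; the term encoding and all names are assumptions made for the example.

```python
# Illustrative sketch of backward (goal-driven) chaining over Horn rules.
# Terms are tuples like ("parent", "tom", "?x"); strings starting with
# "?" are variables. Not TML's actual solver, just the core idea.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Return an extended substitution unifying a and b, or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, k):
    """Freshen rule variables so separate rule uses don't clash."""
    if is_var(t):
        return t + "#" + str(k)
    if isinstance(t, tuple):
        return tuple(rename(x, k) for x in t)
    return t

def solve(goals, rules, s, depth=0):
    """Yield substitutions that prove all goals (SLD resolution)."""
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for head, body in rules:
        s2 = unify(goal, rename(head, depth), s)
        if s2 is not None:
            renamed = [rename(b, depth) for b in body]
            yield from solve(renamed + rest, rules, s2, depth + 1)

def resolve(t, s):
    """Read a term back through the substitution."""
    t = walk(t, s)
    return tuple(resolve(x, s) for x in t) if isinstance(t, tuple) else t
```

With facts `parent(tom, bob)` and `parent(bob, ann)` and the usual ancestor rules, querying `anc(tom, ?w)` enumerates the answers by working backwards from the goal, instead of saturating all derivable facts forwards as a standard Datalog evaluator would.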

Lucca:

I started the month by implementing feedback on my previous work. I then looked into Ohad’s work on solving systems of Boolean equations containing inequalities in order to stay up to date. After that I started my work on the 2-CNF extraction optimisation for BDDs, which aims to make BDDs smaller by treating the 2-CNF parts of logical formulas differently. These parts can be seen as constraints on a BDD, which should make BDD operations faster and shrink BDDs in size.
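To make concrete what treating the 2-CNF parts differently can mean, here is a hedged Python sketch (not the actual TML/BDD code): it splits out the binary clauses of a CNF formula and decides them with the classic implication-graph 2-SAT technique, which is far cheaper than folding them into a BDD.

```python
# Hedged sketch: isolate the 2-CNF fragment of a CNF formula and treat
# it as an implication graph, the standard 2-SAT technique. Variables
# are ints 1..n; -v denotes the negation of v. Illustrative only.

from collections import defaultdict

def split_two_cnf(clauses):
    """Separate binary clauses (the 2-CNF part) from the rest."""
    two = [c for c in clauses if len(c) == 2]
    rest = [c for c in clauses if len(c) != 2]
    return two, rest

def two_sat(two, n):
    """Decide satisfiability of binary clauses via SCCs (Kosaraju)."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in two:
        # (a or b) is equivalent to (-a -> b) and (-b -> a)
        graph[-a].append(b)
        graph[-b].append(a)
        rgraph[b].append(-a)
        rgraph[a].append(-b)
    lits = list(range(1, n + 1)) + [-v for v in range(1, n + 1)]
    seen, order = set(), []

    def dfs1(u):
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)

    for u in lits:
        if u not in seen:
            dfs1(u)

    comp = {}

    def dfs2(u, c):
        comp[u] = c
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, c)

    c = 0
    for u in reversed(order):
        if u not in comp:
            dfs2(u, c)
            c += 1
    # Unsatisfiable iff some variable and its negation share an SCC.
    return all(comp[v] != comp[-v] for v in range(1, n + 1))
```

In this picture the longer clauses would still go to the BDD, while the binary ones act as cheap side constraints on it.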

Andrei:

Last month I made multiple changes to our Agoras Live platform for the feature of multiple knowledge topics per user, which required lots of dialogue and UI changes. Now when you click on a group session you get a list of the people attending. I also fixed lots of small issues around the platform such as fonts, incorrect dates, and layouts. Our video conferencing platform Big Blue Button had a major upgrade, which I integrated; I had to reimplement our payment system because of it and am on the cusp of finishing that. In the coming months I’m going to finish the payment system testing, and that should be it for all the major functionality and testing for Agoras Live before moving it to closed beta testing.

Tomas:

I’ve continued working on solving Sudoku in TML. I’ve implemented Donald Knuth’s dancing links algorithm, which uses backtracking over a circular doubly linked list, and right now I’m trying to solve harder games efficiently. I’ve also been implementing a new syntax feature called state blocks, which should improve the readability of TML programs: it allows grouping a state’s rules into blocks and reduces the need to repeatedly add the state body to each rule. Finally, I’m working on adding TML syntax highlighting support to the VS Code editor.
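The trick that gives dancing links its name is O(1) removal and restoration of a node in a circular doubly linked list, which lets a backtracking solver undo choices just by relinking. A minimal illustrative Python sketch (the names are my own, not the TML implementation):

```python
# The core "dance": unlinking a node from a circular doubly linked list
# and relinking it later in O(1). Knuth's DLX applies this to the rows
# and columns of a sparse 0/1 matrix encoding an exact-cover problem,
# which is how a Sudoku grid is encoded. Illustrative sketch only.

class Node:
    def __init__(self, value=None):
        self.value = value
        self.left = self.right = self  # circular: points to itself

def insert_after(a, b):
    """Link node b immediately to the right of node a."""
    b.left, b.right = a, a.right
    a.right.left = b
    a.right = b

def unlink(n):
    """Remove n from the list; n keeps its own pointers."""
    n.left.right = n.right
    n.right.left = n.left

def relink(n):
    """Undo unlink(n) in O(1) -- this is what makes backtracking cheap."""
    n.left.right = n
    n.right.left = n

def values(head):
    """Collect values going rightwards from the header node."""
    out, n = [], head.right
    while n is not head:
        out.append(n.value)
        n = n.right
    return out
```

During the search, covering a column unlinks it (and the rows that use it); on backtrack everything is relinked in reverse order, with no copying of state.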

Juan:

I keep moving forward with the second order logic feature, which turned out to be a harder problem than originally envisioned, but I still think I’m on the right track to get both the algorithm and the implementation of a BDD-based solver for second order variables. I have elaborated the structures to synthesise the variables. As mentioned before, it’s not an explicit representation: the variables are represented as a set of constraints, which makes the method very similar to constraint solving problems. In the short term I’m looking to wrap up and reach an initial working prototype for a given set of formulas, not just the existential form. I’ve also been looking at some blockchain matters and how we interact with Ethereum nowadays; we are starting to look at the proof of execution algorithm behind it.

Prof. Umar Mohammad:

I spent the initial weeks improving the type checking rules, ensuring that typing-related errors are properly handled. I also tested local type inference, so arithmetic terms that don’t have a type specified can now have their types inferred. There were also improvements to make the fixed point converge faster, and some of the previous code was removed: we now have just one flag to emit the bit transformation of an input program. Further improvements were made with respect to selecting the order of bit variables for the BDDs at a higher level: within the bit universe world you can now specify the permutation order of the variables for the underlying BDDs. This code is being tested and reviewed, and I think it’s important to try the feature and give feedback so we know it’s working as expected. It’s also worth comparing it against the case without a bit transformation, to see what advantages we are getting. In future we are looking into indefinite types and also improving the bit transformations further, making them faster and more compact.
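As a rough illustration of what a bit transformation does (a sketch under my own naming, not the actual TML code): each variable ranging over a k-element universe becomes ceil(log2 k) Boolean variables, and a chosen permutation then fixes the order in which those bits are handed to the BDD.

```python
# Hedged sketch of a bit transformation: values become fixed-width bit
# vectors, and a chosen permutation sets the BDD variable order.
# Function names are illustrative, not TML's.

from math import ceil, log2

def bits_needed(universe_size):
    """Boolean variables needed to encode one k-valued variable."""
    return max(1, ceil(log2(universe_size)))

def to_bits(value, width):
    """Encode an integer as a bit vector, least significant bit first."""
    return [(value >> i) & 1 for i in range(width)]

def from_bits(bits):
    """Decode an LSB-first bit vector back to an integer."""
    return sum(b << i for i, b in enumerate(bits))

def permute(bits, order):
    """Reorder bit positions before handing them to the BDD layer;
    a good order can shrink the resulting BDD dramatically."""
    return [bits[i] for i in order]
```

The interesting engineering question, as noted above, is which permutation to pick, since BDD size is highly sensitive to variable order.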

Prof. Benzmüller:
I continue to contribute to the academic discussions. In the last months I think we gained a novel perspective by briefly turning away from the law of changing the law and questioning additional relevant aspects of what our language should also support; that orthogonal perspective then comes back to the challenges with the law of changing the law. There is an interesting interaction visible in our discussions now, and I think that was a fruitful step for us. We additionally commented on a project presentation under preparation, which also led to an interesting discussion relating back to the project. For instance, this week it was very interesting to discuss the notion of logical AI that is at the core of the project, and to compare it with the notion of hybrid AI, the symbiosis of sub-symbolic and symbolic technology, and where we stand with our opinions and positions on the project. We have dived into the challenges and questions related to controlled natural language for Tau. Here too one can see further progress in what we are gaining in terms of mutual understanding, but there are also enough controversies being discussed that it’s clear this is a big topic to be addressed in future discussions. We’ve made progress on Tau’s working hypothesis, which could now soon enable further concrete studies, such as the hypothesis of looking at the description logic choice plus an extension towards the law of changing the law that has been worked out and proposed by Ohad. To me it looks like an interesting hypothesis to challenge now with our questions, for instance in the tutorial we just had today on how to treat modalities. I’m personally still wondering whether this entire setup could benefit from a parallel, early-on modelling in Isabelle/HOL in order to feed results back, eventually through rapid prototyping experiments. Additionally I’ve been working on an implementation of a prover and model finder in both concurrent and sequential versions, to gain some experience with concurrency. This is still outside of the TML language, but the idea is that the experience here will flow back into an implementation in TML. I also spent some time on the Isabelle tutorials.

Ohad:

I’ve always said that TML’s parsing abilities are too slow. However, it was surprising and nice to see that when Murisi translated the parsing task to state-of-the-art Prolog interpreters like SWI-Prolog and XSB, they were much slower than TML. Even though TML is not fast enough, it is still much faster than existing state-of-the-art logic programming languages. We’ve become more focused on knowledge representation and controlled natural language. Professor Franconi devised a knowledge representation language for the system for us to focus on. We can have a construction that allows self-referential methodological statements based on Boolean algebra, and I’ve been working on enhancing this formalism from the so-called atomless case to the atomic case, without much success; it is a difficult problem. Luckily we don’t really need it, as we are more interested in the atomless case. I’ve considered giving up on the atomic case, but we will see.