Andrei continued his work on the Agoras Live application. He added a major feature allowing publishers to have multiple areas of interest per user: for example, you could be a lawyer, a tennis coach and a cook, and each of those areas of interest would carry its own rating. He also continued testing, fixing a few bugs and general usability issues.

Lucca continued the integration of the Z3 solver into the TML optimiser. We are considering splitting TML into two parts, an optimising compiler and a runtime; a final decision on this is still to be made. Z3 has given us significant performance improvements in query subsumption. Lucca’s code has been reviewed and is excellent for somebody of his level of experience, and he is getting ready to integrate it into the main codebase.

Murisi continued his work on TML optimisations. He helped Lucca with the Z3 implementation and identified several other avenues for optimisation. He improved the hashing algorithms for BDDs, as we found our existing algorithm was not good enough, which yielded a general performance improvement in TML. He also identified that, even within a single rule, the ordering of the atoms in the body makes a big difference to TML performance.

Juan continued refactoring the tables code with an eye on potentially splitting the execution and compilation steps. He is also redesigning the arithmetic built-ins, as well as the built-ins for log base 2 and summation, and has continued working on second-order logic.

Tomas continued his implementation of the Sudoku puzzle in TML, which uncovered some bugs in TML that he has managed to fix.

Prof. Saul is quickly coming up to speed on the TML codebase; Ohad had him write his own BDD implementation, which he did very quickly. We all reviewed it and it looks really good.
One line of my work has focused on a layered approach to working with the law, and it was quite successful. I was able to simplify the ideas I had here, even using a very simple controlled natural language, with some illustrative examples showing the route we could take if we decide to go for such a layered approach. It is not the preferred solution at the moment, but it would be an alternative. We could start with a layer in which the law is controlled, so the law is formalised; a layer above it in which we can talk about rules for changing the law; and a third layer in which those rules are themselves governed. Such a layered approach should be relatively straightforward, but it wouldn’t give us the ultimate goal, which remains on the table: talking about the law of changing the law in the law itself. It was an interesting line of research and created an alternative view for the project. We have also looked into experiments with Boolean algebras to see how they perform; there are open questions about the decidability of these problems. As a third line of activity, I am working on a draft paper on a model-finder approach for the project.
Lucca: I started the month exploring the Z3 API for C++. Fortunately, there is a big example file on GitHub from which I learned how to handle the API. I integrated Z3 into TML so we can use it in our project. I also started an implementation of query containment that takes a different route: it takes a query, encodes it as a logical formula, and then uses Z3 to check whether two rules, converted to logical formulas, imply each other. If they do, we know one rule is contained in the other. With this mechanism we can now plug the new containment checks between rules into the existing optimisations which Murisi has already implemented. We should see a speedup in the system. I will also see whether this speeds up all problem instances or whether some cases need more careful treatment. This is an outline for the work going into next month.
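The idea behind the containment check can be illustrated with a dependency-free sketch: rule A is contained in rule B when A's formula implies B's under every assignment, and Z3 decides this by checking that A AND NOT B is unsatisfiable. The toy version below simply brute-forces the assignments; the two lambda "rules" are hypothetical stand-ins, not TML code or the actual Z3 integration.

```python
from itertools import product

def implies_all(f, g, num_vars):
    """Check whether f -> g holds under every truth assignment.
    This brute-forces all assignments; Z3 answers the same question
    by checking that (f AND NOT g) is unsatisfiable."""
    return all(g(*vals) or not f(*vals)
               for vals in product([False, True], repeat=num_vars))

# Two hypothetical "rules" encoded as formulas over variables (x, y):
narrow = lambda x, y: x and y   # body with two conditions
broad  = lambda x, y: x         # body with one condition

print(implies_all(narrow, broad, 2))  # True: narrow is contained in broad
print(implies_all(broad, narrow, 2))  # False: the converse does not hold
```

When containment holds, the contained rule is redundant and can be dropped by the optimiser, which is where the expected speedup comes from.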
We’ve had a very interesting month of discussions on the flexibility of the language to express Tau, and on being able to capture the notion of self-reference. Basically, we have to be able to write rules that can modify themselves, or check whether they are true or not and then decide on a course of action. This notion of self-reference is quite important in logic and easily leads to well-known paradoxes. So, first of all, we are discussing all the use cases in Tau for which this kind of reasoning should be relevant. Then we are trying to understand whether there is a general way to capture this phenomenon in a logic that would be feasible in practice but expressive enough to do what needs to be done. I’ve found these ideas quite challenging, but they are really along the lines of the vision of Ohad and IDNI as a whole: to build a system that evolves in exactly the way the user wants it to evolve, including the possibility of changing the rules that govern the evolution of Tau itself. I think it’s an interesting idea; from a research and foundational point of view it’s rather novel, and the kind of solution we are working on as a team of scientists is quite non-trivial, but we continue to work on it.
I’ve been trying to implement combinatorial problems in TML, like Sudoku or the Latin square. In doing so I’ve revised the count built-in, removing its input argument. I was considering creating built-ins for processing the records of a table one by one, but I managed to do it with the current TML features without adding anything. I’m also testing and debugging TML: I found a bug in variable alignment for first-order formulas, and also found a suspicious performance difference when using inequality inside versus outside first-order formulas.
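To give a flavour of the constraints these puzzles impose, here is a plain-Python check (not TML) that a grid is a Latin square; Sudoku adds box constraints on top of the same row/column requirement. This is only an illustrative sketch of the problem being encoded, not the TML encoding itself.

```python
def is_latin_square(grid):
    """A Latin square of order n: each of the n symbols appears
    exactly once in every row and every column."""
    n = len(grid)
    symbols = set(range(1, n + 1))
    rows_ok = all(set(row) == symbols for row in grid)
    cols_ok = all({row[c] for row in grid} == symbols for c in range(n))
    return rows_ok and cols_ok

print(is_latin_square([[1, 2, 3],
                       [2, 3, 1],
                       [3, 1, 2]]))  # True
print(is_latin_square([[1, 2, 3],
                       [1, 2, 3],
                       [3, 1, 2]]))  # False: column constraint violated
```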
Prof Umar Mohammad:
I have mostly been working on improving the algorithm for type checking and type inference. We now have a fixed-point based iterative type inference algorithm. It required changes in several places that use the inferred types and static types. We can now also convert arithmetic terms, equality, and less-than-or-equal terms into the bit-size universe. Most of the testing is still underway, and I will continue working on finalising this. I’ve also added some tests and merged these changes into the master branch.
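The fixed-point idea can be sketched as follows: propagate bit-size requirements through the constraints repeatedly until nothing changes. This is only a toy illustration of fixed-point iteration under assumed widening constraints, not TML's actual type inference algorithm; the variable names and constraint shape are hypothetical.

```python
def infer_types(constraints, initial):
    """Fixed-point type inference sketch: repeatedly widen each
    variable's bit size until no update occurs.
    `constraints` maps a variable to the variables it must be at
    least as wide as (e.g. arising from equality or arithmetic)."""
    types = dict(initial)           # var -> current bit size
    changed = True
    while changed:                  # iterate until a fixed point
        changed = False
        for var, deps in constraints.items():
            needed = max(types.get(d, 0) for d in deps)
            if needed > types.get(var, 0):
                types[var] = needed
                changed = True
    return types

# Hypothetical constraints: x = y + z widens x; w = x widens w.
sizes = infer_types({"x": ["y", "z"], "w": ["x"]}, {"y": 8, "z": 16})
print(sizes)  # {'y': 8, 'z': 16, 'x': 16, 'w': 16}
```

Because sizes only ever grow and are bounded, the iteration is guaranteed to terminate, which is the usual argument for such fixed-point algorithms.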
Last month I continued testing Agoras Live and implemented the necessary improvements to the user experience, including fixing the infinite scroll. Let’s go into the more significant changes.
This month I implemented a performance increase. I was playing around with rule ordering and found that the order in which the bodies of a rule are evaluated causes significant performance differences. I think this is because the body ordering of a rule changes the sizes of the intermediate BDDs during the computation. So now we know that body ordering, in addition to variable count, is important for TML performance and is something we need to pay attention to. It also occurred to me that if body ordering matters, I could turn this into an optimisation that measures how much time each ordering takes and permutes the body ordering based on runtime information; such an optimisation would only benefit longer-running programs. I also wrote a parser generator which takes TML grammars and produces parsers in Datalog-like languages; it currently produces a parser in both TML and Prolog. I wanted to gauge the performance of the TML grammar in other, similar Datalog-like languages: TML takes about 3.7 seconds to parse itself, while Prolog takes about 1 second to parse TML. It’s important to note that Prolog is backward chaining whereas TML is forward chaining, so I still need to compare TML to other forward-chaining Datalog-like languages; I now have a framework for doing so. I also helped Lucca with his engine and participated in the code review for his Z3 integration. In addition, I wrote a top-down parser for, and in, TML. As you would expect from a top-down parser, it produced far fewer productions, about 5% fewer than the bottom-up parser we currently have, but unfortunately it didn’t improve performance: even though the parses use about 5% fewer productions, parsing takes about 5 times longer, so reducing productions doesn’t necessarily correspond to a speed increase in a parser. The last thing I did was design and implement a backward-chaining interpreter for P Datalog with support for built-ins.
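The body-ordering effect can be illustrated with a small sketch in which sets of facts stand in for BDDs and set intersection stands in for the per-atom join: evaluating a selective atom first keeps the intermediate results small. The relations below are hypothetical examples, not TML data.

```python
def join_cost(relations, order):
    """Evaluate a conjunctive rule body by intersecting fact sets
    in the given order; return the sizes of the intermediate
    results, which stand in for intermediate BDD sizes."""
    current = relations[order[0]]
    sizes = []
    for name in order[1:]:
        current = current & relations[name]
        sizes.append(len(current))
    return sizes

# Three body atoms over the same variable, as fact sets:
rels = {
    "p": set(range(0, 1000)),     # large
    "q": set(range(500, 1500)),   # large, half-overlapping with p
    "r": {1, 2, 3},               # tiny, highly selective
}
print(join_cost(rels, ["p", "q", "r"]))  # [500, 0]: big intermediate
print(join_cost(rels, ["r", "p", "q"]))  # [3, 0]: selective atom first
```

Both orders reach the same final answer, but the work done along the way differs sharply, which is why a runtime-informed reordering optimisation would mainly pay off for long-running programs.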
TML is currently forward chaining, which means it starts off with an initial database and applies the rules to that database to derive new facts. Backward chaining, by contrast, starts off with a goal and figures out whether that goal can be reached from the initial database, working out what conditions are necessary for the goal to hold. I implemented this for P Datalog, and with P Datalog being equivalent to TML, it was an encouraging result. I’ve also realised how a backward-chaining interpreter for TML could be implemented, so I hope to get into that more next month.
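The two evaluation strategies can be contrasted in a minimal propositional sketch, assuming rules of the form (premises, conclusion); this is an illustration of the general technique, not the P Datalog interpreter itself.

```python
def forward_chain(facts, rules):
    """Forward chaining: repeatedly apply every rule
    (premises -> conclusion) to the fact set until no new
    facts appear, then return everything derivable."""
    facts = set(facts)
    while True:
        new = {head for body, head in rules
               if all(b in facts for b in body) and head not in facts}
        if not new:
            return facts
        facts |= new

def backward_chain(goal, facts, rules, seen=None):
    """Backward chaining: start from the goal and recursively ask
    whether each premise is either a base fact or derivable."""
    seen = seen or set()
    if goal in facts:
        return True
    if goal in seen:          # guard against cyclic rules
        return False
    seen = seen | {goal}
    return any(all(backward_chain(b, facts, rules, seen) for b in body)
               for body, head in rules if head == goal)

rules = [(("a", "b"), "c"), (("c",), "d")]
facts = {"a", "b"}
print(forward_chain(facts, rules))        # {'a', 'b', 'c', 'd'}
print(backward_chain("d", facts, rules))  # True: goal-directed proof
```

Forward chaining computes everything reachable from the database, while backward chaining only explores the rules relevant to the goal, which is why it can be attractive for query answering.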
My focus has been on the TML core codebase. One major step, now close to being committed, is the bisection of the tables class, which had incorporated many transforms and features not related to its actual main purpose. This is very much aligned with the proposed split between a compiler, transforming to a binary, and a dynamic environment to execute TML programs; it also simplifies the tables class so we can keep analysing its design to optimise performance. In this regard we’ve been analysing body orderings and how we assemble the BDDs for the rules and the first-order formulas, and I look forward to minimising the number of permutations to improve performance. My focus is very much on the deep internals of TML which bind rules and formulas to BDDs, where both variable maps and variable orderings come into play, testing different approaches to improve performance. I’m also looking forward to wrapping up the arithmetic feature with everything as a built-in, aligning with Tomas’ built-ins, and to providing new arithmetic built-ins for logarithm and summation. I will also continue working on wrapping up the second-order features. This will be my focus for next month and possibly several months ahead.
This month I worked on our ongoing quest to find a language that has self-reference, i.e. that is able to reason about the consistency of sentences in the same language, in a self-referential way. I’ve made significant progress: I’ve discovered some new ideas from Boolean algebra and started implementing them. Soon we will have a solver for this logic, and we will see where it takes us.
I’ve been working on some pitch deck refinements; we are at the stage of handing the pitch deck to the designers to give it a revamp. We have pitched our project to the Fraunhofer Institute, who were quite interested, and we will see where this leads in future. I’ve also been working on getting Asian marketing partners and a Tier 1 exchange listing, and will continue to push on this front. We’ve started collecting testimonials from both team and community members. If you’d like to leave a testimonial about what the project means to you, we’d love to hear from you. Submit your testimonial here: https://docs.google.com/forms/d/e/1FAIpQLSdTkadBwRB0gjeAxUT1afD2BMB9syhaNWNUeCqJ_bczAoYMYQ/viewform
It helps drive more attention to our project. I’ve been following up with some YouTubers we’ve been having discussions with. I’ve created a list of crypto-focused TikTokers and will see if we can establish some relationships there. I’ve also created a list of crypto-related Discord and Telegram groups, with the intention of doing the same once we have the resources. Community member of the month is Andrew, whose testimonial is provided below:
The only project I know in which the team shows themselves whilst reporting the status of the project in a monthly video. There are GitHub updates each month. The team keeps expanding, hiring more people every month to help the project succeed. Tau, the David amidst the many Goliaths: this small project with big goals attempts to solve the bottleneck for human progress. It attempts to solve the problem with discussions, as discussions do not scale. Have you heard about big companies such as Google, Facebook or Amazon attempting to solve the same problem? Only this one attempts to solve it publicly. As for Agoras, I would love to see how we can use our smartphones as computational resources and receive rewards while sleeping.
I’ve been working with most of the team on describing use cases for Tau to create a branding brief for our designers. We’ve received some really interesting designs back from them, with feedback from a couple of different teams. We have lots of ideas internally, and we want to go back to the designers with something that’s structured and agreed on; putting a brand brief together accomplishes this. I’ve been working alongside Kilian on trying to get Agoras listed, and have tried a couple of different methods. Firstly, using listing partners: I found out quickly that a lot of the LinkedIn listing partners are fake. Secondly, I applied to a few exchanges and am awaiting feedback on our applications. I continue to look for further contacts. I’ve been working with Ethan, our community manager in Asia, who has given us some great advice on publications we should be featured in. I’ve been working on internal contracts and agreements for the company, and on finding a new UX/product designer who can create a visual for Tau to help explain what Tau could be like. Alongside that, I have created a video explaining Tau in a visual way. I hope to get feedback on that and create a public version to showcase what features are possible: a visual view of Tau. The marketing agency has also asked for similar things, so it will help us move forward with them as well. On the marketing front, we’ve shifted our message from logical/hybrid AI to understanding each other. This is the direction we wish to go in: we talk about discussion, and about having the first one-million-person discussion. It is based on hybrid AI, but we want to focus on the human-to-human communication aspect.
This week we have finalised the Tau branding brief, and it will include five sample use cases: a great starting point for understanding the endless possibilities of the Tau platform. I’m very excited to use this brief to brief the designers in a couple of days; you will be seeing the vision for Tau coming to life pretty soon. The next step for me is going to be articles on Tau and some of its most prominent features.
Q: How do you plan to encourage people to publish on Agoras Live? Will there be a “free first course offered” or something like this? Will Tau members create courses too, for example on the subject of Tau?
Andrei: Agoras Live is not initially a platform for publishing content; it’s a platform for sharing knowledge, so you can host groups and one-on-one sessions. The platform also offers cryptocurrency micropayments without a third-party intermediary. In future it will have integration with TML, which will significantly improve the platform’s capabilities. We are also aiming to add verification to the platform. For example, a doctor can share their diplomas, which will be shared only with a verification centre, to be verified as a real doctor. With that verified status, the doctor would have access to millions of people around the world. As for courses on Tau, Ohad may find time to do so. However, a software developer can create courses on software development and make their first sessions free as a promotional introduction; the platform does not limit you in this regard.
Q: How will the development of quantum computing challenge Tauchain? Does Ohad have some sort of future quantum implementation in mind?
Ohad: If you refer to cryptography, then there is existing work on quantum-resistant cryptography. Tau can change itself, so we will be able to adapt; adopting quantum-resistant cryptography from the start may not be a bad idea. We will get there and see. I do not know the future of quantum computers, and personally I’m not a great believer in them, but we need to be prepared.
Q: Can you tell us more about Sergiu Rudeanu and his significance to TML?
Ohad: He wrote almost the only serious books on Boolean algebra. There are many people who think they know Boolean algebra; I also thought I knew, until I went into those books and saw that this is an amazing, fascinating world that is somehow a forgotten wisdom. It was always the foundation of mathematical logic, but true Boolean algebra can give many more logical tools that are no longer common. I’ve spoken about it a lot in the past, when I spoke about second-order logic, because it was my only attempt, with some success, to deal with second-order formulas. Now I am revisiting it in order to have a language that supports so-called self-referential consistency: one that can speak about the consistency of other statements in the language, including when it refers to itself. This is a very serious logical problem, but I found a solution for it in the Boolean algebra world, and that’s why I’m revisiting it.
Q: How are the discussions with academics going in terms of handling formulas instead of a finite set of facts?
Prof. Benzmüller: We are continuing to debate how we actually deal with the law of changing the law. Ohad has a good proposal in this direction. As an alternative, I worked on a proper logic-based solution that follows a layered approach and showed how it could eventually be implemented. It would give another way of dealing with agreement: at the level of the law, at the level above it for changing the law, and eventually at the level on top of that. It wouldn’t give us everything at the same time in one big bag. There is certain progress on having a vision of the possible roads to take; the optimal road is being debated, and there too I think a promising suggestion has been made by Ohad.