Murisi has been working on his optimisations for TML. One of them is called BDD caching, and he has been experimenting with the cache hit ratio. So far the results are not as good as we hoped; there are things to learn there, and you don't always get what you want from optimisations, but we are working on it, and hopefully the BDD caching algorithm will one day become a core part of our technology. We are confident it will be a very good optimisation.

Tomas has continued work on the TML IDE [https://tml.idni.org]. Most of his time has been spent on TML built-ins bugs and optimisations.

Juan has been working on the bridge between the Binance and Ethereum blockchains. That bridge is necessary because we'd like to give our users an option not to pay the very high gas fees on the Ethereum blockchain. We'd like to set up a parallel chain on the Binance system, allowing users to pay much lower transaction fees whilst transferring Agoras tokens. He has also continued work on the Second Order Logic feature, which he originally developed at the beginning. He and Luca are working in parallel on a new algorithm that Ohad has given them, each with his own implementation. Juan has also gone back to the arithmetic feature he implemented before, re-implementing it with some new optimisations.

On the Agoras Live side of things there has been some very good progress. Andrei has managed to integrate a cryptocurrency wallet into the Agoras Live application, so it is able to process payments, buy and sell Agoras, and also act as a fiat on- and off-ramp. You'll eventually be able to use your credit card to buy Agoras and other tokens using fiat, so that's a major development. Mo'az has been working on a complete redesign of Agoras Live, what we are internally calling Agoras Live 2.0, and to that end we are onboarding two new developers: one web client developer to join Andrei on Agoras Live development, and a blockchain developer, mainly for our mainnet but also for some cool new features of TML that we'll tell you about next month.
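To give a feel for the kind of measurement involved in the BDD caching experiments, here is a minimal, hypothetical sketch in Python (the real code is TML's C++ BDD engine, and this is not Murisi's actual implementation): a memo table for a BDD-style AND operation, instrumented to report its cache hit ratio.

```python
# Hypothetical sketch: a memo table for a BDD-style AND operation,
# instrumented to measure the cache hit ratio being experimented with.
# Relations are stood in for by frozensets of satisfying assignments;
# the real code operates on BDD nodes.

class ApplyCache:
    def __init__(self):
        self.table = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.table:
            self.hits += 1
            return self.table[key]
        self.misses += 1
        return None

    def put(self, key, value):
        self.table[key] = value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

def bdd_and(cache, a, b):
    key = (a, b)
    cached = cache.get(key)
    if cached is not None:
        return cached
    result = a & b  # stand-in for the real BDD apply step
    cache.put(key, result)
    return result

cache = ApplyCache()
x, y = frozenset({0, 1}), frozenset({1, 2})
for _ in range(3):
    bdd_and(cache, x, y)   # first call misses, the next two hit
print(cache.hit_ratio())
```

The hit ratio here depends entirely on how often the same pair of operands recurs, which is exactly the quantity the caching experiments are probing.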
Here is my CV. [ http://christoph-benzmueller.de ] I've been in the field of AI since the beginning of the 90s. I studied in Saarbrücken, which is one of the major places for computer science and artificial intelligence in Germany. I got interested in the big questions during a course on AI, when challenged with the vision that a new branch is arising in evolution, built on different biological foundations than we are used to, meaning computers, which eventually will even be able to reproduce themselves and so on. I said I don't believe that, but I became very interested in studying exactly the boundaries of what can be achieved on computers and what makes us humans very special, or not. I dived into the area of automated theorem proving in a mathematical context because I thought that is one of the challenge areas humans are particularly good at, and if I could make progress on how far one could get with computer technology, that would be one way to approach these big questions: whether there are fundamental differences between humans and machines. This research started already as part of my PhD thesis on automating Higher Order Logic, mainly for applications in mathematics at the time. This was the reason I developed these theorem provers, starting with LEO I, later LEO II in Cambridge, and now, more by my students, in particular Alexander Steen, the LEO III prover. So the initial motivation was interactive and automated proof in mathematics. Then I found out that mathematicians are not really interested in applying such a technology, for many good reasons; they will eventually be the last class of people out there who open-mindedly switch to this new form of interaction. You see now many people doing interesting work in the area, like at Carnegie Mellon University, Kevin Buzzard in London, and Thomas Hales' impressive work.
I hope to focus more on this challenge of expressive non-classical reasoning, and that is why I was also very much inspired by the Tauchain project, in particular the Tau Meta Language, and the applications in areas like legal and ethical reasoning and the direction of contract languages in general, smart contracts. I think we are very much aligned in mutual interests. From this Higher Order perspective we have developed an environment in which we can embed many interesting non-classical logics, as you need for many of these applications. Now the question from the other side is: what is actually needed if we want to keep them decidable? How much, for instance, of the ideas we have studied can be converted into a context in which decidability is a main criterion? For me that hasn't been so central in my work so far; I was just interested in whether I can do something with these techniques in practice, on practical examples. This is now an interesting challenge question to look at: whether some of the machinery we have built up can be fruitfully employed in exactly the context of the projects here.
In the beginning of the month I familiarised myself with TML and wrote a little program that decides whether a given relation carries a group structure or not. From there I started exploring algorithms, where the idea that emerged from Ohad is that the Second Order Quantifier can actually be seen as ranging over an interval of relations, and the instances of the relation that is quantified over in a formula can then be seen as constrained to this interval. I was trying to get a good grip on this idea, which included studying quite a bit of algebra, in order to see whether we can use this perspective to come up with an algorithm that efficiently solves Second Order formulas. One of the steps I took was to spell out the basic BDD operations, like AND or negation, on intervals of relations instead of just on one Boolean function. The next step, which is where we are at the moment, is the interaction between the First Order and Second Order quantifiers. All of this was accompanied by discussions with Prof. Benzmüller and Ohad. This is roughly where we are now; we haven't solved it, but we have got a good deal done, I would say. I also started looking into how we can use SMT technology to solve the query containment problem, especially with an emphasis on Conjunctive Query Containment with Negation. Because the known algorithms are quite hard and slow, we want to see if theorem proving in that sense can help us here, so I am currently investigating.
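The group-structure decider mentioned above was written in TML; as a hedged illustration only, here is roughly what such a decision procedure does, sketched in Python over a ternary relation of triples (a, b, a∘b) rather than in TML itself:

```python
# Hedged sketch (the original was a TML program): decide whether a finite
# ternary relation {(a, b, c) : a∘b = c} carries a group structure.

def is_group(triples):
    op = {(a, b): c for a, b, c in triples}
    elems = {x for t in triples for x in t}
    # single-valued and total: exactly one product for every pair
    if any(op[a, b] != c for a, b, c in triples):
        return False
    if set(op) != {(a, b) for a in elems for b in elems}:
        return False
    # associativity: (a∘b)∘c = a∘(b∘c)
    if any(op[op[a, b], c] != op[a, op[b, c]]
           for a in elems for b in elems for c in elems):
        return False
    # identity element
    ids = [e for e in elems
           if all(op[e, x] == x and op[x, e] == x for x in elems)]
    if not ids:
        return False
    e = ids[0]
    # inverses
    return all(any(op[x, y] == e and op[y, x] == e for y in elems)
               for x in elems)

z2 = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}   # addition mod 2
print(is_group(z2))  # True
```

In TML the same check is expressed declaratively over the relation; the Python version just makes the four group axioms being tested explicit.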
This month I was working on built-ins and the TML IDE. Let me show you a glimpse of what you can expect soon. [ https://youtu.be/M5dup0NoOu4?t=772 ]
This month I did a mix of things. The first was improving the performance of the transitive closure algorithm. Basically this involved identifying certain data which, when cached, we thought would improve the performance of the program. So I spent the beginning of this month trying to find better subsets of data to memoise; I identified some effective subsets that reduced the number of cache misses by 20%. After identifying appropriate sets for caching, I spent some time looking into algorithms to update the cache faster, to achieve faster performance overall for TML programs. Another thing I worked on was improving the hashing performance for a different cache, which stores vectors of BDDs. One of the performance bottlenecks was that hashing these vectors was slow, so I worked on algorithms to memoise the hashes of these vectors so that not so much time is spent recomputing them. I also disabled support for unsafe negation. When measuring the performance of TML programs, Ohad identified that unsafe negation was causing some performance slowdowns; disabling it gives quite some speedup for certain programs, especially ones including Eval. These investigations led us to research subsets of Datalog with First Order Logic: fragments of Datalog that are safe. It is a rather hard problem to identify whether a certain Datalog program is safe or not; "safe" means that the queries are domain independent and the results of the queries are finite. Apart from general bug fixes, I've started work on a variable shifting optimisation, which basically involves looking at various BDDs and seeing whether they are similar to each other. I was working on algorithms for that optimisation and hope to see whether it brings greater performance improvements next month.
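The hash-memoisation idea can be illustrated with a hypothetical Python sketch (the actual code is TML's C++): a vector of BDD node ids computes its hash once and reuses it on every subsequent cache lookup.

```python
# Hypothetical sketch: memoise the hash of a vector of BDD node ids so
# repeated cache lookups don't recompute it.

class BddVector:
    __slots__ = ("nodes", "_hash")

    def __init__(self, nodes):
        self.nodes = tuple(nodes)   # ids of the BDD roots in the vector
        self._hash = None

    def __hash__(self):
        if self._hash is None:      # computed at most once, then reused
            self._hash = hash(self.nodes)
        return self._hash

    def __eq__(self, other):
        return isinstance(other, BddVector) and self.nodes == other.nodes

# the vector is used as a key into a cache of computed results
memo = {BddVector([3, 7, 42]): "cached result"}
print(memo[BddVector([3, 7, 42])])
```

Every dictionary probe calls `__hash__`, so for long vectors that are looked up repeatedly, storing the hash alongside the data trades a word of memory for avoiding the recomputation — the same trade-off described above.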
My background is in physics. I did some space science work and encountered blockchain when I was working on the Interstellar Boundary Explorer project. A few years later, thanks to Kilian, I met Ohad in an IRC chat, and over the last couple of weeks I've been getting up to speed with the IDNI projects; there is lots of interesting research going on here. I've been catching up on some of the logic I'll need to understand TML and how we can put this on a blockchain. I've been looking a little at some consensus methods, at Proof of Stake, because of the types of voting we'll need to do, so there will be some sort of staking going on; I have some ideas to put forward. Some of the parts that excite me about this project are the ways the code can manage itself, so to speak. There is a versioning system inherent in the system itself, which is a compelling idea and will grow in interesting ways. A little more about my background: I've worked on a few different blockchain projects. I developed the Woodcoin chain and worked briefly with a stablecoin project called Vault. I've worked with Vermont Secure Computing on a couple of projects, including developing a hardware wallet. My background is in C++ and Java. It'll take me some time to get up to speed on the TML project, but I'm excited to be here and work with you guys to move this thing forward.
Last month I was testing the Agoras Live platform, fixing minor issues like the calendar, which is now done. Before handing over to professional testers, we lacked one very significant part: the payment system. While we have micropayments and our mainnet in the works, I've implemented a payment system based on a web wallet. Let me demonstrate how it works:
Next month I plan to move the payment system to production mode and hand over my work to a professional tester.
So this month I've focused on three different parts of the project. The first one is on the blockchain side. As Andrei mentioned, we are moving forward with integration with the Binance chain. We also have a micropayment channel being tested, which is pretty much the same payment channel we have been working on for the Ethereum blockchain. What I mainly reviewed was the cross-chain mechanism for making Ethereum and the Binance chain interoperable. We continue to review the options, as we may provide our own implementation of an interconnectivity mechanism. On TML I'm also pushing forward with the Second Order Logic feature, walking a parallel path with Luca. My understanding is that everything is really about constraining the power set, the domain of the second order variables. From the matrix of the formula we can infer constraints on what must be part of a solution for a variable and what must not be part of that solution; such dual or paired constraints may need to be negated. I'm exploring the algebra concepts in order to model and deal with these operations. We are already one step forward: we don't yet have a systematic approach to constraining variables, but my understanding is that we have some additional tasks in order to handle the variable assignments, the constraints, and the second order variables properly. This does not yet address the quantification step; it may need one scenario to handle existential quantification, but also the interleaving of universal and existential quantification over Second Order formulas. I'm also returning to arithmetic to do the many possible optimisations, improving it according to our standards for release.
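One way to picture the interval idea that Luca and Juan both describe (a hedged sketch with explicit sets; the real work lifts these operations to BDDs): a second order variable ranges over the relations R with lower ⊆ R ⊆ upper, and conjunction and negation can be lifted from relations to such intervals.

```python
# Hedged sketch: a second order variable constrained to an interval of
# relations, lower ⊆ R ⊆ upper, with AND and negation lifted to intervals.

def interval_and(i1, i2):
    # if R1 lies in i1 and R2 lies in i2, then R1 ∩ R2 lies in the result
    (l1, u1), (l2, u2) = i1, i2
    return (l1 & l2, u1 & u2)

def interval_not(i, universe):
    # complementing a relation flips and swaps the bounds:
    # lower ⊆ R ⊆ upper  implies  ¬upper ⊆ ¬R ⊆ ¬lower
    lower, upper = i
    return (universe - upper, universe - lower)

def contains(i, r):
    lower, upper = i
    return lower <= r <= upper  # subset tests on frozensets

universe = frozenset((a, b) for a in range(2) for b in range(2))
i = (frozenset({(0, 0)}), frozenset({(0, 0), (1, 1)}))
r = frozenset({(0, 0), (1, 1)})
print(contains(i, r))                                      # True
print(contains(interval_not(i, universe), universe - r))   # True
```

The point of the perspective is that the solver never enumerates the (exponentially many) relations in an interval; it only manipulates the two bounding relations, which is why spelling out AND and negation on intervals is a natural first step.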
This month I’ve been working with Fola in scouting a suitable copywriter for our project who can give a voice to the project. We’ve found a strong candidate but have further interviews before making a final decision to onboard. I’ve been growing the list of Youtubers, did some outreach and follow ups in that regard but the response wasn’t great. When they have responded they have been highly overpriced. There is one candidate that stood out but he is currently very busy so we are discussing a suitable appointment that suits to introduce the project and ourselves. I’ve also been working on the presentation of the project. It’s getting closer to a final version. In this interaction we’ve narrowed down the vision of what we are doing. You can find many angles to explaining our project but what we have agreed upon is that basically one can say that “Our vision is to advance humanity by scaling collaboration without compromises though logic based artificial intelligence.” In terms of the coin swap we also updated the information on Etherscan, Coingecko, etc. We had the AMA with Bittrex where Ohad and I participated. The AMA was a success with a lot of relevant information going out, However, it was quite bot driven and not as organic as we had hoped for. However, I transcribed the AMA into our medium so if you like to look at that there is a lot of valuable info on the project [ https://tauchain.medium.com ]. We’ve launched the Tau supporter program with over 20 applicants so far.
Tau Supporter Program Info:
Related to that, I've been doing community support. Miau has created a German Telegram group, and Fola and I helped him with that. He's engaged with the community and has translated the Bittrex AMA into German. In light of his efforts, we'd like to make him community member of the month for his community engagement. Thank you!
Working alongside Kilian, I have been negotiating with Bittrex regarding how this ongoing swap will function. There are still some details to hash out, and we hope to have that concluded quite soon. For now, the Whitebit exchange is still open for doing the swap from Omni to ERC-20. To store your ERC-20 tokens you can use Metamask or any other ERC-20 compatible wallet if you want; all you have to do is add the contract address found here [ https://www.coingecko.com/en/coins/agoras-token ]
I’ve been working with Andrei and Karim to find a front end developer for Agoras Live. That’s been going well narrowing down to one person who has given us a quote to get it done. There will be some changes to Agoras Live that will be pushed on with a quote soon. We are bringing on 2 designers to work on the rebrand and an additional website designer as well, separately. All high quality and I look forward to working with them. We’ve started working on our marketing as well. We have engaged with a marketing company and are explaining the project, getting them to understand the project in detail. We’ve been really surprised with the company we are working with. They have an impressive background within the space. Ohad made a great comment to them that “you’re not going to be ready to market for us until you’re convinced this project is the greatest project in the world right now”. I’m still working on onboarding Dr Kramer, pushing forward with that and the day to day administrative duties.
I have been working on mostly everything this month but to highlight two things:
In the discussions with Prof. Franconi we are hesitating between two logics: Guarded First Order logic with least fixed point, and Monadic Second Order logic. Once we settle on one, we will need to write solvers for it. About writing solvers I'm completely confident, because one of the best things that ever happened to the project is that Prof. Benzmüller joined, the best man in the world for the mission of writing solvers, so I'm completely confident we will nail it down very well. Another big thing that happened this month, which is really big news, is the realisation of how to give a short certificate, at a cryptographic level, of the correct running of a TML program. To take an example: in Ethereum everyone needs to re-run smart contracts; why should it not be the case that one party can run a contract and provide a certificate that it ran correctly? Well, we are able to do so in TML, and this is because TML is based on BDDs, so the running of TML can very straightforwardly be converted into a QBF formula that says the run is correct. Using the sumcheck protocol and the proof that IP=PSPACE, we get interactive proofs of polynomial size for polynomial-space computations. Without cryptographic assumptions or probabilistic weakening of the problem, polynomial-size proofs capture only NP; but if we allow probabilistic proofs we can go up to PSPACE. This is a really big thing: one can run a TML program and convince everyone else, efficiently and with a short proof, that it ran correctly. This we will do with the help of Lukas. He's another great addition to the team and knows a lot about cryptography and decentralised networks. He will be leading this effort of creating a network of proven computations, again with short, efficient proofs, which are probabilistic but good enough from a cryptographic point of view.
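The certificate idea can be caricatured with a toy Python sketch (not the actual QBF/sumcheck construction): a run of a fixed-point program is a sequence of database states, and each transition is expressible as a Boolean condition "next state = step(current state)", so a verifier can check a claimed trace step by step instead of trusting the prover.

```python
# Toy illustration, not the real QBF construction: a run is a trace of
# database states; each transition is checked against the step function,
# which is what the Boolean formula would assert symbolically.

def step(facts, rules):
    # one round: rule (b1, b2, h) derives h when both body facts hold
    return facts | {h for b1, b2, h in rules if b1 in facts and b2 in facts}

def verify_trace(trace, rules):
    # the per-step equality is what each slice of the formula encodes
    return all(step(trace[i], rules) == trace[i + 1]
               for i in range(len(trace) - 1))

rules = [("a", "a", "b"), ("a", "b", "c")]
good = [{"a"}, {"a", "b"}, {"a", "b", "c"}]
bad = [{"a"}, {"a", "b", "c"}]          # claims "c" one step too early
print(verify_trace(good, rules), verify_trace(bad, rules))  # True False
```

The real construction goes further: because TML states are BDDs, the per-step condition is already a Boolean formula, the whole run becomes one quantified Boolean formula, and the interactive-proof machinery then compresses checking it into a short probabilistic protocol rather than a step-by-step replay.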
Q: Will Tau eventually be able to use natural language processing to turn unstructured data (speech) into structured data, so that we can grow the knowledge base quicker?
Ohad: Currently no one knows how to do it, but due to the design of Tau's Internet of Languages and the overall design of being self-amendable, the day someone does know how to do it, we will be able to incorporate it into Tau.
Q: Is it meaningful to follow the majority opinion? An individual can also be right while the majority is not. Where is the individuality of the individual opinion? (See the outstanding performances of single individuals.) Has something like that been taken into account? Ergo, must one side be right, or is there a dualism? How does the valuation of right and wrong take place?
Ohad: There is no right or wrong on Tau; it's all opinions, and therefore there can be contradicting opinions. As for the case where the majority is wrong: this happens in real life of course, and on Tau we don't have a full solution, but we do have a better situation than in real life. Because the logic engine is able to detect contradictions, sooner or later a wrong opinion will run into a contradiction and can be discarded. In this sense we are in a better situation than real life.
Q: Which will prove more useful first, human intelligence or AI? AI seems to be running ahead. So, which is more valid and insightful?
Prof. Benzmüller: I guess machines will get closer and closer to solving instances of the Turing Test over time. In some way you could say that machines are getting closer to human level intelligence, but on the other hand I'm not personally convinced of that. My personal opinion is that there will be, for a long time, quite a gap between human level intelligence and what machines can actually achieve. Time will tell, but I think there is eventually still some fundamental difference. That doesn't mean that we cannot, in many particular domains, get very close to human level performance or even dramatically exceed it. I think that also includes applications in the area of checking arguments with technology, as we just discussed in previous questions. It's a fascinating area.
Q: There are so many human languages. Why do they differ in achieving understanding? Can Tau answer that? What is the main feature you're looking at for the use case of the Tau Language? What does a meta language mean? Can you humanise your goal, or explain it by comparing the purpose of a computer language to a parable about human languages?
Ohad: I will only comment on the Meta Language. It's only intended to make it possible to define more languages, so many languages will exist and evolve over the system. For the rest of the question, Prof. Benzmüller has interesting things to say.
Prof. Benzmüller: What we’ve also done in our works is to try and get arguments stated in Natural language formalised in the computer. This is, in some sense, quite independent. This process of the language you start from. So whether this is now a German, English or Hebrew text, what we want to get out of such a process is a characterisation of the essence of the argument, in terms of an independent logical language. Then check whether this transformed the essence of the argument and makes sense. Making sense in terms of checking for rationality for freeness with respect to contradictions so no paradoxical argumentation involved here. This is something you can do with the machinery and in this way you get independent of the natural languages you start with and you achieve an interesting independence and feedback in the different languages; in terms of pinpointing problems of an original argument after a rational reconstruction on a computer.
Q: If I ask the Tau language, after a huge collection of data, "is Covid a scam/plandemic", what might its answer be? How could it guide us, with truth, out of the programming of TV?
Ohad: The best Tau can do, and philosophically the best humans can do, is detect contradictions. We don't have direct access to truth, but sometimes we can discover what is not true by detecting contradictions.
Q: What do you think about GPT-3, and could Tauchain benefit from or compete with it? Would it make sense to request GPT-3 access to aid Tauchain development?
Ohad: Machine learning is quantitative in nature, while logic based AI is both qualitative and quantitative. Even tasks done in machine learning that look qualitative, like certain language processing tasks, are being forced into a quantitative model, and that's yet another source of their inaccuracy.
Prof. Benzmüller: My viewpoint on this very interesting question consists of three aspects:
1. The two areas of machine learning and logic based AI might fruitfully interact, and there should eventually be more people who look into that. One way they could interact fruitfully is that logic, as discussed, could actually be a harness for machine learning approaches, say when it comes to really critical applications. Logic can be precise, and this could provide a basis for verification and for making sure things don't go in the wrong direction, even in unlikely cases.
2. On the other hand, the logic area might benefit from machine learning in the sense that, when you integrate it, you can eventually benefit from learned decisions; theorem proving, for example. There's a community starting in this direction, with relatively fresh work that looks into that area, and it's interesting to follow. For instance, strategies for proving things can be learned using machine learning techniques.
3. The interplay between the two from the perspective of strong AI. For me, the clue to strong AI, the big questions that motivated me to get into AI personally, are related to the questions of how we can actually get from trained models (on the statistical level) to sharp theories (on the logical level), and eventually have a nice interplay between the two layers. I think in this interaction lie potential answers to the really big questions out there: how far you can go with AI-based approaches when you integrate the two worlds of machine learning and logic based approaches.
For the moment I think we should pay attention to the logic based side. It is the one that can give us real security, the verification aspects and so on; this area needs to be fostered at the moment. At the same time, we should eventually keep our eyes open for potential inspirations.
Q: Will Agoras Live be implementing any gamification elements to take advantage of tokenomics? What is the strategy to increase stickiness/long-term user engagement, and how can we use the Agoras token?
Ohad: On Agoras Live we don't have any gamification for now. But, in general, one aspect that can be seen as gamification on Tau is debates. People are able to have debates with winners and losers, challengers, and maybe even money. That's one form of stickiness. For another: the system can detect when you contradict yourself, and there is a well known result in psychology, Festinger's cognitive dissonance, which shows that people have a deep psychological need to remain consistent and get into psychological stress when they contradict themselves. So I'm sure you can imagine that whenever people see that what they have said contradicts itself, they may obsessively try to fix everything to make it consistent.