Can LLMs model real-world systems in TLA+? (sigops.org)
96 points by mad 20 hours ago | 25 comments

Comments

TLAiBench[0]: A dataset and benchmark suite for evaluating Large Language Models (LLMs) on TLA+ formal specification tasks, featuring logic puzzles and real-world scenarios.

[0]: https://github.com/tlaplus/TLAiBench
I feel LLMs are indeed getting better at writing models. But, in my experience, they struggle to come up with correct safety and liveness properties unless you work closely with them. And of the two, they struggle the most with liveness properties.
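To make the distinction concrete, here is a toy spec (illustrative only, not from any real model) with one property of each kind:

    ---- MODULE Counter ----
    EXTENDS Naturals
    VARIABLE x

    Init == x = 0
    Next == x' = x + 1
    Spec == Init /\ [][Next]_x /\ WF_x(Next)

    \* Safety: nothing bad ever happens; TLC checks this as an invariant.
    TypeOK == x \in Nat

    \* Liveness: something good eventually happens. It only holds because
    \* of the fairness assumption WF_x(Next) above, which is easy to get wrong.
    Progress == <>(x >= 10)
    ====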
Also, for some problems I observe that models produced by LLMs often cause state-space explosion. For simpler models they can fix this when you guide them, though.
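The usual quick fix is a TLC state constraint that bounds the exploration, e.g. for the counter above:

    \* TLC stops exploring successors of any state where this is FALSE,
    \* which bounds the otherwise-infinite state space.
    StateConstraint == x <= 10

and in the TLC configuration:

    CONSTRAINT StateConstraint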
I’m sure LLMs will get even better.
That said, I take a slightly different approach. Lamport said, "If you're thinking without writing, you only think you're thinking." Taking that advice, I always try to write the first draft by hand, and once I have the final shape in place I turn to an LLM for further exploration and experimentation if I have to.
Claude has certainly been getting better with TLA+. It's not perfect yet, but for laughs I got it to model the rules of Monopoly last night [1]. I haven't done any exhaustive checking on it yet, but it certainly looks passable.

It is pretty impressive how good it's gotten at this, in a relatively short amount of time no less. I still usually write my specs by hand, but who knows how much longer I'll be doing that.

[1]: https://pdfhost.io/v/KU2j37YKrP_Monopoly
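For a flavor of what such a spec involves, a dice-roll action might look like this (an illustrative fragment with made-up variable names, not taken from the linked file):

    \* Hypothetical fragment: one player's dice roll as a nondeterministic
    \* action over two dice on a 40-square board. pos, money, and owner
    \* are illustrative variables, not the linked spec's.
    Roll(p) ==
        \E d1, d2 \in 1..6 :
            /\ pos' = [pos EXCEPT ![p] = (pos[p] + d1 + d2) % 40]
            /\ UNCHANGED <<money, owner>>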
I don't use TLA+ to model real-world systems anymore. Claude is able to model systems in Lean 4, where the resulting binary executable can handle real input, or I can directly generate C/Rust from proofs with numeric types that have ring structure (integers, rationals, bits).

https://github.com/lambdaclass/truth_research_zk
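As a minimal sketch of that workflow (toy names, core Lean 4 only, nothing from the linked repo): prove a small fact about a function over Int, then compile the same function into an ordinary executable.

    -- Toy example: a machine-checked property of `double`, plus a `main`
    -- that ships in the compiled binary (`lean --run Double.lean`).
    def double (n : Int) : Int := 2 * n

    -- Linear integer arithmetic, so `omega` closes it without Mathlib.
    theorem double_add (a b : Int) : double (a + b) = double a + double b := by
      unfold double
      omega

    def main : IO Unit :=
      IO.println s!"double 21 = {double 21}"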
This post reads like an accidental advertisement for approaches like Verus [1], which couple the implementation and verification so you can't end up with a model that diverges from the actual implementation. I'm personally much more optimistic about the Verus approach, but I freely admit that's my builder bias speaking.

[1]: https://github.com/verus-lang/verus
Just a question for people who may know more about this than I do.

I thought the whole point of writing TLA+ was that you get a better idea of what you want by putting it into formal language?

I get that an LLM can assist with expressing what we want in formal language, but if one automates all of this, there is no human intent or design anymore.
If the LLM generates both the design (TLA+) and writes an arbitrary program that satisfies said design -- what exactly have we proved?
What assurance do humans get if the human doesn't know, or can't specify, what they want?
Sorry, this must be a very naive question, but what if you give an LLM just the source code (maybe even with names like Raft and etcd obfuscated) and ask it to create a TLA+ spec of it?