xcancel link: https://xcancel.com/jxmnop/status/1953899426075816164
this thing is clearly trained via RL to think and solve tasks for specific reasoning benchmarks. nothing else. and it truly is a tortured model. here the model hallucinates a programming problem about dominoes and attempts to solve it, spending over 30,000 tokens in the process. completely unprompted, the model generated and tried to solve this domino problem over 5,000 separate times
they seem to have trained on nearly everything you’ve ever heard of. especially a lot of Perl
This is profoundly hilarious to me for some reason. AppleScript, of all things, also seems suspiciously high on that graph, as does Pascal, running neck and neck with Swift.
Python seems surprisingly low too
What’s this “RL” thing?
Reinforcement learning. Roughly: the model samples outputs, each output gets a reward score, and the model's parameters are nudged to make high-reward outputs more likely.
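If it helps, here's a minimal sketch of the core policy-gradient idea (REINFORCE) on a toy two-armed bandit. Everything here is illustrative, the bandit setup, reward probabilities, and learning rate are made up for the example and have nothing to do with how this particular model was actually trained:

```python
# REINFORCE on a toy two-armed bandit: sample an action from the policy,
# observe a reward, push the policy toward actions that paid off.
import math
import random

random.seed(0)

logits = [0.0, 0.0]          # policy parameters: one logit per arm
TRUE_REWARD = [0.2, 0.8]     # illustrative payoff probability of each arm
LR = 0.1                     # illustrative learning rate

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

for step in range(2000):
    probs = softmax(logits)
    # Sample an action from the current policy.
    action = 0 if random.random() < probs[0] else 1
    # Stochastic reward from the environment.
    reward = 1.0 if random.random() < TRUE_REWARD[action] else 0.0
    # REINFORCE update: grad of log pi(action) w.r.t. logit i
    # is (1 if i == action else 0) - probs[i]; scale it by the reward.
    for i in range(2):
        grad_logpi = (1.0 if i == action else 0.0) - probs[i]
        logits[i] += LR * reward * grad_logpi

print(softmax(logits))  # the policy should now strongly prefer arm 1
```

Training an LLM on reasoning benchmarks works on the same principle, just with "actions" being generated token sequences and the reward coming from checking answers, which is also why a model trained that way can end up obsessively replaying benchmark-shaped problems like the domino one above.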