@we_arent_here @Grady_Booch I lack the energy to go into detail, but there are architectural limitations that push shallow memorization in LLMs. The learning paradigm of SGD also factors into this. Humans leverage derivable memorized abstractions with flexibility…
@_rdgao Could this be what you’re looking for? https://t.co/6FFiifPzhe I liked this article a lot but haven’t read it in a good few years!
@dlevenstein @cimoore444 asked me that question once. The best I could answer was also in terms of a separation in time scale between that which can rapidly influence broad networks and the slower modulation of that coordination. Somehow reentry was also…
Here's a whole damn review on reentry because I liked the term so much. http://t.co/vN3rXbyJiF