Patrick Biltgen’s Post


Author | Engineer | Data Scientist | Strategist | Working at the Intersection of Space and AI

Interesting article in the New York Times describing experiments by Anthropic to understand what's really going on inside neural-network-based Large Language Models (LLMs). Researchers identified clusters of millions of neurons related to "concepts." Although neural networks are "inspired by the human brain" and are not actual brains, the ability to excise these clusters to remove a behavior, or to stimulate them to emphasize a concept, does sound a lot like brain surgery. The ability to identify such clusters also sounds a bit like functional MRI. It will be interesting to see whether research into a computer model that generates language teaches us something about our own biology. Because of course we are totally not just computer code running on someone's computer. https://lnkd.in/eEM_fEzX
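The "stimulate a concept cluster" idea the article describes can be sketched in a few lines. This is a minimal, hypothetical illustration (not Anthropic's actual code): assume a concept corresponds to a direction in a model's activation space, so adding a scaled copy of that direction to a hidden state amplifies how strongly the concept is expressed. All names and shapes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A model's hidden activation vector (toy size; real models use thousands
# of dimensions). Purely illustrative data.
hidden = rng.normal(size=8)

# A unit-length "concept" direction, standing in for a feature that
# interpretability research might identify inside the network.
concept = rng.normal(size=8)
concept /= np.linalg.norm(concept)

def steer(h, direction, strength):
    """Shift the activation along the concept direction by `strength`.
    Positive strength emphasizes the concept; negative suppresses it."""
    return h + strength * direction

def concept_activation(h, direction):
    """How strongly the concept is expressed: projection onto the feature."""
    return float(h @ direction)

before = concept_activation(hidden, concept)
after = concept_activation(steer(hidden, concept, 5.0), concept)
assert after > before  # steering increased the concept's expression
```

Because the direction is unit-length, steering by strength `s` raises the projection by exactly `s`; suppressing a behavior is the same operation with a negative strength.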

A.I.’s Black Boxes Just Got a Little Less Mysterious

https://www.nytimes.com

Aaron Elliott

Thought leader in AI/ML, DevSecOps, and Cloud Solutions

2mo

We just cancelled our recurring NYTimes subscriptions because they kept popping back up as we shut them down, like a game of whack-a-mole. This article sounds like what DARPA was investigating about three years ago: building models that could learn from the flattened neural data of other models, as if to detect successful ones before having to train them fully. But the power behind a perpendicular neural network is that it builds an understanding of itself; this could give rise to a consciousness (like a NN functioning as LSTM memory cells, retaining information across propagations). These are the next phases of AI, whether in compression, optimization, or toward AGI.
