This paper models a biological brain, excluding motivation (e.g., emotions), as a Finite Automaton in Developmental Network (FA-in-DN); such an FA emerges incrementally inside the DN. In artificial intelligence (AI) there are two major schools, symbolic and connectionist. Weng (2011) proposed three major properties of the Developmental Network (DN) that bridge the two schools: 1) from any complex FA that demonstrates human knowledge through its sequence of symbolic inputs and outputs, a Developmental Program (DP) incrementally develops an emergent FA inside the DN through naturally emerging image patterns of the FA's symbolic inputs and outputs, and the DN's learning from the FA is incremental, immediate, and error-free; 2) after learning the FA, if the DN freezes its learning but continues to run, it generalizes optimally for infinitely many inputs and actions, based on the neurons' inner-product distance, state equivalence, and the principle of maximum likelihood; 3) after learning the FA, if the DN continues to learn and run, it "thinks" optimally in the sense of maximum likelihood, conditioned on its limited computational resources and its limited past experience. This paper gives an overview of the FA-in-DN brain theory and presents the three major theorems and their proofs.

References

Weng, J. (2011) Three Theorems: Brain-Like Networks Logically Reason and Optimally Generalize. International Joint Conference on Neural Networks, San Jose, 31 July-5 August 2011, 2983-2990.
Weng, J. (2012) Natural and Artificial Intelligence: Introduction to Computational Brain-Mind. BMI Press, Okemos.
Gluck, M.A., Mercado, E. and Myers, C.E. (2013) Learning and Memory: From Brain to Behavior. 2nd Edition, Worth Publishers, New York.
Chomsky, N.
Kandel, E.R., Schwartz, J.H., Jessell, T.M. and Siegelbaum, S. Principles of Neural Science. McGraw-Hill, New York.
(2012) Brain-Like Emergent Spatial Processing. IEEE Transactions on Autonomous Mental Development, 4, 161-185.
(2013) Brain-Like Temporal Processing: Emergent Open States. IEEE Transactions on Autonomous Mental Development, 5, 89-116.
(2014) Brain-Inspired Concept Networks: Learning Concepts from Cluttered Scenes. IEEE Intelligent Systems, 29, 14-22.
Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2006) Introduction to Automata Theory, Languages, and Computation. 3rd Edition, Addison-Wesley, Boston.
Weng, J., Paslaski, S., Daly, J. and VanDam, C. (2013) Modulation for Emergent Networks: Serotonin and Dopamine.
Wang, Y., Wu, X. and Weng, J. (2011) Synapse Maintenance in the Where-What Network. International Joint Conference on Neural Networks, San Jose, 31 July-5 August 2011, 2823-2829.
Krichmar, J.L. (2008) The Neuromodulatory System: A Framework for Survival and Adaptive Behavior in a Challenging World. Adaptive Behavior, 16, 385-399.
Weng, J. (2012) Symbolic Models and Emergent Models: A Review. IEEE Transactions on Autonomous Mental Development, 4, 29-53.
Russell, S. and Norvig, P. (2010) Artificial Intelligence: A Modern Approach. 3rd Edition, Prentice-Hall, Upper Saddle River.
Weng, J. (2011) Why Have We Passed "Neural Networks Do Not Abstract Well"? Natural Intelligence: The INNS Magazine, 1, 13-22.
Minsky, M. (1991) Logical versus Analogical or Symbolic versus Connectionist or Neat versus Scruffy. AI Magazine, 12, 34-51.
Gomes, L. (2014) Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts. IEEE Spectrum, Online Article Posted 20 October 2014.
Olshausen, B.A. and Field, D.J. (1996) Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature, 381, 607-609.
Hinton, G.E., Osindero, S. and Teh, Y.-W. (2006) A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 18, 1527-1554.
Desimone, R. and Duncan, J. (1995) Neural Mechanisms of Selective Visual Attention. Annual Review of Neuroscience, 18, 193-222.
Frasconi, P., Gori, M. and Maggini, M. (2013) Establish the Three Theorems: DP Optimally Self-Programs Logics Directly from Physics. Proceedings of International Conference on Brain-Mind, East Lansing, 27-28 July 2013, 1-9.
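The core mechanism the abstract describes, a network that memorizes an FA's transitions one at a time and then generalizes to unseen inputs by inner-product match, can be sketched in a few lines. This is a toy illustration only, not Weng's DN implementation: the three-transition automaton, the `ToyDN` class, and all names below are invented here for demonstration.

```python
import numpy as np

# Toy finite automaton: the "teacher" whose input-output sequence the
# learner observes. One-hot vectors stand in for the image patterns
# of symbolic inputs and outputs described in the abstract.
STATES = ["start", "noun", "verb"]
SYMBOLS = ["dog", "runs", "."]
# The FA's transition function: (state, symbol) -> next state.
DELTA = {
    ("start", "dog"): "noun",
    ("noun", "runs"): "verb",
    ("verb", "."): "start",
}

def one_hot(item, vocab):
    v = np.zeros(len(vocab))
    v[vocab.index(item)] = 1.0
    return v

class ToyDN:
    """Hypothetical sketch: each hidden neuron memorizes one
    (state, symbol) pattern in a single update (incremental, immediate,
    and error-free for observed transitions); recall picks the neuron
    whose weight vector has the highest inner product with the input."""
    def __init__(self):
        self.weights = []   # memorized (state + symbol) patterns
        self.actions = []   # next-state label per neuron

    def learn(self, state, symbol, next_state):
        x = np.concatenate([one_hot(state, STATES), one_hot(symbol, SYMBOLS)])
        self.weights.append(x / np.linalg.norm(x))
        self.actions.append(next_state)

    def predict(self, x):
        x = x / np.linalg.norm(x)
        scores = [w @ x for w in self.weights]
        return self.actions[int(np.argmax(scores))]

dn = ToyDN()
for (s, a), s_next in DELTA.items():
    dn.learn(s, a, s_next)  # one-shot learning of each FA transition

# Error-free recall of a learned transition:
x = np.concatenate([one_hot("start", STATES), one_hot("dog", SYMBOLS)])
print(dn.predict(x))  # -> noun

# Generalization: a noisy input never produced by the FA teacher is
# mapped to the nearest learned pattern by inner-product match.
noisy = x + 0.2 * np.abs(np.random.default_rng(0).normal(size=x.size))
print(dn.predict(noisy))
```

Because each observed transition is stored verbatim, recall of learned transitions is exact, while the inner-product match supplies nearest-pattern generalization for inputs outside the teacher's symbolic alphabet.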