CogSci 2007: Day Two

This is my report of the second day of the 2007 Annual Meeting of the Cognitive Science Society in Nashville, TN.

Day 2: Friday, August 3rd

Plenary Talk: 9:00am-10:00am
John Laird
"Is Cognitive Science the Right Method for AI?"

Laird started off the talk by referencing a paper by Christopher Green (2000) that asked "Is AI the right method for Cognitive Science?", and said he would attempt to turn the question around. He talked about how the goal of AI is functionality; understanding the human mind is only a tangential concern, pursued to the extent that it helps engineers build systems that accomplish specific goals. His own focus, he said, is to achieve human-level AI, and in order to do that, a cognitive architecture is fundamental. The bulk of his talk was about the cognitive architecture he developed (SOAR) and specific ways in which it has recently been extended, including adding:

• Reinforcement learning
• Long-term semantic memory
• Long-term episodic memory
• Emotion
• Non-symbolic memory (e.g. images)
• Working memory
• Clustering

He focused on three of these aspects: episodic memory, emotions, and visual imagery.

I wasn't quite sure how SOAR actually stores episodic memories. Either he explained it and I missed it, or he didn't explain it. He spoke briefly about Tulving and the role the hippocampus plays in storing episodic memories, before introducing Tank SOAR, a 2D video game domain with tanks that try to destroy each other. He talked about how a given tank may pass a particular landmark that may be important later, and how it can iteratively work back through its episodic memory to remember where that landmark is. I believe that in the Q&A session he admitted that the way they encode episodic memories would not scale very well, so I don't know if they're actually saving relatively uncompressed "movies" of what the tank is seeing; that would explain the scaling problem. I need to read more on how they implement this, but it got me thinking about what exactly an "episode" is and how one might be efficiently encoded in memory.
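To make my own intuition concrete, here is a minimal sketch of a cue-based episodic store, under the assumption (mine, not necessarily SOAR's) that episodes are raw timestamped snapshots of the agent's perceptual state; the class and method names are hypothetical:

```python
class EpisodicMemory:
    """Hypothetical cue-based episodic store (my sketch, not SOAR's)."""

    def __init__(self):
        self.episodes = []  # ordered list of (time, snapshot) pairs

    def record(self, time, snapshot):
        # Store a full, uncompressed snapshot of the current state.
        self.episodes.append((time, snapshot))

    def retrieve(self, cue):
        # Walk backward from the most recent episode and return the
        # first snapshot whose features match the cue.
        for time, snapshot in reversed(self.episodes):
            if all(snapshot.get(k) == v for k, v in cue.items()):
                return time, snapshot
        return None

# A tank recalls where it last passed a landmark it now needs.
mem = EpisodicMemory()
mem.record(1, {"location": (3, 4), "landmark": "recharger"})
mem.record(2, {"location": (6, 9), "landmark": None})
print(mem.retrieve({"landmark": "recharger"}))  # -> (1, {...})
```

Storing full snapshots this way is exactly the kind of design that wouldn't scale, which fits what I heard in the Q&A.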

The second extension of SOAR dealt with an "emotion module", and it was basically the work I mentioned from Day 1, the modeling of emotion. Again, I don't think emotion is simply situation appraisal according to various parameters. It seems like a much more subjective, embodied phenomenon to me, intimately related to physiology, but I still haven't been able to completely and clearly articulate what I think is wrong with their approach, so I'll leave it at that.

The third extension had to do with an imagery module in SOAR, under the assumption that certain types of cognition simply cannot be handled in a purely symbolic manner. For example, an agent putting together a jigsaw puzzle may have to mentally rotate a piece to determine whether it will fit in a particular spot. This is probably best accomplished with a subsystem that can represent images and perform transformations on them.
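Here's a toy sketch of the kind of non-symbolic operation such a module might support; the grid representation and fit test are my own illustration, not SOAR's imagery module:

```python
def rotate90(piece):
    # Rotate a 2D grid (list of lists) 90 degrees clockwise.
    return [list(row) for row in zip(*piece[::-1])]

def fits(piece, slot):
    # A piece fits if its filled cells exactly match the slot's holes.
    return piece == slot

piece = [[1, 1, 0],
         [0, 1, 1]]
slot  = [[0, 1],
         [1, 1],
         [1, 0]]

# Try all four orientations, as a mental-rotation process might.
for _ in range(4):
    if fits(piece, slot):
        print("fits!")
        break
    piece = rotate90(piece)
```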

It was a solid, interesting talk.


Session 8-03-2B: Theories of Mind 10:30am-12:00pm

This was a very strange session. I thought it had to do with theory of mind, but instead it was a hodge-podge of theories related to mind.

The first talk was interesting in that it was the only talk I saw at the conference that discussed the role of evolution in cognition. Unfortunately, I disagreed with nearly everything the presenter said. The title was "Massive Redeployment in the Evolution of Cognition". There's a term in evolutionary theory, "exaptation", for the process by which a given trait loses its original function and takes on a new one (e.g., fins becoming legs). The presenter used the term "redeployment" to refer to traits that maintain their original function while taking on additional functions. He claimed that this was critical in the evolution of the human brain, and that older brain areas were more likely to play not a single role but multiple roles. He had the gumption to call John Anderson's view of cognition "simplistic" (I don't think John Anderson was in the room). He cited data from a meta-study of imaging studies and calculated the average number of brain areas involved in a wide range of complex cognitive tasks. He determined that all brain areas are implicated in a variety of tasks, so that there is a many(tasks)-to-many(brain areas) mapping going on rather than a one(task)-to-many(brain areas) mapping.

I had two main problems with his talk. One, he never talked about the time course of activation. Presumably these areas were not all active at once in any given task; if one area was active, then another, that's evidence for more discrete, modular function. He completely collapsed the temporal aspect of activation. Two, he assumed that the older a trait is, the more likely it is to be redeployed, but he seemed to have no evidence for this in any other aspect of anatomy and physiology. I'm pretty sure it isn't true of any other biological system. Unless there is a particular pressure driving the exaptation of a trait, it can remain unchanged, become vestigial (lose its function), or disappear entirely. Shark physiology has remained relatively unchanged since dinosaurs roamed the earth, yet shark fins haven't taken on dozens of new functional roles just because they're ancient. Since this isn't a general property of evolution, he would need to explain what's special about brains such that older parts are constantly taking on new roles. He could make an argument from limited space in the skull, but then, if brain areas are so good at multitasking, why would brains need to grow larger at all? Anyway, there were all sorts of problems with the talk, but I still found it interesting.

The second talk was about how to effectively critique scientific theories. It was very abstract and philosophical, and I found my mind wandering often. He seemed to be saying something about being as charitable as possible when assessing a scientific theory, which I think involved not putting words in the theorist's mouth while still allowing for aspects the author might not have come up with that would flesh the theory out. Like I said, I didn't find it very interesting, so I didn't follow it well.

The third speaker couldn't make the conference because they couldn't get a visa.

The fourth speaker was William Bechtel, who co-edited the very nice volume "A Companion to Cognitive Science" (1998). His talk was all about how a mechanistic explanation of agency and cognition does not necessarily invalidate the concept of morality. It reminded me of Daniel Dennett's book "Freedom Evolves", in which he tried to argue, I think, that free will is some sort of emergent property of complex cognitive beings. Honestly, both Dennett's book and Bechtel's talk seemed like a lot of hand-waving to me. If you seek to explain a system in mechanistic terms, you are assuming it is a causal system, and there's no way to get around that, no matter how complex the system gets or what other emergent properties arise. If you build a machine that can repair itself, find its own energy source, and replicate, it may be more complex than one that can't, but guess what? It's still a machine, and it still obeys the causal laws of the universe. So while I think mechanistic agents can have morality, I don't think it's an emergent result of complexity. I think it's a simple function of having beliefs, goals, and values.

Anyway, not at all what I was expecting out of this session, but lots of interesting grist for the mill.


Session 8-03-3B: Language and Conceptual Understanding 1:30pm-3:30pm

The most interesting talk in this session was "How Language Affects Thought in a Connectionist Model" by Katia Dilkina, James McClelland, and Lera Boroditsky. She started out by discussing a 2003 study by Boroditsky that tested the influence of the grammatical gender of nouns in German and Spanish on subjects' impressions of the masculinity and femininity of the objects they name. The current study aimed to model this effect. The model had three modules for word representations, one each for English, German, and Spanish; a perceptual module, which held pictures of the objects; a semantic module, which held the meanings of words; and a descriptive module, which captured the impression of masculinity or femininity. They proposed two hypotheses for the language-on-thought effect. One was that subjects access the word when confronted with a picture of an object, and that this biases them (I think they said they manipulated the condition so there was interference with lexical access, possibly by having subjects repeat a nonsense word). The other was that gender is encoded as an aspect of the semantic representation of the object. The model's output mirrored human behavior. They also modeled an effect whereby the similarity of two objects is rated higher when the object's grammatical gender and the rater's gender match (masculine/masculine), and lower when they do not. They manipulated the model by adding noise to the lexical module and still found the effect, though it was reduced. They concluded that both semantic representations and verbalization contribute to the effects seen in the study.
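Here is a rough sketch of the two pathways as I understood them, under my own simplifying assumptions (a scalar masculine(-1)-to-feminine(+1) impression, arbitrary weights, and noise standing in for lexical interference); none of this is the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def gender_impression(semantic_part, lexical_gender, lexical_noise=0.0):
    # Impression on a masculine(-1)..feminine(+1) scale: a semantic
    # contribution plus a lexical one that noise can degrade.
    lexical_part = lexical_gender + rng.normal(scale=lexical_noise)
    return 0.5 * semantic_part + 0.5 * np.tanh(lexical_part)

# Assume grammatical gender has partially "leaked" into semantics.
semantic_bias = 0.3
trials = 10_000
clean = np.mean([gender_impression(semantic_bias, +1.0)
                 for _ in range(trials)])
noisy = np.mean([gender_impression(semantic_bias, +1.0, lexical_noise=2.0)
                 for _ in range(trials)])
print(f"intact lexical access: {clean:.2f}")  # strong feminine bias
print(f"lexical interference:  {noisy:.2f}")  # reduced, not eliminated
```

Averaged over trials, the noisy lexical pathway weakens the bias without eliminating it, which is the qualitative pattern they reported.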

Session 8-03-4C: Language Understanding II 3:30pm-5:00pm

The first speaker was sick, I think, so no presentation.

The second presentation was UL's Dr. Feist. The talk was about the Object-Relation continuum in language: how nouns, verbs, and prepositions fall along a continuum with regard to how much their meaning relies on relations. Object nouns, such as "rock", fall on the object end; relational nouns like "barrier" or "goal" are more relational; verbs are more relational still; and prepositions are the most relational of all. The experiments involved putting noun/verb pairs and verb/preposition pairs under semantic strain (e.g., "The lizard worshipped") and measuring their mutability via a paraphrase/backparaphrase protocol, seeing how many of the original words reappeared. The findings were consistent with the Object-Relation continuum: object nouns were less mutable than relational nouns, which were less mutable than verbs, which were less mutable than prepositions. The presentation was rather short, so there were lots of questions. One questioner suggested there might be overlap between the categories, so that some verbs might actually be less relational than some relational nouns. Another asked for clarification on the definition of "relational". So it was a good presentation.
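Out of curiosity, here's a crude sketch of how the mutability score might be computed; this is my reconstruction, not Dr. Feist's actual protocol:

```python
def mutability(original, back_paraphrase):
    # Fraction of the original words that did NOT reappear after the
    # paraphrase/backparaphrase round trip; higher = more mutable.
    orig = set(original.lower().split())
    back = set(back_paraphrase.lower().split())
    return len(orig - back) / len(orig)

# Under semantic strain, the verb tends to mutate while the object
# noun survives: here "worshipped" becomes "stared reverently".
print(mutability("the lizard worshipped",
                 "the lizard stared reverently at the sun"))  # ~0.33
```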

At this point I left to check out another session on Word Learning, specifically a paper that won the computational modeling prize for language, "Bilingual Lexical Representation in a Self-Organizing Neural Network Model". They modeled early and late second-language learning with 500 English and 500 Chinese words in an SOM (self-organizing map) using Hebbian learning. They found that with early L2 learning, whether English or Chinese came first, the semantic representations of the two languages were more localized and stable. With late L2 learning, however, the semantic representations for L2 were less stable and more scattered across the semantic map. The model was called DevLex II, following on a previous DevLex model. It was a nice talk and interesting work.
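To remind myself how these maps work, here's a minimal SOM sketch; this is my simplification, since DevLex II actually pairs phonological and semantic maps linked by Hebbian connections, which I don't model here:

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 16
weights = rng.normal(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def train_step(x, lr=0.1, sigma=2.0):
    # Find the best-matching unit (BMU) for input vector x.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Move the BMU and its grid neighbors toward x; nearby units learn
    # more, which is what produces topographic (localized) clusters.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    influence = np.exp(-grid_dist**2 / (2 * sigma**2))
    weights[:] += lr * influence[..., None] * (x - weights)

# Toy "lexicon": 50 random word vectors, trained with decaying learning
# rate. In DevLex II, an early-learned L2 carves out its own compact
# region of the map, while a late-learned L2 ends up scattered.
words = rng.normal(size=(50, dim))
for epoch in range(20):
    for w in words:
        train_step(w, lr=0.1 * (1 - epoch / 20))
```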


Rumelhart Award Talk: 5:30pm-6:30pm
Jeff Elman
"On Dinosaur Bones and the Meaning of Words"

The title refers to the metaphor of the speaker of a language as a paleontologist uncovering a buried skeleton, piecing together meaning as new parts emerge. He was mostly interested in discussing the meaning of words, and made the somewhat radical suggestion that words themselves don't carry as much meaning as previously thought, in the dictionary-definition sense. He followed a suggestion Rumelhart had made in the '70s that words are better thought of as operators (e.g., "+", "/") rather than operands (e.g., "x" or "6", the constants or variables in a mathematical equation). He ended with the suggestion that words themselves do not have meaning, but are cues to meaning.

He used many examples of ambiguous sentences, and discussed work in which subjects displayed longer reading times for sentences that defied their expectations, e.g.:

The lifeguard saved the drowning child. (read quickly)
The lifeguard saved a lot of money on car insurance. (read more slowly)
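My own way of making this expectation effect concrete (not from the talk) is surprisal: how unexpected a word is given its context, which is known to track reading time. A toy bigram version, with invented counts:

```python
import math

# Invented counts for what follows "saved" in a lifeguard context.
bigram_counts = {("saved", "the"): 90,   # "...saved the drowning child"
                 ("saved", "a"): 10}     # "...saved a lot of money..."
total = sum(bigram_counts.values())

def surprisal(prev, word):
    # Surprisal = -log2 P(word | context); higher = less expected.
    p = bigram_counts.get((prev, word), 1) / total  # crude floor for unseen pairs
    return -math.log2(p)

print(surprisal("saved", "the"))  # ~0.15 bits: expected, read quickly
print(surprisal("saved", "a"))    # ~3.3 bits: unexpected, read slowly
```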

Elman focused on the importance of temporality and context in deriving meaning, rather than viewing words as static bins of relatively independent meaning. He talked about revisiting the mostly-discarded idea of schemas, but updating the concept to "event schemas", which again would capture the important elements of temporality and context to derive meaning.

I found the talk interesting. It was refreshing to hear him stressing the importance of time in cognitive processing, and the role of prediction. I don't know to what extent he was trying to be provocative, in suggesting that words have no meaning in isolation. That seems like an overstatement, and I'm not sure he meant it literally. I think he was mostly trying to stress the importance of context and the interdependence of words to arrive at meaning, and that doesn't seem very controversial at all.

Our heads were full but our stomachs were empty. Again, we skipped the evening poster session in favor of dinner.

Next: Day Three

