Saturday, May 7, 2011

In a recent study published in the April issue of Biological Psychology, researchers at UT Austin and Yale University replicated schizophrenia-like symptoms in a neural network by hampering its ability to forget.  This research reinforces the hypothesis that schizophrenia can result when people lose the ability to forget unimportant information.  The neural network, which was designed to process natural language, responded to questions much the way a person with schizophrenia might be expected to respond.  From the article:
DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing. In another instance, DISCERN began showing evidence of “derailment”: replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.
What’s striking is that a neural network is giving useful information about how the human mind might function.  It is not hard to imagine how even an imperfect computer-based model of the human mind, one that replicates only partial aspects, could benefit research on mental illness.  One of the struggles of psychology is that you can only ever guess what is happening inside the brain; the closest researchers can get to watching the mind at work is fMRI.  If a neural network can replicate a mental illness, however, then experimental parameters can be adjusted in ways that would be impossible with human test subjects.  In principle, key factors could be isolated through repeated experimentation without concern for harming the subject.  Intentionally inducing schizophrenia in a healthy human is abhorrent; intentionally inducing schizophrenia in a computer is good science.
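DISCERN itself is a language-processing network, but the core idea, that a memory which never discards its traces eventually corrupts recall, can be illustrated with a much simpler model.  The sketch below is my own toy example, a classic Hopfield-style associative memory rather than the researchers' model: every stored pattern leaves a permanent trace, and once too many accumulate, retrieval blends them together.

```python
import numpy as np

def train(patterns):
    """Hebbian storage: every pattern leaves a permanent trace (nothing is forgotten)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / n

def recall(w, cue, steps=20):
    """Repeatedly update all units until the state settles on a stored memory."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

def recall_accuracy(n_patterns, n_units=100):
    """Store n_patterns random memories, then try to recall the first from a noisy cue."""
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
    cue = patterns[0].copy()
    flip = rng.choice(n_units, size=n_units // 10, replace=False)
    cue[flip] *= -1  # corrupt 10% of the cue
    return float(np.mean(recall(train(patterns), cue) == patterns[0]))

print(recall_accuracy(3))   # light memory load: the noisy cue is cleaned up
print(recall_accuracy(40))  # far past capacity: recall blends stored patterns
```

A network of 100 units can hold only a dozen or so random patterns; past that point, answers to a specific cue come back contaminated by other memories, a crude analogue of the "derailment" described above.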

More on Multitasking

         PBS Frontline did an excellent piece on multitasking in its episode “Digital Nation,” about the effects of the wired society on people.  It clearly shows the gap between what people think happens when they multitask and what actually happens.




The Fallacy of Multitasking

A recent study on multitasking in the journal Cyberpsychology, Behavior, and Social Networking reminded me of a subject I have long meant to write about: the fallacy of multitasking.  Multitasking is a great myth.  It would be wonderful to do two tasks at once, but that is not how our brains are wired.  That doesn't stop people from treating multitasking as a skill: students often claim they can surf the internet while paying attention in class, and job postings routinely list multitasking as a preferred skill.


At best, our brains work like an old Windows computer.  Computers claimed to be able to multitask long before they actually could; that was the claim to fame for Windows 3.1, which supposedly let users do two tasks at once.  Except it didn’t really work: a single processor can only do one task at a time.  Advances in computer multitasking have been advances in processing power, memory and software, enabling the computer to switch faster between more tasks.  The only way to truly run tasks in parallel is to add more processors.
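The switching-versus-parallelism distinction is easy to see in a toy simulation.  The sketch below is my own illustration, not how any real scheduler works: one "core" round-robins between two tasks and pays a small cost at every context switch, while two cores simply run the tasks side by side.

```python
def run_single_core(task_ticks, slice_len, switch_cost):
    """Round-robin two tasks on one core, paying a cost at every context switch."""
    remaining = list(task_ticks)
    elapsed = 0
    current = 0
    while any(remaining):
        if remaining[current] == 0:   # this task is done; move to the other one
            current = 1 - current
            continue
        work = min(slice_len, remaining[current])
        remaining[current] -= work
        elapsed += work
        if any(remaining):            # still more to do: pay the switch cost
            elapsed += switch_cost
            current = 1 - current
    return elapsed

def run_dual_core(task_ticks):
    """True parallelism: each task runs on its own core, no switching."""
    return max(task_ticks)

tasks = (100, 100)
print(run_single_core(tasks, slice_len=10, switch_cost=2))  # 238: 200 ticks of work plus 19 switches
print(run_dual_core(tasks))                                 # 100: both tasks finish together
```

The single core never does two things at once; it just interleaves, and every switch is pure overhead.  The same arithmetic applies to a brain shuttling between a lecture and a browser tab.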


It turns out that the human mind works in much the same way.  The study looked at how often people switched attention between media devices while using two at the same time.  While using a computer and watching television, for instance, users claimed to have switched attention about 15 times over the space of a few commercials.  The actual number was ten times that.  The users were operating in a highly distracted state.


Like computers, the people who are better at multitasking, performing better than others when trying to do two tasks simultaneously, have greater working memory capacity: they can hold more information in mind at once.  Even so, “high” multitasking performance is far below focused performance.


There is a place for multitasking, but it is not in the classroom, and probably not in the workplace.  People can only multitask effectively when performance requirements are low or unimportant, like whistling while walking or eating while watching television.  The idea that someone can do more than one demanding task at a time effectively remains a myth.

        

Sunday, April 24, 2011

MRI Used to Scan Brains of Meditating Monks

        I love functional MRI (fMRI) machines, which give a real-time view of which parts of the brain are active while subjects perform cognitive tasks.   In the last week, there seems to be a theme of "What scientists learned about meditation by putting Buddhist monks into fMRI machines."
    
       According to one BBC article, it appears that the human mind is separated into two cognitive networks, one internal and the other external.  The internal network deals with thoughts about the self.  The external network handles thoughts about everything outside of oneself.    Meditation, it appears, breaks down the barriers between these two networks.  Instead of using one network or another for a particular task, people in a meditative state use both at the same time.

         USA Today also had an article about putting Buddhist monks into fMRI machines.  The article covers a recent paper in Frontiers in Neuroscience that looks at the effects of meditation on rationality.  It appears that people who meditate frequently are more "rational" when thinking about rewards: when playing the Ultimatum game, meditators used entirely different parts of their brain in deciding how to act.

        There is a lot to criticize about fMRI.  It measures activity indirectly, by observing blood flow rather than electrical activity, and its resolution is relatively low.  Because blood flow lags neural firing, there is also a delay of seconds between a thought and its detection.  Regardless, the ability to observe physical activity correlated with mental activity is a huge step forward in understanding.  For instance, we now know there is an actual physical difference in the brain of someone who frequently meditates.
        
           The secret of brain-imaging research is that the major advances don't come from the imaging technology, which has remained largely unchanged over the last decade, but from advances in computing power.

Digital Human Brain Map Unveiled

     A change of pace: rather than looking at how ICT is changing how we think, today we look at how ICT is changing our understanding of the mind itself.  Microsoft co-founder Paul Allen funded a project called the Allen Human Brain Atlas, which recently revealed the world's first computerized brain map.  From the website:


     The Allen Human Brain Atlas is a unique multi-modal atlas of the human brain that integrates anatomic and genomic information, coupled with a suite of visualization and mining tools to create an open public resource for brain researchers and other scientists across a wide range of specialties.
     Data modalities incorporated into this resource include magnetic resonance imaging (MRI), diffusion tensor imaging (DTI), histology and gene expression data derived from both microarray and in situ hybridization (ISH) approaches.


   This atlas should prove a critical component in understanding the human mind.  It maps over 1,000 unique sites in the brain, correlated with their known function and chemistry, and the information is available for free online.  The map is part of a $55 million undertaking and is a major step forward in understanding the structure of the human brain.  Not only will it help scientists working on cures for neurological diseases like Alzheimer's, it will also push forward research on cognition and thought.

Sunday, April 17, 2011

Why Is Anonymous' Latest Action So Pathetic?

Over the course of the last week, Anonymous decided to move against Sony for going after someone who hacked its PS3.  The result has been less than impressive; Sony spokespeople described it as a minor annoyance.

To explain what is going on here, let's start with two key phenomena, deindividuation and the online disinhibition effect, critical linchpins of the Anonymous personality.

Deindividuation is the loss of individual identity experienced by members of a group or crowd.  The phenomenon has been known to psychology since 1895, with the publication of Gustave Le Bon's work on crowd theory.  It explains why crowds will often do things, like encouraging a would-be suicide victim to jump, that individual members of the crowd would not.  Three elements are key to deindividuation: anonymity, diffusion of responsibility and crowd size.  The effect is commonly used to indoctrinate people into militaries and religions.  It turns out that it can be equally effective online.

The online disinhibition effect is the phenomenon whereby people are willing to say and do things online that they would not do otherwise.  It has been blamed for the prevalence of racist and homophobic speech online, and it is compounded when people speak anonymously.

Now look again at Anonymous.  Given how individuals act online and as part of groups, the organization, by its very structure, foments erratic and destructive behavior.  The rule organizing participants is simple: anyone can be Anonymous; all they have to do is act in its name.  Working together, users are part of a cause, but they are encouraged not to feel responsible for their actions; those actions belong to the collective known as Anonymous.

This can lead to a lot of contradictory actions.  Factions within the community play out as Anonymous fighting itself; since anyone can be Anonymous, there is no one true Anonymous.  It also means that not all actions will be executed with equal skill.  All it takes for Anonymous to do something is for a small group to decide to act as Anonymous, and suddenly "Anonymous is attacking Sony."  This creates all sorts of problems for law enforcement trying to stop the organization: you can't simply arrest the leaders, because there are none.
      

Wednesday, April 13, 2011

How Twitter and Google Turn Individual Thoughts into Actionable Knowledge

Think of every Google search and every tweet as a discrete unit of thought.  Today I want to share two examples of how analyzing this information is improving decision making.

The first is tracking flu trends through the analysis of Google search query data.  CDC and Google researchers discovered a direct correlation between flu-related search queries in Google and an eventual flu outbreak.  Even more interesting, the query data precedes the data collected by health care providers, meaning outbreaks can be detected in an area before the healthcare system realizes they are happening.
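The claim that search data "leads" clinical data is a statement about lagged correlation, which is easy to see with toy numbers.  The weekly series below are invented for illustration, not CDC or Google data: clinic-reported cases roughly track search volume one week late, so the one-week-lagged correlation comes out higher than the same-week one.

```python
import numpy as np

# Invented weekly numbers, for illustration only (not CDC or Google data):
# clinic-reported cases roughly follow search volume one week later.
searches = np.array([10, 12, 30, 55, 80, 60, 35, 20, 15, 12], dtype=float)
cases    = np.array([ 5,  6,  8, 25, 50, 75, 55, 30, 18, 10], dtype=float)

def lagged_corr(leader, follower, lag):
    """Correlate the leader against the follower shifted `lag` steps later."""
    if lag == 0:
        a, b = leader, follower
    else:
        a, b = leader[:-lag], follower[lag:]
    return float(np.corrcoef(a, b)[0, 1])

print(lagged_corr(searches, cases, 0))  # same week: a decent fit
print(lagged_corr(searches, cases, 1))  # one-week lag: a much better fit
```

When the best-fitting lag is positive, the leading series carries advance warning, which is exactly what makes query data useful for outbreak detection.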

The second use of aggregated human thought is the correlation between Twitter and stock performance.  Economists at the Technical University of Munich (Technische Universitaet Muenchen, TUM) discovered that Twitter buzz is an accurate predictor of stock performance, with the buzz leading the market by about a day.  The scientists have created a website where you can see what their model predicts based on real-time information.

These systems have important implications for how knowledge is produced.  The internet is making the process less formal and more accurate.  Take the declaration of a flu outbreak.  The traditional system is for hospitals to aggregate information on the people coming through their doors, which skews the statistics toward those who are sick enough to bring themselves to a hospital.  The information is then collected from hospitals by the CDC, and an outbreak is declared.  That declaration requires a large formal structure and the involvement of multiple experts.  Contrast it with Google Flu Trends, where the process covers a wider data set and is real-time and automatic.

The Twitter case is interesting for the same reasons as Google Flu Trends, but it also includes a tacit voting system, in which the volume of information is itself a statement about the quality of the information: users who give better tips are more likely to be repeated by other users.  Unlike Google Flu Trends, this information doesn't precede the traditional sources; it incorporates them.  Information from outside sources, such as the financial media or stock tips from friends, makes its way back into the system through Twitter users, where it is put on an equal footing with all other data.
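At its simplest, that tacit voting system amounts to counting repetitions.  A minimal sketch, using invented tweets and tickers (nothing here comes from the TUM study):

```python
from collections import Counter

# Invented tweets and tickers, for illustration only.
# Each tuple is (author, tip); repeating a tip counts as a vote for it.
tweets = [
    ("a", "buy ACME"), ("b", "buy ACME"), ("c", "buy ACME"),
    ("d", "sell ACME"),
    ("e", "buy WIDGETCO"), ("f", "buy WIDGETCO"),
]

def tally(tweets):
    """Rank tips by how many messages repeated them."""
    return Counter(tip for _, tip in tweets).most_common()

print(tally(tweets))  # [('buy ACME', 3), ('buy WIDGETCO', 2), ('sell ACME', 1)]
```

The real models are far more sophisticated, but the principle is the same: the crowd's repetition behavior does the quality filtering.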

Unfortunately these systems are vulnerable to misuse, and the Twitter system may become less useful as it becomes more popular.  While there is little advantage for anyone in flooding Google to create a false declaration of a flu outbreak, the incentive to manipulate the Twitter stock-pick system will rise with its popularity: the more popular the service, the more effectively it can move the market.  It is easy to see how someone could profit by creating a false move in a stock, injecting bad information through zombie Twitter accounts.

In both cases, better decisions can be made by systems that draw on what the mass of individuals is thinking rather than on the declarations of a few experts.