One of the things our lab is working on is developing methodologies to test the synchronization theory of flow. In a recent study, we demonstrated that secondary task reaction times can be used as a low-cost, unobtrusive, and online measure of flow experiences. This study was presented at the 2013 International Communication Association conference. For a brief summary of the results, see the poster below. We are now designing a pilot study to fully test the cognitive synchronization of attentional and reward networks using fMRI.
Weber, R., & Huskey, R. (2013, June). Attentional capacity and flow experiences: Examining the attentional component of synchronization theory. Paper accepted to the annual conference of the International Communication Association, London, United Kingdom.
Wow, it has been a long time since I last posted here. What gives? I’ve been focusing on a few different projects.
Roughly one year ago, I helped launch our lab website: medianeuroscience.org. I serve as a content administrator on the site and manage most of the content updates. I also manage the lab’s Twitter account: @MediaNeuro. You will find the most recent updates on the lab’s work (including my own) at these two pages. It really is an exciting time to be working in this area!
What else… I just defended my thesis: Does Signaling Theory Account for Aggressive Behavior in Video Games? What a learning experience. The project took nine months from inception to completion. Data collection alone took 10 weeks and required something like 150 lab hours. Data coding, pre-processing, and analysis probably clocked in at another 60 hours. I learned a few new statistical techniques in the process and have a long list of things I learned in the lab. The dataset is quite robust, and my adviser and I are working on a few follow-up analyses (more on that later). I’ll post a .pdf of the thesis after I make a few final edits. For now, here is the abstract:
Signaling theory originated in evolutionary biology and explains the mechanisms behind the honest communication of information between organisms. Communication scholars are increasingly turning to signaling theory as a way to test evolutionary explanations for human behavior. The present study tests whether receiver-dependent costly signals can be used to predict the moment of aggressive behavior in video game environments. Results show that high status (but not high trait aggression) male subjects were fastest to engage in combat against a low voice pitch male opponent – but only when subject skill was high.
Another exciting development from the past year: I work as one of two tech support staff in our department. We are largely responsible for maintaining faculty/staff computers, lab computers, technical teaching resources, and the department website. The department recently secured funding to revitalize our research facilities, and I helped with the planning, purchasing, and deployment of the department’s new equipment. I won’t get into all the details, but we deployed 30 new computers in the past six months (17 of these machines were complete custom builds). If you are interested, check out the department’s research capabilities.
Other than that, I’ve been busy with coursework, trying to get a handful of studies submitted to conferences or for publication, and getting a few more studies off the ground. I’m off, for now! I’ll leave you with this great strip from PhD Comics.
The goal of this pre-conference is to bring together scholars who are working across sub-fields of communication studies using evolutionary theory, neuroscience, and other biological measures to address core questions in communication studies. A critical mass of scholars is now employing such methods to advance theory and application within communication studies. Biological paradigms clearly offer additional questions and methods that can be added to our research agenda; at the same time, incorporating biological explanations and methods can also highlight new questions. In addition to plenary talks given by invited senior scholars in the area, pre-conference participants will share new data and ideas and discuss a vision for how communication studies can best leverage such new theorizing and study paradigms moving forward.
More information (e.g., costs, deadlines, agenda) is available here. Hopefully I’ll see you there!
The Communication department at UCSB publishes a quarterly newsletter, The Gaucho Communicator. This newsletter generally contains useful information about the department, past/upcoming events, student opportunities, etc. The Fall 2012 issue features a brief profile of yours truly (page 6). <insertsnarkycommenthere>
UPDATE: Bio is on page 7. <insertevensnarkiercommenthere>
This video comes via the folks at Ethical Technology. Creators May-raz and Lazo offer a glimpse into an augmented reality future that allows individuals to monitor and interpret everything from environmental to nonverbal cues. Taken at face value, having access to these capabilities seems exciting. However, there are potential issues. In Sight, the slimy protagonist uses augmented reality capabilities in an attempt to seduce his date. Creepy, but an interesting premise that highlights some of the potential drawbacks of an imaginable future.
Sony Online Entertainment is beta testing a new feature for EverQuest II, SOEmote. This sort of technology has been around for a while, but to my knowledge, this is the first time it has been incorporated into an MMO (and a popular one, at that). This is likely an interesting feature for players, even if the audio is a bit off-putting. The audio fonts are pitched as a feature for role-players, but is this something they even want? I’m not terribly familiar with EQ2, but voice chat (outside of instances and raids) never really took off in WoW (and even there, third-party software seems dominant). For me, the face-tracking feature is far more interesting. I’m often put off when my character’s head movements and facial expressions are different from what I expect them to be. SOEmote seems to do a nice job capturing, and replicating, facial movements (be sure to watch the video in full screen, and check out all those points of reference on his mouth, eyes, and eyebrows).
I digress… What has me most excited is the opportunity this feature offers researchers. One can easily imagine several studies testing immersion, nonverbal cues, realism, etc. According to Kotaku, SOE will demo SOEmote at E3. Exciting!