I also continue to contribute to the foundational development of functional near-infrared spectroscopy (fNIRS), both through hardware additions and through signal processing [22, 31-34].
I have designed and built a number of GPIO- and Arduino-based solutions that allow NIRS, EEG, eye-tracking, and physiology systems to be synchronized and recorded locally and across labs (even between London and Japan). This has allowed us to dramatically expand hyperscanning, both to address questions about cultural differences and to increase the number of participants. I have also contributed to signal processing in the fNIRS field through a series of studies on best practices for removing systemic influences from the hemodynamic response.
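The core of such cross-system synchronization can be illustrated with a short sketch. This is not the lab's actual code; it simply shows, under the assumption that all systems receive the same shared TTL trigger pulses, how a least-squares fit of paired trigger timestamps recovers the offset and drift between two device clocks so that events can be mapped onto a common timeline.

```python
# Illustrative sketch (not the actual lab software): two recording systems
# each timestamp the same shared TTL trigger pulses on their own clocks.
# A least-squares linear fit of the paired timestamps estimates the clock
# offset and drift, letting events from system A be mapped onto system B.

def fit_clock_mapping(ts_a, ts_b):
    """Fit ts_b ~ slope * ts_a + offset from paired trigger timestamps."""
    n = len(ts_a)
    mean_a = sum(ts_a) / n
    mean_b = sum(ts_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(ts_a, ts_b))
    var = sum((a - mean_a) ** 2 for a in ts_a)
    slope = cov / var              # clock drift between the two devices
    offset = mean_b - slope * mean_a
    return slope, offset

def map_time(t, slope, offset):
    """Convert a time on system A's clock to system B's clock."""
    return slope * t + offset
```

With triggers recorded at 0, 10, and 20 s on one system and 5, 15, and 25 s on the other, the fit recovers a unit slope and a 5 s offset, and any intermediate event time can then be translated between systems.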
I recently published a peer-reviewed manuscript in the journal Neurophotonics [34] that describes how to modify optical recordings to use short separation distances between emitter and detector optodes in addition to long separation distances. This hardware addition of short-channel optodes allows superficial scalp and skin blood flow to be isolated by principal component analysis and removed from cortical hemodynamics in the temporal domain. Removing this artifact is essential for specificity when applying optical neuroimaging to human-machine interfaces [35]. I have also contributed to earlier studies showing that the same systemic artifact can be removed in the spatial domain, using full-head optical coverage and regressing the primary spatial component out of the neural signals [31, 33].
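The temporal-domain correction can be sketched in its simplest form. This is an illustrative, generic version of short-separation regression rather than the published pipeline verbatim: because the short channel samples mostly scalp and skin hemodynamics, its least-squares projection is subtracted from the long channel, leaving the cortical component.

```python
# Illustrative sketch of short-separation regression (a standard approach,
# not the published pipeline verbatim). The short-channel signal, dominated
# by superficial blood flow, is regressed out of the long-channel signal.

def short_channel_regression(long_ch, short_ch):
    """Return the (mean-centered) long-channel signal with the
    short-channel component removed by ordinary least squares."""
    n = len(long_ch)
    mean_l = sum(long_ch) / n
    mean_s = sum(short_ch) / n
    l = [x - mean_l for x in long_ch]   # center both signals
    s = [x - mean_s for x in short_ch]
    beta = sum(a * b for a, b in zip(l, s)) / sum(b * b for b in s)
    return [a - beta * b for a, b in zip(l, s)]
```

For a synthetic long channel built as twice the short channel plus an uncorrelated "cortical" signal, the regression recovers the cortical part exactly.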
In addition to these data-recording and signal-processing tools, I have designed and built infrastructure that allows controlled viewing of a partner (human, robot, or animal) through a “smart glass” window that can be programmatically switched between transparent and opaque (used in the emotion experiments above). The rapid state switching of the smart glass lets us present partners' faces to participants in paradigms compatible with both NIRS and EEG timescales. This unique addition improves our understanding of multimodal signals and, in particular, of the relationship between the localizable hemodynamic NIRS signal and the fast electrocortical signals recorded with EEG.
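The triggering logic behind such a window can be sketched abstractly. The timings and event names below are hypothetical and no real GPIO driver is used; the sketch only shows how a schedule of state changes would be built so the glass turns transparent at each stimulus onset and opaque again after a fixed viewing period, with each event then driving the glass controller via a digital output.

```python
# Illustrative sketch (hypothetical timings, no real GPIO driver): build a
# schedule of smart-glass state changes so the window becomes transparent
# ("clear") at each stimulus onset and opaque again after the viewing
# period. In practice each event would toggle a digital output pin wired
# to the smart-glass controller.

def glass_schedule(onsets, view_s=2.0):
    """Return (time, state) events: 'clear' at each onset,
    'opaque' view_s seconds later."""
    events = []
    for t in onsets:
        events.append((t, "clear"))
        events.append((t + view_s, "opaque"))
    return events
```

Scheduling the state changes as timestamped events, rather than toggling ad hoc, is what keeps the glass transitions alignable with both the slow NIRS and fast EEG timebases.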
To further explore the neural networks associated with social interaction, I have developed fully controllable rigged 3D characters for the Unity game engine, driven by the OpenFace optical facial-tracking system combined with a Tobii eye-tracker. This combination of tracking devices allows fully articulated semi-realistic humanoid or cartoon characters to interact with human partners in real time. These characters are targeted at understanding neural activity during social interaction in children with ASD. Children have been shown to be more likely to interact with virtual characters and avatars, but it remains an open question which aspects of social feedback best inform our understanding of the autistic brain.
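The bridge between face tracking and character animation can be sketched as a simple mapping step. The specific action-unit-to-blendshape table below is an assumed, simplified correspondence (not the production rig): OpenFace reports facial action-unit intensities on a 0-5 scale, and each frame these are normalized and clamped into the 0-1 blendshape weights a Unity character controller would consume.

```python
# Illustrative sketch (hypothetical mapping, not the production rig): convert
# OpenFace action-unit intensities (reported on a 0-5 scale) into 0-1
# blendshape weights that a Unity character controller could apply per frame.

AU_TO_BLENDSHAPE = {          # assumed, simplified correspondence
    "AU12": "mouthSmile",     # lip corner puller
    "AU04": "browDown",       # brow lowerer
    "AU45": "eyeBlink",       # blink
}

def au_to_weights(au_intensities):
    """Map {AU name: intensity 0-5} to {blendshape name: weight 0-1}."""
    weights = {}
    for au, shape in AU_TO_BLENDSHAPE.items():
        raw = au_intensities.get(au, 0.0) / 5.0   # normalize 0-5 -> 0-1
        weights[shape] = max(0.0, min(1.0, raw))  # clamp noisy estimates
    return weights
```

Clamping matters in practice because tracker estimates occasionally overshoot their nominal range, which would otherwise produce distorted character poses.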