I began my undergraduate studies firmly believing that I wanted to study two separate things: physics and music. It was only after my second year that I realized how much overlap there was between these two seemingly disparate disciplines. Now, as an editor at Nature Human Behaviour, I am continually learning how seemingly distinct fields can inform one another, and I am frequently challenged by the miscommunication that can occur between people who have spent their lives researching the same phenomenon in different disciplines. Surprisingly, however, I encountered little such miscommunication at my first computational social science conference, the annual meeting of the Computational Social Science Society of the Americas.
Computational social science encompasses quantitative, model-based research that is used to investigate and understand social phenomena. It therefore covers social behavior on Twitter, crowd behavior in emergency situations, social segregation, and almost any other social process one can imagine, which means that the conference itself covered a wide range of topics. Some examples include algorithms that can detect Russian bots (Peter Chew), new ways to calculate the risk of clinical trial failures (Simon Gelleta), and new models of social learning based on observations of honey bee behavior (Steven Kimbrough); and these are only three talks from the first day!
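To make "model-based" a little more concrete: a classic example from this field is Schelling's segregation model, in which agents with only a mild preference for like neighbours end up producing strongly segregated neighbourhoods. The sketch below is my own minimal, illustrative Python implementation, not taken from any talk at the conference; the grid size, vacancy rate, tolerance threshold, and number of steps are arbitrary choices for illustration only.

```python
import random

# Minimal, illustrative Schelling-style segregation model.
# All parameters below are arbitrary choices for demonstration.
SIZE = 20          # grid is SIZE x SIZE
VACANCY = 0.1      # fraction of empty cells
TOLERANCE = 0.3    # an agent is content if >= 30% of its neighbours share its group
STEPS = 50

def make_grid():
    # Randomly fill the grid with two groups ("A", "B") and some vacancies (None).
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < VACANCY else ("A" if r < (1 + VACANCY) / 2 else "B"))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbours(grid, x, y):
    # Return the occupied cells among the (up to) eight surrounding cells.
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < SIZE and 0 <= ny < SIZE and grid[nx][ny] is not None:
                out.append(grid[nx][ny])
    return out

def unhappy(grid, x, y):
    # An agent is unhappy if too few of its neighbours belong to its own group.
    nbrs = neighbours(grid, x, y)
    if not nbrs:
        return False
    return sum(n == grid[x][y] for n in nbrs) / len(nbrs) < TOLERANCE

def step(grid):
    # Each agent flagged as unhappy moves to a randomly chosen empty cell.
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] is not None and unhappy(grid, x, y)]
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))

def segregation_index(grid):
    # Average share of like-group neighbours across all occupied cells.
    shares = []
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is None:
                continue
            nbrs = neighbours(grid, x, y)
            if nbrs:
                shares.append(sum(n == grid[x][y] for n in nbrs) / len(nbrs))
    return sum(shares) / len(shares)

if __name__ == "__main__":
    grid = make_grid()
    print(f"initial like-neighbour share: {segregation_index(grid):.2f}")
    for _ in range(STEPS):
        step(grid)
    print(f"final like-neighbour share:   {segregation_index(grid):.2f}")
```

Running the script prints the average share of like-group neighbours before and after the simulated moves; the point of the exercise is simply that a few lines of agent-level rules can generate a population-level pattern worth studying.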
One could imagine that a conference covering such different topics would consist of short, dense talks understandable only to a small fraction of the audience. However, the exact opposite was true. For example, Leigh Tesfatsion gave an invited talk on agent-based computational economics (ACE). ACE draws on cognitive science, computer science, and evolutionary economics, so I expected to understand only the parts of the field that drew on cognitive science. Instead, Leigh defined the field, its principles, and its objectives in such a clear and logical way that even I, with my superficial understanding of agent-based models and no knowledge of economics, could appreciate the contributions that she, and the field, had made.
Spending roughly two and a half days with these computational social scientists taught me not only about the interesting projects going on in the field, but also about clear and efficient communication with a broad audience. In a way, the conference itself was an exercise in computational social science: it provided the empirical data from which one could build a model of a clear and effective presentation. First, terms must be defined in clear, unambiguous language. Second, the aims must be stated. Third, the model must be displayed graphically but explained verbally. Fourth, the conclusions must be stated in a way that people understand, rather than with math and numbers alone (for example, Mirta Galesic’s conclusion from her recent Nature Human Behaviour paper on predicting the 2016 election: “Responses to social-circle questions predicted election outcomes on national, state and individual levels, helped to explain last-minute changes in people’s voting intentions and provided information about the dynamics of echo chambers among supporters of different candidates.”). This lesson is one I apply on a daily basis when suggesting to authors how to make their papers accessible to our broad readership. I would therefore like to thank the organizers and participants of CSS not only for teaching me about their field, but also for showing me that communicating with a broad audience is not hard at all.