The visualizations below were made for one project or another of mine. Each was implemented in Python with its data-science stack. Contact me for code.
This visualization shows how interacting, simulated agents (from this model by Richie, Yang, and Coppola, 2014) eventually agree upon which gestures should refer to an object/concept. The x- and y-axes are the probabilities of producing two different gestures for the given object/concept. Each marker/line represents a single agent, and hotter colors represent later time points. We initially made this ‘phase portrait’ (in the parlance of dynamical systems) hoping to understand why certain social network topologies hasten the agents’ stabilization, but ended up not pursuing that question very far. Regardless, I think this visualization nicely shows how the agents initially fumble around for a long time, barely changing their probabilities, but once they start to move toward one of the corners (each of which represents an attractor in the ‘phase space’), their movement accelerates. It also shows that the agents move through the space more or less as a single group. That is, it was a priori conceivable that different subgroups of agents would move into different regions of the space and only later resolve their differences; the animation shows that this is not the case. This visualization was built with Python’s matplotlib.
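To give a feel for this kind of figure, here is a minimal sketch (not the original model; the conformity dynamic and all parameters are simplifications I chose for illustration) of a 2-D phase portrait: each agent holds a probability vector over three gestures and repeatedly shifts toward whichever gesture is currently most popular in the population. The axes are the probabilities of the first two gestures, and hotter colors mark later time steps.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

n_agents, n_steps, eta = 10, 60, 0.1

# Start all agents near the center of the probability simplex, with a
# slight shared bias toward gesture 0 so the population deterministically
# converges on that corner. Each row sums to 1.
offsets = np.linspace(-0.02, 0.02, n_agents)
probs = np.column_stack([0.36 + offsets, 0.32 - offsets / 2, 0.32 - offsets / 2])

history = [probs.copy()]
for _ in range(n_steps):
    # Identify the currently most popular gesture across the population...
    target = np.eye(3)[np.argmax(probs.mean(axis=0))]
    # ...and have every agent conform toward it a little.
    probs = probs + eta * (target - probs)
    history.append(probs.copy())
history = np.stack(history)  # shape: (n_steps + 1, n_agents, 3)

# Plot each agent's trajectory, colored by time step (hotter = later).
fig, ax = plt.subplots()
for i in range(n_agents):
    ax.scatter(history[:, i, 0], history[:, i, 1],
               c=range(len(history)), cmap="hot", s=8)
ax.set(xlabel="P(gesture 1)", ylabel="P(gesture 2)")
fig.savefig("phase_portrait.png")
```

With this toy update rule the whole group drifts to the (1, 0) corner of the plot as a single cluster, which is the qualitative pattern the animation shows (though the real model's acceleration near the corners comes from its own reinforcement dynamics, not from this simplified rule).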
This is an animation showing a gradual roll-out of site updates (‘infections’) to users of a website. Users of this site coach one another on the site’s content (the site contains educational material), thus forming a directed graph of users via the coaches/is-coached-by relation. Uninfected users are small and red; infected ones are large and blue. The coach in a coach-coached pair sits at the thicker end of the edge. One challenge facing these so-called ‘infections’ is that one might want to limit infection to only a certain number of users in the coaching graph, while also minimizing the number of neighbors (coaches or coached) of infected users who did not get the update. After all, users are not web developers and may be confused if a coach has one version of the site while the coached has another. One approach is to detect community structure in the network, where users are densely connected within a community and sparsely connected between communities, and then limit infection to a community close to the target size (an approach I did implement with success). However, the animation suggests that it might also suffice to simply pick a random node and run infection until close to the desired number of users are infected. Why? The breadth-first search that my infection algorithm implements tends to infect all the users of a community/cluster before moving on, so it keeps communities relatively intact in terms of site version. This visualization was built with Python’s NetworkX and matplotlib.
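The BFS idea above can be sketched as follows, on a hypothetical coaching graph (not the real site data): two densely connected communities joined by a single bridge edge, with infection capped at a target number of users. Edges are followed in both directions, since a version mismatch confuses either side of a coach/coached pair.

```python
from collections import deque
import networkx as nx

# Hypothetical coaching graph: two dense five-user communities...
G = nx.DiGraph()
G.add_edges_from((a, b) for a in range(5) for b in range(5) if a != b)
G.add_edges_from((a, b) for a in range(5, 10) for b in range(5, 10) if a != b)
# ...joined by a single bridge edge (user 4 coaches user 5).
G.add_edge(4, 5)

def infect(G, seed, limit):
    """Breadth-first infection from `seed`, capped at `limit` users."""
    infected, queue = {seed}, deque([seed])
    while queue and len(infected) < limit:
        user = queue.popleft()
        # Treat edges as undirected: both a user's coaches and coached.
        for nbr in sorted(set(G.successors(user)) | set(G.predecessors(user))):
            if nbr not in infected and len(infected) < limit:
                infected.add(nbr)
                queue.append(nbr)
    return infected

# BFS exhausts the seed's community before crossing the bridge, so the
# first community ends up entirely on one site version.
print(infect(G, seed=0, limit=5))  # → {0, 1, 2, 3, 4}
```

Raising the limit past the first community's size makes the infection spill across the bridge, but only after the seed's community is fully covered, which is the community-preserving behavior described above.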
This visualization shows how activations of nodes in a neural network for word production (model here) change over time, in response to activating a certain concept node in the top layer. The general architecture of the network is as follows: the top layer corresponds to concepts (simply numbered for convenience), the next layer to words, the next to phonemes (letters, roughly), and the last to distinctive features (the articulatory components of phonemes, roughly). One simulates the model by activating a concept at the top and allowing activation to feed down to the corresponding word(s) for that concept, then to the corresponding phonemes for those words, and then to the corresponding features for those phonemes. Activation can also feed back up each layer, except to the concept layer, and words inhibit one another. Target nodes are drawn bigger than non-target nodes, and activation is indicated by node darkness. I won’t go over the research question associated with the model (unless the reader is curious, in which case contact me!), but this model was a reimplementation and extension of Peter Graff’s 2012 dissertation – his model was essentially the same, except it lacked the feature layer at the bottom. Since I was reimplementing his model from the verbal description in his dissertation alone, I was not at all certain that I was programming it properly; hence, this visualization was helpful. It shows that, when a concept is activated, the corresponding words are activated much more strongly than their competitors. The same is true, to a lesser extent, for the phoneme layer – the target phonemes are more activated than their peers, but not by much. However, by the time activation feeds down to the feature layer, it seems to have been so ‘diluted’ that the target features are no more activated than the non-target features. Hence, the model is not quite behaving as desired and deserves further scrutiny. Maybe too much noise is added to each node’s activation at each time step, and/or maybe some form of intra-layer competition should be added. This visualization was built with Python’s matplotlib and NetworkX.
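The dilution effect can be reproduced in a minimal sketch (a toy network with hypothetical weights and decay, not the actual model or Graff’s): activation spreads down through concept → word → phoneme → feature layers, feeds back between adjacent layers (never to concepts), and words laterally inhibit one another. Each hop multiplies activation by a small connection weight, so lower layers receive progressively weaker signals.

```python
# Toy four-layer network: c = concepts, w = words, p = phonemes,
# f = features. Edges run between adjacent layers; activation flows
# both ways along them, except that nothing feeds back to concepts.
edges = [
    ("c0", "w0"), ("c1", "w1"),        # concepts -> words
    ("w0", "p0"), ("w0", "p1"),        # words -> phonemes
    ("w1", "p1"), ("w1", "p2"),
    ("p0", "f0"), ("p0", "f1"),        # phonemes -> features
    ("p1", "f1"), ("p1", "f2"),
    ("p2", "f2"), ("p2", "f3"),
]
nodes = sorted({n for e in edges for n in e})
act = {n: 0.0 for n in nodes}
EXCITE, INHIBIT, DECAY = 0.1, 0.05, 0.4  # hypothetical parameters

for step in range(20):
    act["c0"] = 1.0  # clamp the target concept on
    new = {}
    for n in nodes:
        total = act[n] * (1 - DECAY)
        for a, b in edges:
            if b == n:
                total += EXCITE * act[a]        # feed-forward input
            elif a == n and not n.startswith("c"):
                total += EXCITE * act[b]        # feedback, never to concepts
        if n.startswith("w"):                   # lateral inhibition of words
            total -= INHIBIT * sum(act[m] for m in nodes
                                   if m.startswith("w") and m != n)
        new[n] = max(0.0, total)
    act = new

print(f"word {act['w0']:.3f}  phoneme {act['p0']:.3f}  feature {act['f0']:.3f}")
```

Even in this toy version, the target word clearly beats its competitor, while each layer down receives roughly a tenth of the activation above it, echoing the observation that target features end up barely distinguishable from non-targets.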