I think that next year, 90-minute workshop sessions running from 9 to 5 (starting at 9, 11, 1:30, and 3:30) would be an improvement. This is a lot of network info to absorb.
The morning began with IU’s Alex Vespignani, who went first because he had to catch a flight later that morning. Much of the content I had already heard through his Simplicity of Complexity class last year. There was also some unavoidable math, which I’m beginning to realize I either have to re-conquer or farm out to others. Alex talked mainly about the challenges of mapping networks the size of the Internet, and how some measures can look the same while hiding significant differences. The two biggest nuggets of new information were an explanation of the difference between Assortative networks (where nodes tend to connect with nodes of similar degree) and Disassortative ones (where high-degree nodes tend to connect with low-degree nodes), and a new way of measuring centrality called K-core decomposition (which peels away a network layer by layer according to the degree of each node). Alex also introduced the idea of a “stifler” in epidemic studies: someone who stops spreading a piece of news after encountering enough people who have already heard it.
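To keep the K-core idea straight in my head, here is a minimal sketch of the peeling process (my own toy code, not anything Alex showed): keep removing nodes whose remaining degree falls below k until everything left has degree at least k.

```python
# Minimal sketch of k-core peeling (toy code, not from the talk): repeatedly
# remove every node whose remaining degree is below k, since each removal can
# drag other nodes under the threshold. The graph is a plain dict of
# node -> set of neighbors (undirected).

def k_core(adj, k):
    """Return the set of nodes that survive the k-core peeling."""
    adj = {node: set(nbrs) for node, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if len(adj[node]) < k:
                for nbr in adj[node]:   # detach the node from its neighbors...
                    adj[nbr].discard(node)
                del adj[node]           # ...then peel it away
                changed = True
    return set(adj)

# Toy example: a triangle (a 2-core) with one pendant node hanging off it.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(k_core(graph, 2))  # {'a', 'b', 'c'} -- the pendant node "d" is peeled
```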
As I’m trying to view all of this in terms of RCT’s concept of mutuality, I’m looking at K-Core as a possible way to identify high mutuality in network components. I also wonder if Assortative = Bonding and Disassortative = Bridging in some way. Questions keep piling up that will only be answered after learning a lot more about terminology (and probably math) in network science, but I can’t help but get excited about the high-level similarities in some of the concepts arising from different domains.
Mark Newman has, so far, been the best of the lecturers. He picked up many of the themes from Alex’s talk and went into greater detail about processes on networks (mostly epidemiology). Not only did he do a great job explaining all of this information, but he used a whiteboard to walk through the math slowly enough that I think I got it. I’m not anxious to do any proofs in the future, but I did remain engaged for the full two hours. Among the interesting tidbits was the notion that a randomly chosen node is just average, but a randomly chosen edge tends to lead to a well-connected node (any given edge is far more likely to be attached to a high-degree node than to a low-degree one).
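That edge bias is easy to sanity-check with a quick simulation (my own toy example, nothing from Mark’s talk): compare the degree of a uniformly random node with the degree of a node reached by following a random edge.

```python
# Toy sanity check (my example, not Mark's): with a skewed degree distribution,
# the endpoint of a randomly chosen edge has a much higher expected degree than
# a uniformly chosen node.
import random

random.seed(0)

# One hub wired to 100 leaves, plus a ring among the leaves.
edges = [(0, i) for i in range(1, 101)]
edges += [(i, i % 100 + 1) for i in range(1, 101)]

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# Average degree of a uniformly random node.
avg_node = sum(degree.values()) / len(degree)

# Average degree of a random endpoint of a randomly chosen edge.
ends = [degree[random.choice(random.choice(edges))] for _ in range(10_000)]
avg_edge_end = sum(ends) / len(ends)

print(round(avg_node, 1))      # ~4: almost every node is a low-degree leaf
print(round(avg_edge_end, 1))  # ~27: many sampled endpoints are the 100-degree hub
```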
I think Mark’s success was in part because of the info in Alex’s session. Lots of mutuality connections for me here:
- “Superspreaders” — These are people who expose many other people to diseases and can keep the spread of a disease alive when it otherwise would have died out. I wonder about whether the quality of the connections plays a role. I also wonder if this conceptualization can be used to predict successful starts and sustained interaction of a new online community.
- Assortative/Disassortative — The former kind of network (with a central core) makes it easier for a disease to spread, but fewer people end up infected; the latter can hit more people but has a harder time reaching the epidemic threshold. I wonder how this maps to the size of an online forum and the possible use of federated groups within a large community.
- Any given friend has more friends than you do — this is an on-average kind of thing, but it’s what Mark took so much time proving on the whiteboard (see the short derivation after this list). Assuming for the moment that all relationships involve mutuality (they don’t), this might imply that your friends are always better supported than you are. Maybe that is something that gives them the strength to help you out when you need it.
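For my own notes, the core of Mark’s argument as I reconstruct it (so any mistakes here are mine, not his): if $p_k$ is the fraction of nodes with degree $k$, then a “friend” is a node reached by following an edge, and an edge lands on a degree-$k$ node with probability proportional to $k p_k$. The mean degree of a friend is therefore

$$
\langle k \rangle_{\text{friend}}
= \frac{\sum_k k \,(k\,p_k)}{\sum_k k\,p_k}
= \frac{\langle k^2 \rangle}{\langle k \rangle}
= \langle k \rangle + \frac{\sigma_k^2}{\langle k \rangle}
\;\ge\; \langle k \rangle ,
$$

with equality only when everyone has exactly the same degree (zero variance), which no real social network comes close to.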
It is clear that the worst session slot of the day is the one right after lunch, and both times the content was about 90% math. Zoltán Toroczkai got that draw Wednesday. I took a phone call at the mid-session break and let it serve as my excuse not to go back for the second half. Before I ditched, though, the key bit was that the meaningful interval between snapshots when measuring a dynamic network depends on how much time matters to the goods traveling through the network (the flow).
The session I was really holding out for, though, was the one on social capital by Jeff Johnson. I’ll have to go to the NetSci Blog to get his slides, because they are jam-packed with important references and information about the two studies he shared. One was on the networks of small groups living in Antarctica, and the other was on political networks following the passage of some state legislation in one of the Carolinas.
The former, in particular, was a very HCI/d-ish approach to research. He established trust with his participants by working on a refueling team at the U.S. base at the South Pole (Jeff arrived in -67 degree weather). After meeting some resistance, he changed the nature of the survey, opting for a kind of participatory design to correct the mistakes in its methods. He established role categories through qualitative interviews with past Antarctic crews. Really, absent some kind of design conclusion at the end, the entire Antarctic study came across like one of our Informatics capstones. This is a guy I’ll definitely be following.
I see math on the docket for Thursday morning, so I’m heading in late with Justin. The two afternoon sessions, from Noshir Contractor and Reka Albert, are high on my list, so I’ll be well rested for them.