How Healthy is your Enterprise Social Network?

At the heart of any Enterprise Social Network (ESN) are the groups or communities formed within them. Understanding the health and productivity of these groups should therefore be front of mind. For ESNs we can look again to the more mature experiences with consumer and external customer communities for guidance. We have written previously about the need to take care when translating consumer network metrics to the Enterprise. But in the case of community health, we believe the mapping from external community to internal community can be fairly close.

What can we learn from consumer and customer networks?

Arguably the gold standard for community health measures was published several years ago by Lithium, a company that specialises in customer-facing communities. Lithium used aggregate data from a decade’s worth of community activity (15 billion actions and 6 million users) to identify key measures of a community’s health:

  • Growth = Members (registrations)
  • Usefulness = Content (post and page views)
  • Popularity = Traffic (visits)
  • Responsiveness = speed with which community members respond to each other
  • Interactivity = Topic Interaction (depth of discussion threads, taking into account the number of contributors)
  • Liveliness = tracking a critical threshold of posting activity in any given area
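As an illustration of how measures like these can be derived from raw activity data, here is a minimal sketch in Python. The log format, field names and the liveliness threshold are all invented for illustration; Lithium’s actual formulas are not public.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical activity log; the field names are assumptions.
posts = [
    {"id": 1, "author": "ana", "thread": "t1", "ts": datetime(2017, 3, 1, 9, 0), "reply_to": None},
    {"id": 2, "author": "ben", "thread": "t1", "ts": datetime(2017, 3, 1, 9, 40), "reply_to": 1},
    {"id": 3, "author": "cal", "thread": "t1", "ts": datetime(2017, 3, 1, 11, 0), "reply_to": 2},
    {"id": 4, "author": "ana", "thread": "t2", "ts": datetime(2017, 3, 2, 10, 0), "reply_to": None},
]
by_id = {p["id"]: p for p in posts}

# Responsiveness: median time for a post to attract its reply.
delays = [p["ts"] - by_id[p["reply_to"]]["ts"] for p in posts if p["reply_to"]]
responsiveness = median(delays)

# Interactivity: thread depth and number of distinct contributors per topic.
threads = {}
for p in posts:
    threads.setdefault(p["thread"], []).append(p)
interactivity = {t: (len(ps), len({p["author"] for p in ps})) for t, ps in threads.items()}

# Liveliness: does recent posting clear a critical threshold?
now = datetime(2017, 3, 3)
recent = sum(1 for p in posts if now - p["ts"] <= timedelta(days=7))
lively = recent >= 3  # the threshold of 3 posts/week is purely illustrative
```

Even this toy log yields a responsiveness of one hour and shows thread “t1” as the more interactive topic (3 posts from 3 contributors).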

[Image: march-blog-1]

[Image: march-blog-2]

At the time of publishing, Lithium was hoping to facilitate the creation of an industry standard for measuring community health.

Other contributors to the measurement of online community health include online community consultancy Feverbee, whose preferred measures are:

  • New visitors – a growth measure
  • New visitors converting to new registered members – a conversion-rate measure
  • % of members who make a contribution – active participants
  • Members active within the past 30 days – a time-based activity measure
  • Contributions per active member per month – a diversity and intensity measure
  • Visits per active member per month – a traffic measure
  • Content popularity – useful content
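Feverbee’s ratio-style measures fall out of simple arithmetic over membership records. A sketch, with invented member data and field names:

```python
from datetime import datetime, timedelta

# Hypothetical member records; the field names and values are assumptions.
members = [
    {"name": "ana", "contributions": 5, "last_active": datetime(2017, 3, 20)},
    {"name": "ben", "contributions": 0, "last_active": datetime(2017, 1, 2)},
    {"name": "cal", "contributions": 2, "last_active": datetime(2017, 3, 25)},
]
new_visitors = 40  # would come from web analytics in practice

# New visitors -> registered members (conversion-rate measure).
conversion_rate = len(members) / new_visitors

# % of members who make a contribution (active participants).
contributors = [m for m in members if m["contributions"] > 0]
pct_contributing = len(contributors) / len(members)

# Members active within the past 30 days, and contributions per active member.
now = datetime(2017, 3, 28)
active_30d = [m for m in members if now - m["last_active"] <= timedelta(days=30)]
contribs_per_active = sum(m["contributions"] for m in active_30d) / len(active_30d)
```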

Marketing firm Digital Marketer recommends health measures that include:

  • Measuring the total number of active members, rather than including passive members.
  • Number of members who made their first contribution as a proxy for growth.
  • A sense of community (using traditional survey methods).
  • Retention of active members i.e. minimal loss of active members (churn rate).
  • Diversity of membership, especially with respect to innovation communities.
  • Maturity, with reference to the Community Roundtable Maturity Model.

Using SWOOP for Assessing Enterprise Community/Group Health

SWOOP is focused on the enterprise market and is therefore very interested in what can usefully be drawn from the experience of online consumer and customer networks. The following summarises the measures identified above and how SWOOP currently addresses them (or does not):

  • Growth in membership: SWOOP measures active membership and provides a trend chart to monitor both growth and decline.
  • Useful content: Provides a most engaging posts widget to assess the usefulness of content posted. We are currently developing a sentiment assessment for content.
  • Popularity/traffic: SWOOP does not currently measure views or reads. Our focus is more on the connections that may result from content viewing.
  • Responsiveness: Has a response rate widget that identifies the overall response rate, the type of response (e.g. like, reply) and the time period within which responses are made.
  • Interactivity: Has several rich measures for interactivity, including network connectivity and a network map, give-receive balance and two-way connections. The Topic tab also identifies interactivity around tagged topics.
  • Liveliness: The activity per user widget provides the closest thing to a liveliness (or lack of liveliness) indicator.
  • Activity over time: The Active Users and Activity per User widgets report on this measure.
  • Contributions per member: The Activity per User widget provides this. The new Community Health Index provides a 12-month history, as well as alarms when certain thresholds are breached.
  • Sense of community: Requires a survey, which is outside the scope of SWOOP.
  • Retention: Not currently measured directly. The active members trend chart gives a sense of retention but does not specifically measure individual retention rates.
  • Diversity: Not provided on the SWOOP dashboard, but now included in the SWOOP benchmarking service. Diversity can be measured across several dimensions, depending on the profile data provided to SWOOP, e.g. formal lines of business, geography, gender. In the absence of profile data, diversity is measured by the diversity of individual membership of groups.
  • Maturity: The Community Roundtable maturity assessment is a generic one for both online and offline communities. Our preference is to use a maturity framework more aligned to ESNs, which we have reported on earlier. How the SWOOP measures relate to this maturity curve is shown below.

[Image: march-blog-3]

Thresholds for What’s Good, Not So Good and Bad

We know that health measures are important, but they are of little use without providing some sense of what a good, bad or neutral score is. In the human health scenario, it is easy to find out what these thresholds are for basic health measures like BMI and Blood Pressure. This is because the medical research community has been able to access masses of data to correlate with actual health outcomes, to determine these thresholds with some degree of confidence. Online communities have yet to reach such a level of maturity, but the same ‘big data’ approach for determining health thresholds still applies.

As noted earlier, Lithium has gone furthest in achieving this, using the large data sets available to it on its customer platform. At SWOOP we are collecting similar data for ESNs, though not yet at the scale Lithium has achieved. Nevertheless, we believe we now have a starting point with our new Community Health Index widget. While it uses only a single ‘activity per active user’ measure, we have been able to establish some initial thresholds by analysing hundreds of groups across several Yammer installations.
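A threshold-based widget of this kind reduces to a simple banding function. The band boundaries below are illustrative placeholders, not SWOOP’s actual thresholds:

```python
# Map a group's monthly activity-per-active-user on to a traffic-light band.
# The cut-offs (4.0 and 2.0) are invented for illustration only.
def health_band(activity_per_active_user: float) -> str:
    if activity_per_active_user >= 4.0:
        return "good"
    if activity_per_active_user >= 2.0:
        return "neutral"
    return "poor"

# An early-warning check over a group's recent history (most recent last).
history = [3.1, 2.4, 1.8, 1.2]
if health_band(history[-1]) == "poor":
    print("alert: group may need attention")
```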

[Image: march-blog-4]

Our intent is to provide community/group leaders with an early warning system for when their groups may require some added attention. The effects of this attention can then be monitored in the widget itself, or more comprehensively through the suite of SWOOP measures identified in the table above.

Communities are the core value drivers of any ESN. Healthy enterprise communities lead to healthy businesses, so it’s worth taking the trouble to actively monitor their health.

What Can We Learn from Artificial Intelligence?

It might seem strange to suggest that a science dedicated to learning from how we humans operate could return the favour by teaching us about ourselves. Yet that is precisely what I am suggesting.

Having spent a good deal of my early career in the “first wave of AI”, I developed a healthy scepticism about many of the capability claims made for AI. From the decade or more I spent as an AI researcher and developer, I concluded that AI worked best when the domains of endeavour were contained within discrete, well-bounded ‘solution spaces’. In other words, despite the sophistication of the mathematical techniques developed for dealing with uncertainty, AI was simply not that good in the “grey” areas.

AI’s Second Wave

[Image: AlphaGo]

The “second wave of AI” received a big boost when Google’s DeepMind managed to up the ante on IBM’s chess-playing Deep Blue by defeating world Go champion Lee Sedol. According to DeepMind founder and CEO Demis Hassabis, the success of their program AlphaGo could be attributed to the deeper learning capabilities built into the program, as opposed to Deep Blue’s largely brute-force searching approach. Hassabis emphasises the ‘greyness’ of the game of Go compared with chess. For those familiar with this ancient Chinese game, unlike chess it has almost a spiritual dimension. I can vividly recall a research colleague of mine, who happened to be a Go master, teaching a novice colleague the game in a lunchtime session and chastising him for what he called a “disrespectful move”. So AlphaGo’s success is indeed a leap forward for AI in conquering “grey”.

So what is this “deep learning” all about? You can certainly get tied up in a lot of academic rhetoric if you Google this, but for me it’s simply about learning from examples. The two critical requirements are the availability of lots of examples to learn from, and the development of what we call an “evaluation function”, i.e. something that can assess and rate an action we are considering taking. The ‘secret sauce’ in AlphaGo is definitely the evaluation function. It has to be sophisticated enough to be able to look many moves ahead and assess many competitive scenarios before evaluating its own next move. But this evaluation function, which takes the form of a neural network, has the benefit of being trained on thousands of examples drawn from online Go gaming sites, where the final result is known.

Deep Learning in Business

[Image: books]

We can see many similarities to this context in business. For example, the legal profession is founded on precedents, where there are libraries of cases available for which the final result is known. Our business schools regularly educate their students by working through case studies and connecting them to the underlying theories. Business improvement programs are founded on prior experience or business cases from which to learn. AI researchers have taken a lead from this and built machine learning techniques into their algorithms. An early technique that we had some success with is called “Case-Based Reasoning”. Using this approach, it wasn’t necessary to articulate all the possible solution paths, which in most business scenarios is infeasible. All we needed was a sufficient store of prior cases to search through, to find those that best matched the current context, leaving the human user to fill any gaps.
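To make the idea concrete, here is a minimal case-based reasoning sketch: rather than enumerating solution paths, it retrieves the stored cases most similar to the current context and leaves adaptation to the human. The cases, features and scoring are all invented for illustration.

```python
# A tiny case library: (context, outcome) pairs. Entirely hypothetical.
cases = [
    ({"industry": "retail", "size": 200, "problem": "churn"}, "loyalty programme"),
    ({"industry": "retail", "size": 5000, "problem": "churn"}, "pricing review"),
    ({"industry": "banking", "size": 5000, "problem": "fraud"}, "rule-based screening"),
]

def similarity(a, b):
    """Crude feature-match score: one point per matching categorical feature."""
    return sum(1 for k in ("industry", "problem") if a[k] == b[k])

def retrieve(query, k=2):
    """Return the k stored cases most similar to the query context."""
    return sorted(cases, key=lambda c: similarity(query, c[0]), reverse=True)[:k]

# The human user adapts the retrieved precedents to fill any gaps.
query = {"industry": "retail", "size": 300, "problem": "churn"}
best = retrieve(query)[0]
```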

The Student Becomes the Teacher

Now back to my question: what can AI teach us about ourselves? Perhaps the most vivid learnings are contained in the reflections of the Go champions that AlphaGo defeated. The common theme was that AlphaGo made many unconventional moves that only appeared sensible in hindsight. Lee Sedol described his personal learning from his 4-1 defeat by AlphaGo in these comments: “My thoughts have become more flexible after the game with AlphaGo, I have a lot of ideas, so I expect good results” and “I decided to more accurately predict the next move instead of depending on my intuition”. So the teacher has now become the student!

It is common for us as human beings to be subject to unconscious bias. We see something promoted as a “best practice”, perhaps reinforced by a select few of our own personal experiences, and are then willing to swear by it as the “right” thing to do. We forget that there may be hundreds or even thousands of contrary cases that could prove us wrong, yet we stubbornly stick to our original theses. Computers don’t suffer from these very human traits. What’s more, they have the patience to trawl through thousands of cases to fine-tune their learnings. So, in summary, what can we learn from AI?

  • Remember that a handful of cases is not a justification for developing hard and fast rules;
  • Before you discount a ‘left field’ suggestion, try to understand the experience base that it is coming from. Do they have experiences and insights that are beyond those of your own close network?
  • Don’t be afraid to “push the envelope” on your own decision making, but be sure to treat each result, good or bad, as contributing to your own growing expertise; and
  • Push yourself to work in increasingly greyer areas. Despite the success of AlphaGo, it is still a game, with artificial rules and boundaries. Humans are still better at doing the grey stuff!

Can Collaboration Personas Work with Sports Teams?


Professional sport these days is rife with in-depth analyses and statistics on player and team performance. Players are now often equipped with wearable devices that monitor their health and fitness by the minute. Increased betting on sport has added a whole new dimension to the demand for predictive analytics and anything that might assist punters in predicting the result of a game.

What makes sport such an attraction to a large percentage of the world’s population is that, despite the science now being brought to sport, there is still significant uncertainty in the results. We all applaud the times when the ‘team of champions’ is upset by the underdog ‘champion team’. Who can forget the US amateur ice hockey team overcoming the all-conquering Russians at the 1980 Winter Olympic Games? Equally memorable is the failure of the all-conquering US basketball ‘Dream Team’ at the Athens 2004 Olympics. The search for that ‘X-factor’ that drives the champion team to overcome the odds is the modern coach’s dream. In this post we will explore an area of sports analytics that is largely under-exploited.

For the novice sports punter, the first port of call for team intelligence is the player profiles. The unwritten inference is that if you are well informed about the players and their individual strengths and weaknesses, you will be able to predict team performance well. For example, if we go to the FIFA statistical support site for the 2014 World Cup, this is what we find:

[Image: FIFA player statistics table]

Again, the majority of the statistics profile individual player performance: how many minutes they played, goals scored, passes made, free kicks taken, tackles made, even which parts of the field the player occupied.

Incongruous, however, is that since football is a team game, why is there so little recorded about how the players collaborate with each other on the field? We regularly see the NBA coach using a small whiteboard to identify the passing structure wanted. I had to dig into the FIFA data to find some evidence of passing records showing how the players interacted with each other, i.e. connection data. I found it hidden away in the ‘Passing Distribution’ statistics. So what might this largely overlooked data provide us with? Can the network data provide us with the missing intelligence needed to predict that ‘x-factor’ that successful teams are blessed with?

Our analysis technique of choice is social network analysis (SNA). Traditionally, SNA is used to identify relationship networks in communities or large enterprises. Its application to sport is novel but not unprecedented, as this academic study shows. The study used FIFA 2010 World Cup statistical data and traditional SNA centrality scores to assess team performance. We decided to build on this by using similar data from the FIFA 2014 World Cup site for the game between eventual champions Germany and Portugal. We chose this game because Germany were convincing winners, so there would be a greater chance of our analyses identifying an ‘x-factor’ difference. Rather than use traditional SNA centrality scores, we decided to use the behavioural SWOOP personas that we designed to characterise the collaboration behaviours of staff participating in enterprise social networking (ESN) platforms. The five personas are Engager (Linking), Catalyst (Energizing), Responder (Supporting), Broadcaster (Telling) and Observer (Watching), and we felt they could be mapped to the following behavioural archetypes that we might see on the football field:

  • Engager: roughly equal numbers of passes received and completed passes made – a central connector linking plays.
  • Catalyst: receives more passes than completed passes made – someone who wants the ball and pushes the team forward.
  • Responder: completes more passes than they receive (assumes they make more intercepts) – a good support player who cleans up the plays.
  • Broadcaster: completes more passes than they receive (assumes they take free kicks and corners) – takes the big kicks but does not back up or intercept much.
  • Observer: a low level of participation – usually a bench player; if on the field, does not get involved much.

Our SWOOP personas are classified according to the posting patterns of ESN participants. The order in which they are shown above is also what we believe is the order of most positive impact on collaboration performance. For example, an Engager is able to balance the number of posts, replies and likes they make with those they receive; we see the Engager as the strongest persona for collaboration. A Catalyst might be the target for many passes; they may take more risks in pushing the ball forward, so more passes might go astray, leaving them with an excess of passes received over successful passes completed. A Responder will make more passes than they receive, perhaps because in their ‘cleaning up’ work they intercept more passes from the opposition, leaving them with an excess of passes made over passes received from a teammate. A Broadcaster also has an excess of passes made over passes received, but perhaps their passes come more from fixed-ball situations like free kicks or corner kicks, rather than intercepts. Finally, the Observer characterises someone who really isn’t in the game that much.
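The mapping described above can be sketched as a simple classifier over pass counts. The 10% balance tolerance, the participation cut-off and the pass counts are our own illustrative assumptions, not SWOOP’s published rules; note that distinguishing a Responder from a Broadcaster needs extra set-piece data, so the sketch leaves those two combined.

```python
def persona(passes_made: int, passes_received: int, min_touches: int = 15) -> str:
    """Assign a behavioural persona from a player's pass counts (illustrative rules)."""
    total = passes_made + passes_received
    if total < min_touches:          # low participation -> Observer
        return "Observer"
    if abs(passes_made - passes_received) <= 0.1 * total:  # balanced -> Engager
        return "Engager"
    if passes_received > passes_made:  # net receiver -> Catalyst
        return "Catalyst"
    # Net passer: separating Responder (intercepts) from Broadcaster
    # (free kicks, corners) would need set-piece data we don't have here.
    return "Responder or Broadcaster"

# Invented (passes_made, passes_received) counts, not the real match data.
players = {"LAHM": (70, 68), "RONALDO": (40, 55), "MERTESACKER": (2, 1)}
labels = {name: persona(made, received) for name, (made, received) in players.items()}
```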

With these characterisations in mind, we took the passing distribution data from the Germany-Portugal match into our SWOOP SNA analysis:

[Image: passing distribution matrices]

The passing distribution shows the number of times a pass has gone from one player to another. The network is therefore directional, as shown in the above matrices. The number of passes between two players can indicate the strength of the connection between those players. We can represent these passing patterns in a social network diagram (sociogram):
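Turning a passing-distribution matrix into a directed, weighted network needs only a dictionary of (passer, receiver) counts. A sketch with invented counts, using just the standard library:

```python
from collections import defaultdict

# (passer, receiver) -> completed passes. These counts are invented,
# not the actual FIFA 2014 passing-distribution data.
pass_counts = {
    ("LAHM", "KROOS"): 12,
    ("KROOS", "LAHM"): 9,
    ("KROOS", "OEZIL"): 7,
}

# Per-player totals in each direction (the directed network's degree weights).
out_passes = defaultdict(int)
in_passes = defaultdict(int)
for (src, dst), n in pass_counts.items():
    out_passes[src] += n
    in_passes[dst] += n

def tie_strength(a: str, b: str) -> int:
    """Undirected tie strength between two players: passes in both directions."""
    return pass_counts.get((a, b), 0) + pass_counts.get((b, a), 0)
```

A sociogram layout would then draw edge thickness from `tie_strength` and pull strongly connected players closer together.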

[Sociogram: Germany and Portugal passing networks]

The thicker lines relate to the number of passes. The layout algorithm clusters more frequent connectors closer together physically. Qualitatively, the sociogram does appear to show Germany as a tighter outfit, in terms of their passing patterns, than Portugal. However, we need to look at the quantitative data to be sure of any marked differences:

Germany (player, minutes played, persona):
  • NEUER – 94 – Responder
  • HOEWEDES – 94 – Responder
  • HUMMELS – 74 – Catalyst
  • KHEDIRA – 94 – Engager
  • OEZIL – 64 – Engager
  • MUELLER – 83 – Catalyst
  • LAHM – 94 – Engager
  • MERTESACKER – 94 – Engager
  • KROOS – 94 – Catalyst
  • GOETZE – 94 – Engager
  • BOATENG – 94 – Broadcaster
  • SCHUERRLE – 29 – Catalyst
  • PODOLSKI – 10 – Engager
  • MUSTAFI – 19 – Responder

Portugal (player, minutes played, persona):
  • PATRICIO – 94 – Responder
  • ALVES – 94 – Responder
  • PEPE – 36 – Responder
  • VELOSO – 47 – Catalyst
  • COENTRAO – 66 – Responder
  • RONALDO – 94 – Catalyst
  • MOUTINHO – 94 – Catalyst
  • ALMEIDA – 27 – Catalyst
  • MEIRELES – 94 – Responder
  • NANI – 94 – Broadcaster
  • PEREIRA – 94 – Catalyst
  • EDER – 66 – Catalyst
  • COSTA – 47 – Catalyst
  • ALMEIDA – 27 – Engager

We can see that the tighter passing patterns of the German team are confirmed by the higher number of Engager personas (6 vs 1), and even then the sole Portuguese Engager was a substitute playing the fewest minutes. The Catalyst persona is the next most valued in our view, and on this dimension Portugal has 7 to Germany’s 4, suggesting that Portugal played a more expansive, yet riskier, pattern of play. The actual result was a 4-nil win to Germany.

We also wanted to do a similar analysis for the World Cup final between Germany and Argentina:

Germany (player, minutes played, persona):
  • NEUER – 129 – Responder
  • HOEWEDES – 129 – Broadcaster
  • HUMMELS – 129 – Broadcaster
  • SCHWEINSTEIGER – 129 – Broadcaster
  • OEZIL – 124 – Engager
  • KLOSE – 89 – Catalyst
  • MUELLER – 129 – Engager
  • LAHM – 129 – Engager
  • KROOS – 129 – Engager
  • BOATENG – 129 – Engager
  • KRAMER – 30 – Responder
  • SCHUERRLE – 98 – Broadcaster
  • MERTESACKER – 4 – Observer
  • GOETZE – 39 – Catalyst

Argentina (player, minutes played, persona):
  • ROMERO – 129 – Broadcaster
  • GARAY – 129 – Responder
  • ZABALETA – 129 – Broadcaster
  • BIGLIA – 129 – Engager
  • PEREZ – 87 – Catalyst
  • HIGUAIN – 79 – Engager
  • MESSI – 129 – Broadcaster
  • MASCHERANO – 129 – Broadcaster
  • DEMICHELIS – 129 – Catalyst
  • ROJO – 129 – Broadcaster
  • LAVEZZI – 47 – Engager
  • GAGO – 41 – Catalyst
  • PALACIO – 49 – Engager
  • AGUERO – 82 – Catalyst

In contrast to the Germany-Portugal game, the Engager count was much closer (5-4), though two of Argentina’s Engagers were substitutes playing fewer minutes. The score was a very narrow 1-nil win to Germany in extra time. Compared with the previous game, there were also more Broadcasters on both sides. We surmised that Broadcasters may start play from fixed-ball positions, i.e. they make more passes than they receive; perhaps this reflects the stop-start nature of the final. Overall, though, there is some evidence that team success might be predictable using relationship-derived personas.

While we find the results interesting and intriguing, for us this analysis is a fun diversion, so we are careful not to claim too much in terms of groundbreaking research. That said, we are looking to apply our online personas in contexts beyond the online social networking field, and we think this analysis qualifies.

We close this article with some food for thought:

  • How much are sports teams really like work teams? There are defined roles and expectations in both. Sports teams however have clearer success criteria.
  • How much is the persona related to the role in the team versus the individual playing style?
  • How much might the personas change based on the context of the game and game specific tactics i.e. both in sport and work teams, how adaptable can the members be from their ‘preferred’ behaviour persona?
  • And the big question: can relationship analytics predict the x-factor in team success, independent of player-specific profile information?

Of course, much more research would need to be done. But we are happy to have provided another example of how collaborative behaviours can span many contexts, not just online ones.

Learn more about SWOOP: www.swoopanalytics.com