How Healthy is your Enterprise Social Network?

At the heart of any Enterprise Social Network (ESN) are the groups or communities formed within them. Understanding the health and productivity of these groups should therefore be front of mind. For ESNs we can look again to the more mature experiences with consumer and external customer communities for guidance. We have written previously about the need to take care when translating consumer network metrics to the Enterprise. But in the case of community health, we believe the mapping from external community to internal community can be fairly close.

What can we learn from consumer and customer networks?

Arguably the gold standard for community health measures was published several years ago by Lithium, a company that specialises in customer facing communities. Lithium used aggregate data from a decade’s worth of community activity (15 billion actions and 6 million users) to identify key measures of a community’s health:

  • Growth = Members (registrations)
  • Useful = Content (post and page views)
  • Popular = Traffic (visits)
  • Responsiveness = Speed with which community members respond to each other
  • Interactivity = Topic interaction (depth of discussion threads, taking into account the number of contributors)
  • Liveliness = A critical threshold of posting activity in any given area



At the time of publishing, Lithium was hoping to facilitate the creation of an industry standard for measuring community health.

Other contributors to the measurement of online community health include the online community consultancy Feverbee, whose preferred measures are:

  • New visitors – a growth measure
  • New visitors converting to registered members – a conversion rate measure
  • % of members who make a contribution – active participants
  • Members active within the past 30 days – a time-based activity measure
  • Contributions per active member per month – a diversity and intensity measure
  • Visits per active member per month – a traffic measure
  • Content popularity – useful content

Marketing firm Digital Marketer recommends health measures including:

  • Measuring the total number of active members, rather than including passive members.
  • Number of members who made their first contribution as a proxy for growth.
  • A sense of community (using traditional survey methods).
  • Retention of active members i.e. minimal loss of active members (churn rate).
  • Diversity of membership, especially with respect to innovation communities.
  • Maturity, with reference to the Community Roundtable Maturity Model.

Using SWOOP for Assessing Enterprise Community/Group Health

SWOOP is focused on the Enterprise market and is therefore very interested in what we can usefully draw from the experiences of online consumer and customer networks. The following summarises the measures identified above and how SWOOP currently addresses each:

  • Growth in membership – Measures active membership and provides a trend chart to monitor both growth and decline.
  • Useful content – Provides a most engaging posts widget to assess the usefulness of content posted. We are currently developing a sentiment assessment for content.
  • Popularity/traffic – SWOOP does not currently measure views or reads; our focus is on the connections that may result from content viewing.
  • Responsiveness – Has a response rate widget that identifies the overall response rate, the type of response (e.g. like, reply) and the time period within which responses are made.
  • Interactivity – Has several rich measures for interactivity, including network connectivity and a network map, give–receive balance and two-way connections. The Topic tab also identifies interactivity around tagged topics.
  • Liveliness – The activity per user widget provides the closest to a liveliness (or lack of liveliness) indicator.
  • Activity over time – The Active Users and Activity per User widgets report on this measure.
  • Contributions per member – The Activity per User widget provides this. The new Community Health Index provides a 12-month history as well as alarms when certain thresholds are breached.
  • Sense of community – Requires a survey, which is outside the scope of SWOOP.
  • Retention – Not currently measured directly. The active members trend chart gives a sense of retention, but does not specifically measure individual retention rates.
  • Diversity – Not provided on the SWOOP dashboard, but now included in the SWOOP benchmarking service. Diversity can be measured across several dimensions, depending on the profile data provided to SWOOP (e.g. formal lines of business, geography, gender). In the absence of profile data, diversity is measured by the diversity of individual membership of groups.
  • Maturity – The Community Roundtable maturity assessment is a generic one for both online and offline communities. Our preference is a maturity framework more aligned to ESNs, which we have reported on earlier. How the SWOOP measures can be related to this maturity curve is shown below.


Thresholds for What’s Good, Not So Good and Bad

We know that health measures are important, but they are of little use without providing some sense of what a good, bad or neutral score is. In the human health scenario, it is easy to find out what these thresholds are for basic health measures like BMI and Blood Pressure. This is because the medical research community has been able to access masses of data to correlate with actual health outcomes, to determine these thresholds with some degree of confidence. Online communities have yet to reach such a level of maturity, but the same ‘big data’ approach for determining health thresholds still applies.

As noted earlier, Lithium has gone furthest in achieving this, from the large data sets that they have available to them on their customer platform. At SWOOP we are also collecting similar data for ESNs but as yet, not to the level that Lithium has been able to achieve. Nevertheless, we believe we have achieved a starting point now with our new Community Health Index Widget. While we are only using a single ‘activity per active user’ measure, we have been able to establish some initial thresholds by analysing hundreds of groups across several Yammer installations.
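To make the idea concrete, a threshold-based health index over a single ‘activity per active user’ measure could be sketched as below. The band values and labels here are invented for illustration only; SWOOP’s actual thresholds are calibrated from its benchmark data across hundreds of groups.

```python
# Illustrative thresholds only -- not SWOOP's calibrated values.
HEALTH_BANDS = [
    (4.0, "good"),    # >= 4 activities per active user per month
    (2.0, "watch"),   # 2-4: worth keeping an eye on
    (0.0, "alarm"),   # below 2: the group may need attention
]

def health_band(activities: int, active_users: int) -> str:
    """Classify one month of group activity per active user."""
    if active_users == 0:
        return "alarm"
    rate = activities / active_users
    for threshold, label in HEALTH_BANDS:
        if rate >= threshold:
            return label
    return "alarm"

# A 12-month history of bands gives community leaders an early-warning trend.
monthly = [(200, 40), (110, 38), (50, 35)]   # (activities, active_users)
history = [health_band(a, u) for a, u in monthly]  # ['good', 'watch', 'alarm']
```

Breaching a lower band for consecutive months is the kind of event that would raise an alarm for a group leader.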


Our intent is to provide community/group leaders with an early warning system for when their groups may require some added attention. The effects of this attention can then be monitored in the widget itself, or more comprehensively through the suite of SWOOP measures identified in the table above.

Communities are the core value drivers of any ESN. Healthy enterprise communities lead to healthy businesses, so it’s worth taking the trouble to actively monitor their health.











Bridging the Knowledge Sharing/Problem Solving Divide

Working across organisational boundaries

One of the most frequently cited reasons we hear for implementing an enterprise social network platform is to “enable our organisation to better communicate and collaborate across organisational boundaries”.

The real objective is to let information and knowledge flow more freely to solve challenging business problems. This is the point where the focus changes from generic SHARING to business-focused (problem-) SOLVING:


We’ve previously introduced this maturity framework, which incorporates the four stages of Simon Terry’s model. In a recent discussion, Simon shared with us some constructive insights he has drawn from the application of his maturity model.

He indicated to us that:

“Up to SHARING, people are just engaged in social exchange. It is chat. That can be entirely internal to the ESN and not connected to the business. Beyond that point they are delivering benefits from collaborative work. Moving over that transition and understanding the behaviours beyond that point is essential.”

Simon then proceeded to describe the key things to consider in the ‘SOLVING’ stage as:

“Value chains and projects and their relationships to the silos captured in your Cross-team collaboration widget”.

In this post we therefore review the SWOOP ‘Cross-Team Collaboration’ widget and offer insights into how it can help your enterprise social adoption efforts. Together with the recently reviewed Influential People and Response Rate widgets, it supports the ‘SOLVE’ stage.


The Cross-Team collaboration widget identifies the levels of interaction between selected organisational dimensions. The most common use is to identify interactions between the formal lines of business.

Two representations are offered:

  • The matrix view shades the intersecting squares by the relative interaction levels. The diagonal represents intra-unit interactions.
  • The map view (see below) more succinctly illustrates the degree to which different units are interacting.


If you have created a cross-enterprise group, or community of practice, it will tell you the degree to which all divisions have been engaged. If you have a corporate initiative that has been launched with a topic hash tag, it will also tell you the degree of cross-divisional engagement.

In a typical hierarchy, we would anticipate that most interactions would occur inside the formal structures, or between divisions along a defined value chain e.g. marketing interactions with sales. Cross organisational groups or teams are usually formed to facilitate interactions across the formal lines of business, for example a Supply/Value chain.

The Cross-Team Collaboration widget provides a view into the degree to which these cross-organisational teams are effective. While interactions between formal departments are the most commonly explored, geographic location is also a popular dimension for examining interaction levels.

What is the Business Imperative?

It is the apparent inflexibility and poor responsiveness of the formal hierarchy that motivates many organisations to adopt enterprise social networks. Formal hierarchies are designed for efficient execution of pre-determined processes. However, CEOs are now looking for more than this. David Thodey, the former CEO of Australia’s largest Telco, summed up the sentiment by indicating that he wanted to short-circuit the entrenched communication channels. He wanted his management team to be able to have authentic conversations with staff at all levels. Similarly, we recall a statement made by a former CEO of BHP Billiton, an industrial resources conglomerate that was very process driven:

“Silos are not bad, this is how we get work done. We just need to dig some holes in the sides!” (please excuse the mining analogy)

Another of our favourite thought leaders is Heidi Gardner, a former McKinsey consultant and Harvard Business School professor now lecturing at Harvard Law School. She has spent over a decade conducting in-depth studies of numerous global professional service firms. Her research with clients and the empirical results of her studies demonstrate clearly and convincingly that collaboration pays, for both professionals and their firms. In her book Smart Collaboration, she shows that firms earn higher margins, inspire greater client loyalty, attract and retain the best talent, and gain a competitive edge when specialists collaborate across functional boundaries. The Cross-Team Collaboration widget enables you to measure if this is actually happening, and is one of the most important widgets connecting business outcomes with the adoption of your enterprise social network.

Specifically, in terms of problem solving, there will be problems that traverse business unit boundaries. For example, a customer support problem may appear to be an operations problem, but the genesis of the problem may lie with Sales or Marketing, in how a product or service was represented to the customer in the first place. Supply chain problems are, by definition, interdependent and cannot be solved by a single business unit. The Cross-Team Collaboration widget can signal whether these cross-business-unit problems are being addressed as shared problems. If a cross-business-unit problem has been hash tagged, it is also possible to use the SWOOP Topic tab to identify where the participants in the tagged problem-solving activity are coming from. Are they appropriately cross-business unit?


Bridging the ‘sharing’ to ‘solving’ divide requires a stronger focus on what the business is trying to achieve. What are the key problems or challenges that must be met? What specific collaborative interactions between the different organisational units will be required to solve them? The SWOOP Cross-Team Collaboration widget, along with the Response Rate and Influential People widgets, has been designed to help you bridge the ‘Sharing’ to ‘Solving’ divide.

This post continues our series on key SWOOP indicators.


Why we Should Worry about Response Rates in Enterprise Social Systems


This post continues our series on key SWOOP indicators. We have % Response Rate as a key performance indicator for organisations embracing problem solving and innovation within their Enterprise Social Networking (ESN) platforms. Difficult problems require deep dialogue, discussion and debate to be effectively solved. A response to a posting is hopefully the beginning of a constructive discussion, and hence an important indicator of the degree to which an organisation is predisposed to solving problems online. Our ESN benchmarking of close to 50 organisations puts the average response rate at 72%, with a large range from a low of 32% to a high of 93%.

The Response Rate widget identifies the percentage of posts that have received a written ‘reply’ and/or a ‘like’ for the period selected. It also identifies the percentage of posts that have received no response; a measure that community managers need to monitor closely. The timeliness of the response is also reported.
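The underlying calculation is straightforward. A minimal sketch follows; the field names are our own, not SWOOP’s actual schema:

```python
def response_stats(posts):
    """posts: list of dicts with 'replies' and 'likes' counts per post."""
    total = len(posts)
    responded = sum(1 for p in posts if p["replies"] > 0 or p["likes"] > 0)
    rate = responded / total if total else 0.0
    return {"response_rate": rate, "no_response_rate": 1.0 - rate}

posts = [
    {"replies": 2, "likes": 5},   # discussion started
    {"replies": 0, "likes": 1},   # acknowledged with a like only
    {"replies": 0, "likes": 0},   # unanswered -- what managers watch for
    {"replies": 1, "likes": 0},
]
stats = response_stats(posts)     # response_rate = 0.75
```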

The Response Rate widget is available at all SWOOP reporting levels, from the individual, right through to the Enterprise overall. While not all posts are framed as problems, the response rate does reflect how responsive an organisation is overall. A response is a tangible signal of value received. In the absence of specific value stories, it is the most direct measure of value being facilitated on the ESN platform.  

For the individual, a poor response rate can indicate that your postings are not framed appropriately for attracting a response. For a group, a poor response rate may indicate a lack of a critical mass of members, or inadequate community management. 

Business Imperative 

It sounds obvious, but before problems can be solved, they need to be shared. Sharing a problem can be construed as a weakness. When senior management openly share a problem, they run the risk of ‘losing face’. Isn’t solving difficult problems what they are being paid to do? Yet it is senior management who need to lead the way in generating a culture of collaborative problem solving. As David Thodey, the former CEO of Telstra, told us: ‘Management don’t know everything… we have been guilty of releasing poor policies that have taken us years to recover from’. Thodey used the ESN to share problems that required new policies, gathering feedback before finally releasing each new policy.

The first challenge therefore is to develop a culture which recognises that sharing a problem is not a weakness but a strength of character. Think about using hash tags to monitor problems posted, and their journey to a hopeful resolution. Once problems are shared freely on the ESN, the Response Rate measure can be used to track problems solved. Many online technical forums are established specifically for tracking problem resolutions. There is no reason the ESN cannot be used in a similar way.


Data-Driven Collaboration Part 3: Sustaining Performance through Continuous Value Delivery

In Part 1 of our series on Data-Driven Collaboration, “How Rich Data Can Improve Your Communication,” we identified how to plan for collaboration by ensuring that goals were established and aligned with our organizational strategy. We then moved on to Part 2, “Recognizing Personas and Behaviors to Improve Engagement,” to explain how you can build engagement by managing behaviors. In this, the final post in our series, co-authored by Swoop Analytics and Carpool Agency, we will identify how to sustain the momentum to ensure that value is continuously delivered as a matter of course.

Previously, we identified the importance of migrating from simple activity measures to those that signify when collaborative relationships are being formed. It is through these relationships that tangible outcomes are achieved. Therefore, it is not surprising that analytics—as applied to sustained relationship-building—plays an important role in continuous value delivery from collaboration.

For example, a CEO from one of Carpool’s clients had been using Yammer to receive questions for a regular Q&A session, but they’d grown concerned that the CEO’s infrequent posts in the group were creating an echo chamber among the same small group of contributors. Careful analysis showed that this was more perception than reality, and the group showed a great deal of variety in cross-organization conversation. As this was precisely the executive’s goal in forming the group, the team doubled down on their investment in this executive-to-company relationship.

Monitoring Maturation Using Analytics

At SWOOP, we have been benchmarking Yammer Installations from start-up to ‘normal operations’ for some time. With Yammer, the typical pattern of start-up is a bottom-up use of ‘Free’ Yammer, which for some, lasts for many years. Without exception, however, sustained usage only occurred after a formal launch and the tacit approval of senior management. We observed different patterns of start-ups from the ‘big-bang’ public launch, through to more organic, yet managed approaches. Whatever strategy is used, organizations always reach a stage of steady-state operations or, at worst, a slow decline.


For an Enterprise Social Network (ESN) like Yammer, we have found that the average engagement rate of the 35+ organizations in our benchmark set is around 29% (i.e., non-observers) with the best at around 75%. It is evident from our benchmarking that for larger organizations—for example, more than say 5,000 participants—it can be hard to achieve engagement levels above 30%. However, this doesn’t mean that staff aren’t collaborating.

We are seeing a proliferation of offerings that make up the digital office. For a small organization, Yammer may be their main collaboration tool, where team level activities take place. For larger organizations, however, Yammer may be seen as a place to explore opportunities and build capabilities, rather than as an execution space. Increasingly, tools like Slack, HipChat, and now Microsoft Teams are being used to fill this space for some teams that depend on real-time conversations as their primary mode of communication.

A Collaboration Performance Framework

As organizations mature with their use of collaboration tools, it is critical not to be caught in the ‘collaboration for collaboration sake’ cycle. As we indicated in “How Rich Data Can Improve Your Communication,” collaboration must happen with a purpose and goals in mind. The path to achieving strategic goals is rarely linear. More regularly, we need to adopt a framework of continuous improvement toward our stated goals. For many organizations, this will take the form of a ‘Plan, Do, Check, Act’ cycle of continuous improvement. However, in this age of digital disruptions and transformations, we need a framework that can also accommodate transformational, as well as incremental innovation.

At SWOOP, we have developed a collaboration performance framework drawn from Network Science.


The framework balances two important dimensions for collaborative performance: diversity and cohesion. It identifies a continuous cycle of value delivery, whether it be radical or incremental. Let’s consider an innovation example, with an organizational goal of growing revenue by 200%:

Individuals may have their own ideas for how this radical target could be achieved. By ‘Exploring’ these ideas with others, we can start to get a sense of how feasible our ideas might be, but also have the opportunity to combine ideas to improve their prospects. The important ‘Engaging’ phase would see the ideas brokered between the originators and stakeholders. These stakeholders may be the key beneficiaries and/or providers of the resources needed to exploit a highly prospective idea. Finally, the ‘Exploiting’ phase requires the focus and strong cooperation of a smaller group of participants operating as a team to deliver on the idea.

The performance framework can be deployed at all levels, from enterprise-wide to individual business units, informal groups, teams, and right down to the individual. In a typical Carpool engagement, we work with smaller teams to demonstrate this cycle and then use the success stories to replicate the pattern more broadly. A current client started with a smaller community of interest of 400 people, and is now expanding the pattern to their global, 4,000-member division.

Deploying Analytics and the Performance Framework

Like any performance framework, it can’t operate without data. While the traditional outcome measures need to be present, the important predictors of collaborative success are relationship-centered measures. For example, your personal network can be assessed on its diversity by profiling the members of your network. Your personal network’s cohesiveness can be measured, firstly, by how many of your connections are connected to each other; and secondly, by how many of these connections are two-way (reciprocated). We can then add layers provided from HR systems such as gender, geography, organizational roles, age, ethnicity, etc. to provide a complete picture of diversity beyond typical dimensions.
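The two cohesion components described above can be sketched over a set of directed interaction edges. Combining them into a single score by simple averaging is our own simplification for illustration, not SWOOP’s actual formula:

```python
def cohesion(ego, edges):
    """edges: set of directed (sender, receiver) interaction pairs."""
    contacts = ({b for a, b in edges if a == ego}
                | {a for a, b in edges if b == ego})
    if len(contacts) < 2:
        return 0.0
    # 1) How many of ego's contacts interact with each other?
    pairs = [(a, b) for a in contacts for b in contacts if a != b]
    density = sum(1 for p in pairs if p in edges) / len(pairs)
    # 2) How many of ego's ties are two-way (reciprocated)?
    reciprocity = sum(1 for c in contacts
                      if (ego, c) in edges and (c, ego) in edges) / len(contacts)
    return (density + reciprocity) / 2

edges = {("me", "ann"), ("ann", "me"), ("me", "bob"), ("ann", "bob")}
score = cohesion("me", edges)   # 0.5
```

HR attributes (gender, geography, role, etc.) would then be layered on top of a structure like this to compute the diversity dimension.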

In the example below, we show the collaboration performance of participants in a large Yammer network over a 12-month period. You can see how challenging it might be to become an ‘Engager’, maximizing both diversity and cohesion.


We profiled their personal networks for their diversity, cohesion, and size, and plotted them on the performance framework. Interestingly the data exposed that the nature of this Yammer network is a place for exploring and, for some, engaging. There is a gap, however, in the Exploiting region. This is not to say that these individuals were poor at putting projects into motion. More likely, at least in this organization, the ESN is not the usual place to collaborate as a team. If there is no easy transition from the ESN to a team environment, then we have a problem that many ESNs experience: lots of activity but a perception of few tangible results directly from the ESN. Carpool’s approach puts this data together with data from other services and sources to create a holistic picture of the results and impact of the organization’s collaboration evolution.

Continuous Monitoring

For many organizations, continuous monitoring simply means monitoring activity on digital platforms. As we indicated in “Recognizing Personas and Behaviors to Improve Engagement,” activity monitoring can be a poor predictor of performance. At SWOOP, we look at activity that establishes or strengthens a relationship. In the screenshot below, you can see measures such as the number of two-way reciprocated relationships; the degree to which relationships are forming between the formal organizational departments; and who is influential, based on the size of their network, not how frequently they contributed. We identify key player risk by looking at how polarized a network may be among a selected few leaders. Even the Activity/User measure inside groups predicts how cohesive that group may be. By providing this data in real-time, we have the best opportunity for both leaders and individuals to adapt their patterns of collaboration as they see fit.


At Carpool, our engagements use a set of such dashboards to regularly check in on all the various channels and stakeholders, and make recommendations on an ongoing basis that accounts for the holistic communication picture.

Final Thoughts

In this series, we have taken you on a journey from planning for, launching, and productively operating a digital office. At the very beginning we emphasized the need to collaborate for a purpose. We then emphasized the need to ‘engage’ through relationships and adopting appropriate behavioral personas. Finally, we have explained the importance of adopting a collaboration performance framework that can facilitate continuous delivery of value.

In order to do all of this effectively, we not only need analytics, but interventions triggered by such analytics to improve the way we work. Analytics on their own don’t create change. But in the hands of skilled facilitators, analytics and rich data provide a platform for productive change. Collaboration is not simply about how to get better results for your organization, but also to get better results for yourself, by helping you to be a better collaborator.

Want More?

We hope these insights into data-driven collaboration give you new ideas to innovate your own approach to internal communication. If you have any questions, or would like to learn how to establish, nurture, and grow deep internal communities, Carpool and SWOOP have a team ready to help you grow your business and drive collaboration today.

Yammer Benchmarking Insights #3 – Collaboration at the Personal Level

In this episode we drill down to the most detailed level: you, the individual collaborator.

At SWOOP we have designed behavioural personas to characterise individual collaboration patterns based on your pattern of activity. For example, if you are a Catalyst, you are good at getting responses to your posts. Catalysts are important for energising a community and driving the engagement of others. If you are a Responder, you are good at responding to other people’s posts. Responders are important for sustaining a community and extending discussions. An Engager balances Catalyst and Responder behaviour and is the Persona to aspire to: the Engager effectively balances what they give to others in the form of posts, replies, likes etc. with what they receive from others, and is therefore well placed to broker new relationships. Broadcasters tend to post without engaging in conversations. Observers are simply not very active, with less than a single activity every two weeks. We see Broadcasting and Observing as negative personas.
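To make the persona definitions concrete, here is a simplified classification sketch. The rules and thresholds are our own illustrative approximations of the descriptions above, not SWOOP’s actual algorithm:

```python
def classify_persona(posts, responses_given, responses_received, weeks):
    """Toy persona rules.
    posts: original posts made; responses_given: replies/likes given to
    others; responses_received: replies/likes received on own posts."""
    activity = posts + responses_given
    if activity < weeks / 2:                 # under one action per fortnight
        return "Observer"
    if posts > 0 and responses_given == 0 and responses_received == 0:
        return "Broadcaster"                 # posts without conversation
    catalyst = posts > 0 and responses_received >= 2 * posts  # posts draw responses
    responder = responses_given >= posts                      # mostly responds to others
    if catalyst and responder:
        return "Engager"                     # balanced give and take
    if catalyst:
        return "Catalyst"
    if responder:
        return "Responder"
    return "Broadcaster"
```

For example, over a 12-week window someone with 10 posts that drew 25 responses, but who gave only 1 response to others, would classify as a Catalyst under these toy rules.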

What does an organisation’s portfolio of Personas typically look like? The results below are generated from our benchmarking of close to 40 organisations. The lines indicate the minimum–maximum range and the blue square is the average score.


The large range of % Observers, from less than 10% to over 70%, may reflect the large variation in maturity amongst the organisations we have benchmarked. Maturity may not be the only factor, though: smaller organisations have an easier time engaging a higher proportion of their staff with the Enterprise Social Network (ESN). The break-up of the active (non-Observer) Personas shows that Catalysts lead the way at just over 40%, followed by Responders at just under 30%, Engagers at just over 20% and Broadcasters at 10%. This would indicate that, in general, ESNs rely on Catalysts to drive participation and Responders to sustain it.

Personas within Groups

Given that groups are the space where most of the intense collaboration is likely to happen, we were interested in what the Persona patterns were for the leaders of the best performing groups. We used a combination of two-way connection scores and activity scores to identify the strongest groups. We then applied the same measures to the group members to identify the group leaders. In other words, a group leader is someone who has a high number of two-way connections with other group members, and meets a threshold level of overall activity.
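In code, leader identification along those lines might be sketched as follows. The threshold values are invented for illustration; SWOOP’s actual cut-offs are calibrated against its benchmark data:

```python
def find_leaders(members, min_two_way=5, min_activity=20):
    """members: dict of name -> (two_way_connections, activity_score).
    A leader has many two-way connections with other group members
    and meets a threshold level of overall activity."""
    return sorted(
        (name for name, (two_way, activity) in members.items()
         if two_way >= min_two_way and activity >= min_activity),
        key=lambda n: members[n][0],   # rank by two-way connections
        reverse=True,
    )

group = {"ann": (9, 60), "bob": (2, 80), "cara": (7, 25), "dev": (6, 10)}
leaders = find_leaders(group)          # ['ann', 'cara']
```

Note that ‘bob’ is highly active but poorly connected, and ‘dev’ is connected but barely active; neither qualifies, which is the point of combining the two measures.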

Firstly, we plotted all members on a graph, locating them by the size of their network (y-axis) within the group and the number of 2-way connections they have in the group (x-axis). The bubble is sized by their relative levels of interactions (activity). As you can see, the group leaders are clearly identified in the top right hand corner of the graph as different coloured nodes.


Secondly, we plotted the top five leaders’ Persona movements at one-week intervals over a six-month period. In the example above you can see that the leaders primarily played the Catalyst, Engager and Responder roles. The size of the bubbles reflects the relative number of connections made (breadth of influence) for that week. Not all leaders were active every week. Interestingly, we find that some leaders have preferred Personas that are sustained over time. Leaders 1 and 4 have a preference for Catalysing and Engaging. Leader 5 prefers Responding. Leaders 2 and 3 appear comfortable switching between Personas.

What appears to be important here is that high performing groups need leaders that can cover the spectrum of positive Personas i.e. Catalyst, Engager, Responder. While it’s fine to have leaders who have a preference for a certain behavioural Persona, it is useful to have leaders who can adapt their Persona to the situation or context at hand.

Personal Networking Performance

At SWOOP we use a fundamental network performance framework, which measures performance against the complementary dimensions of cohesion and diversity. We have indicated that individuals with a large number of two-way connections are likely to have more closed and cohesive networks. Cohesive networks are good for getting things done (executing/implementing). From an innovation perspective however, closed networks can be impervious to new ideas. The best ideas come from more open and diverse networks. In our view therefore, maximum network performance occurs by optimising diversity and cohesion. In other words, it’s good to be part of a strong cohesive network, but this should not be at the expense of maintaining a healthy suite of more diverse connections.

In the graphic below we have plotted the members of one large group on the Network Performance graph. In this case the diversity is measured by the number of different groups that an individual has participated in. The size of the bubbles reflects the size of the individual’s network (breadth of influence).


We have labelled regions in the graph according to our Explore/Engage/Exploit model of innovation through networks. We can see that the majority of group members exist in the ‘High Diversity/Low Cohesion’ Explore region. This is consistent with what many people give for their reasons for joining a group. The ‘Engage’ region shows those members who are optimising their diversity/cohesion balance. These are the most important leaders in the group. In an innovation context, these people are best placed to broker the connections required to take a good idea into implementation. The bottom right corner is the Exploit region, which for this group is fairly vacant. This might suggest that this group would have difficulty organically deploying an innovation. They would need to take explicit steps to engage an implementation team to execute on the new products, services or practices that they initiate.

The Innovation Cycle – Create New Value for Your Organisation

We conclude this third edition of Yammer Benchmarking insights by reinforcing the role that individuals can play in creating new value for their organisations. Many organisations see ESNs like Yammer as a means of accelerating innovation that has stagnated within the formal lines of business.

As individuals, we may have a preference for a given style of working, as characterised by our Personas. Your personal networks may be large, open and diverse; or smaller, closed and cohesive; or indeed somewhere in between. It is important, however, to see how your collaboration behaviours contribute to the innovation performance of your organisation. Innovation is a collaborative activity, and therefore we recommend that in your groups you:

  1. Avoid lone work (Observing/Broadcasting) and look to explore new ideas and opportunities collaboratively, online (Catalysing/Engaging/Responding).
  2. Recognise that implementing good ideas needs resources, and those resources are owned by the formal lines of business. Use your network to engage with the resource holders. Make the connections. Influence on-line and off-line.
  3. When you have organisational resources behind you, it’s time to go into exploit mode. Build the cohesive focussed teams to execute/implement, avoiding distractions until the job is done.


Data-Driven Collaboration Part 2: Recognizing Personas and Behaviors to Improve Engagement

In Part 1 of this series, “Data-Driven Collaboration Design”—a collaboration between Swoop Analytics and Carpool Agency—we demonstrated how data can be used as a diagnostic tool to inform the goals and strategies that drive your business’ internal communication and collaboration. 

In this post, we will take that thought one step further and show how, after your course is charted to improve internal communication and collaboration, your data continues to play a vital role in shaping your journey.

Monitoring More Than Participation

Only in the very initial stages of the launch of a new Enterprise Social Network (ESN) or group do we pay any attention to how much activity we see. Quickly, we move to watching such metrics as average response time; breadth of participation across the organization, teams, roles, or regions; and whether conversations are crossing those boundaries. We focus on measures that show something much closer to business value and motivate organizations to strengthen communities.
For our purposes in this post, it will be useful to pivot our strategy to one that focuses on influential individuals. The community or team—whether it’s a community of practice, a community of shared interest, or a working team—isn’t a “group” or “site,” but a collection of individuals, with all the messiness, pride, altruism, and politics implied. Data can be used to layer some purpose and direction over the messiness.

Patterns Become Personas

The Swoop Social Network Analytics dashboard uniquely provides analytics that are customized to each person who is part of an organization’s ESN. Using the principle of “when you can see how you work, you are better placed to change how you work”, the intent is for individual collaborators to receive real-time feedback on their online collaboration patterns so they can adjust them as they go.
We analyzed the individual online collaboration patterns across several organizations and identified a number of distinct trends that reflect the majority of personal collaboration behaviors. With that data, we were able to identify five distinct personas: Observers, Engagers, Catalysts, Responders, and Broadcasters.

In addition to classifying patterns into personas, we developed a means of ranking the preferred personas needed to enhance an organization’s overall collaboration performance. At the top we place the Engager, a role that can grow and sustain a community or team through its balance of posting and responding. This is closely followed by the Catalyst, who can energize a community by provoking responses and engaging with a broad network of colleagues. The Responder ensures that participants gain feedback, which is an important role in sustaining a community. The Broadcaster is mostly seen as a negative persona: They post content, but tend not to engage in the conversations that are central to productive collaboration. Finally, we have the Observers, sometimes called ‘lurkers’, who are seen as a negative persona with respect to collaboration. While they may indeed be learning from the contributions of others, they are not explicitly collaborating.
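The persona descriptions above suggest how such a classification might work in practice. The rules below are a hypothetical sketch inferred from those descriptions; SWOOP's actual classification logic is not published in this post.

```python
# Hypothetical persona rules, inferred from the descriptions in the text.
# Inputs: a person's posts, replies they gave, and replies they received.

def persona(posts: int, replies_given: int, replies_received: int) -> str:
    contributions = posts + replies_given
    if contributions == 0:
        return "Observer"        # reads but never contributes
    if replies_given == 0 and replies_received == 0:
        return "Broadcaster"     # posts without any conversation
    share_replying = replies_given / contributions
    if share_replying >= 0.65:
        return "Responder"       # mostly answers others
    if share_replying >= 0.35:
        return "Engager"         # balanced posting and responding
    # mostly posting, but posts that draw responses from others
    return "Catalyst" if replies_received > 0 else "Broadcaster"
```

The 0.35/0.65 balance thresholds are arbitrary illustrations; the key design point is that the Engager sits in the balanced middle band, consistent with the ranking above.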
Using Personas to Improve Your Online Collaboration Behavior
Individuals who log in to the Swoop platform are provided with a privacy-protected personal view of their online collaboration behaviors. Users see their persona classification for the selected period, together with the social network of relationships they have formed through their interactions:

You may notice that the balance between what you receive and what you contribute is central to determining persona classification. Balanced contributions amongst collaboration partners have been shown to be a key characteristic of high performing teams, hence the placement of the ‘Engager’ as the preferred persona.

Our benchmarking of some 35 Yammer installations demonstrates that 71% of participants, on average, are Observers. Of the positive personas, the Catalyst is the most common, followed by Responders, Engagers, and Broadcasters. It’s therefore not surprising that an organization’s priority often involves converting Observers into more active participants. Enrolling Observers into more active personas is a task that falls on the more-active Engagers and Catalysts, with Responders playing a role of keeping them there.
At Carpool, during a recent engagement with a client, we encountered a senior leadership team that was comprised of Broadcasters who relied on traditional internal communications. Through our coaching—all the while showing them data on their own behavior and the engagement of their audience—they have since transformed into Catalysts.
One team, for example, had been recruiting beta testers through more traditional email broadcasts. But after just a few posts in a more interactive and visible environment, where we taught them how to invite an active conversation, they have seen not only the value of more immediate feedback, but a larger turnout for their tests. Now, it’s all we can do to provide them with all the data they’re asking for!
Identifying the Key Players for Building Increased Participation

When Swoop looks at an organization overall, we will typically find that a small number of participants are responsible for the lion’s share of the connecting and networking load. In the social media world, these people are called ‘influencers’ and are typically measured by the size of the audience they can attract. In our Persona characterization, we refer to them as Catalysts. Unlike the world of consumer marketing—and this point is critical—attracting eyeballs is only part of the challenge. In the enterprise, we need people to actively collaborate and produce tangible business outcomes. This can only happen by engaging the audience in active relationship-building and cooperative work. This added dimension of relationship-building is needed to identify who the real key players are.
In our work with clients, Carpool teaches this concept by coaching influencers to focus on being “interested” in the work of others rather than on being “interesting” through the content they share, whether that’s an interesting link or pithy comment. With one client, our strategy is to take an organization’s leader, a solid Engager in the public social media space, and “transplant” him into the internal communications environment where he can not only legitimize the forum, but also model the behavior we want to see.
In the chart below, we show a typical ‘Personal Network Performance’ chart, using Enterprise Social Networking data from the most active participants in an enterprise. The two dimensions broadly capture an individual’s personal network size (number of unique connections) against the depth of relationships they have been able to form with them (number of reciprocated two-way connections). They reflect our Engager persona characteristics. Additionally, we have sized the bubbles by a diversity index assessed by their posting behavior across multiple groups.
The true ‘Key Players’ on this chart can be seen in the top right-hand corner. These individuals have not only been able to attract a large audience, but also engaged with that audience and reciprocated two-way interactions. And the greater their diversity of connections (bubble size), the more effective they are likely to be.
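The two chart dimensions, unique connections and reciprocated two-way connections, can be derived from a simple interaction log. A minimal sketch, with invented names and interactions:

```python
from collections import defaultdict

# Minimal sketch: derive each person's network size (unique connections)
# and reciprocated two-way connections from a directed interaction log.
interactions = [("ann", "bob"), ("bob", "ann"),   # reciprocated pair
                ("ann", "cat"),                   # one-way only
                ("cat", "bob"), ("bob", "cat")]   # reciprocated pair

directed = set(interactions)
contacts = defaultdict(set)
for src, dst in directed:
    contacts[src].add(dst)
    contacts[dst].add(src)

# Breadth of influence: how many unique people each person touches
network_size = {p: len(c) for p, c in contacts.items()}
# Depth of relationships: contacts where interaction flows both ways
reciprocated = {p: sum(1 for q in c if (p, q) in directed and (q, p) in directed)
                for p, c in contacts.items()}
```

Plotting `network_size` against `reciprocated` per person reproduces the two axes of the Personal Network Performance chart described above.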

Data like this is useful in identifying current and potential key players and organizational leaders, and helps us shift those online collaboration personas from Catalyst to Engager and scale up as far and as broadly as they can go.

Continuous Coaching

Having data and continuous feedback on your online collaboration performance is one thing, but effectively taking this feedback and using it to build both your online and offline collaboration capability requires planning and, of course, other people to collaborate with! Carpool believes in a phased approach: first change the behavior of a local team, then, like ripples in a pond, expand the movement to new ways of working through compelling storytelling, using the data that drove previous waves of change.
To get started now, think about your own teams. Would you be prepared to have your team share their collaboration performance data and persona classifications? Are you complementing each other, or competing? If that’s a little too aggressive, why not form a “Working Out Loud” circle with some volunteers where you can collectively work on personal goals for personal collaboration capability, sharing, and critiquing one another’s networking performance data as you progress?
Think about what it takes to move from one behavior Persona to another. How would you accomplish such a transformation, personally? What about the teams you work in and with? Then come back for the next, and final, part of this co-authored series between Swoop and Carpool, where we will explain the value in gaining insights from ongoing analytics and the cycle of behavior changes, analysis, and pivoting strategies.

Are we Getting Closer to True Knowledge Sharing Systems?



First generation knowledge management (KM) systems were essentially re-labelled content stores. Labelling such content as ‘knowledge’ did much to discredit the whole Knowledge Management movement of the 1990s. During this time, I commonly referred to knowledge management systems as needing to comprise both “collections and connections”, but we had forgotten about the “connections”. This shortcoming was addressed with the advent of Enterprise Social Networking (ESN) systems like Yammer, Jive, IBM Connections and now Workplace from Facebook. So now we do have both collections and connections. But do we now have true knowledge sharing?

Who do we Rely on for Knowledge Based Support?

A common occupation for KM professionals is trying to delineate the boundary between information, which can be effectively managed in an information store, and knowledge, which is implicitly and tacitly held by individuals. Tacit knowledge, arguably, can only be shared through direct human interaction. In our Social Network Analysis (SNA) consulting work we regularly surveyed staff on who they relied on to get their work done. We stumbled on the idea of asking them to qualify their selections by choosing only one of:

  • They review and approve my work (infers a line management connection)
  • They provide information that I need (infers an information brokering connection)
  • They provide advice to help me solve difficult problems (infers a knowledge based connection)

The forced choice was key. It proved to be a great way of delineating the information brokers from the true knowledge providers and the pure line managers. When we created our ‘top 10 lists’ for each role, there was regularly very little overlap. For organisations, the critical value in these nominations is that the knowledge providers are the hardest people to replace, and it is therefore critical to know who they are. And who they are is not always apparent to line management!

So how do staff distribute their connection needs amongst line managers, information brokers and knowledge providers? We collated the results of several organisational surveys, comprising over 35,000 nominations using this identical question, and came up with the following:


With knowledge providers attracting 50% of the nominations, the results reinforce the perception that knowledge holders are critical to any organisation.

What do Knowledge Providers Look Like?

So what is special about these peer-identified knowledge providers? Are they the ‘wise owls’ of the organisation, with long experience spanning many different areas? Are they technical specialists with deep knowledge of fairly narrow areas? We took one organisation’s results and assessed the leaders of each of the categories of Approve/review, Information and Knowledge/Advice, looking for their breadth or diversity of influence. We measured this by calculating the percentage of the connections nominating them as an important resource that came from outside their home business unit. Here are the results:


As we might anticipate, the inferred line managers had the broadest diversity of influence. The lowest percentage was for the knowledge providers, suggesting that people look for knowledge and advice not from the broadly experienced wise old owls, but from those specialising in relatively narrow areas.
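The diversity-of-influence measure used here, the share of a person's incoming nominations that come from outside their home business unit, can be sketched as follows. The people and business units are hypothetical:

```python
# Sketch of the 'diversity of influence' measure: the fraction of a person's
# nominations originating outside their home business unit. Illustrative data.
home_unit = {"dana": "Sales", "eve": "IT", "finn": "Sales"}

def influence_diversity(person: str, nominators: list) -> float:
    """Fraction of nominations coming from outside the person's unit."""
    outside = [n for n in nominators if home_unit[n] != home_unit[person]]
    return len(outside) / len(nominators)

# dana is nominated by eve (IT) and finn (Sales): one of two from outside.
score = influence_diversity("dana", ["eve", "finn"])
```

A low score for a heavily nominated person would mark them as the kind of narrow-area specialist the survey results point to.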

Implications for Knowledge Sharing Systems

We have previously written about our Network Performance Framework, where performance is judged based on how individuals, groups, or even full organisations balance diversity and cohesion in their internal networks:


The above framework identifies ‘Specialists’ as those who have limited diversity but a strong following, i.e. many nominations as a key resource. These appear to be the people identified as critical knowledge providers.

The question now is whether online systems are identifying and supporting specialists to share their knowledge. At SWOOP we have aimed to explore this question initially by using a modification of this performance framework on interaction data drawn from Microsoft Yammer installations:


We measured each individual’s diversity of connections (y-axis) from their activities across multiple Yammer groups. The x-axis identifies the number of reciprocated connections an individual has, i.e. stronger ties, while the size of their personal network is shown by the size of the bubble representing them. We can see here that we have been able to identify those selected few ‘Specialists’ in the lower diversity/stronger cohesion quadrant from their Yammer activities. These specialists all have relatively large networks of influence.

What we might infer from the above analysis is that an ESN like Yammer can identify the most prospective knowledge providers whom staff are seeking out for knowledge transfer. But the bigger question is whether actual knowledge transfer can happen solely through an ESN like Yammer?

Is Having Systems that Provide Connections and Collections Enough to Ensure Effective Knowledge Sharing?

The knowledge management and social networking research is rich with studies addressing the question of how social network structure impacts on effective knowledge sharing. While an exhaustive literature review is beyond the scope of this article, for those inclined, this article on Network Structure and Knowledge Transfer: The Effects of Cohesion and Range is representative. Essentially this research suggests that ‘codified’ knowledge is best transferred through weak ties, but tacit knowledge sharing requires strong tie relationships. Codified knowledge commonly relates to stored artefacts like best practice procedural documents, lessons learned libraries, case studies and perhaps even archived online Q&A forums. Tacit knowledge by definition cannot be codified, and therefore can only be shared through direct personal interactions.

I would contend that relationships formed solely through ESN interactions, or in fact through any electronic system such as chat or email, would be substantially weaker than those generated through regular face-to-face interactions. Complex tacit knowledge needs frequent and regular human interaction. It is unlikely that the strength of tie required to effectively share complex knowledge can be achieved solely through commonly available digital systems. What ESNs can do effectively is help identify who you should be targeting as a knowledge sharing partner. Of course this situation is changing rapidly, as more immersive collaboration experiences are developed. But right now: for codified knowledge, yes; for tacit knowledge, not yet.


Getting “Liked”: Is Content Overrated?

We are regularly bombarded with the message that “Content is King”, quickly followed by a plethora of methods, tips and even tricks on how to make our content more attractive, i.e. “Liked” by many. Social media has introduced the “Like” button so we can more explicitly signal our appreciation of the content we are exposed to. But how much of that appreciation is directed at the “content” of the message, and how much at the messenger? We have some recent analytics that provide new insights on this.

Content or Messenger?


Doubt about the true value of content was first flagged by Canadian philosopher Marshall McLuhan, with his often-quoted “the medium is the message” statement in the 1960s. In the age of social media, this has now morphed into “the messenger is the message”, with the rise to prominence of the “Influencer”. Influencers are those rare individuals who can influence the buying behaviours of many, simply through the power of their personal recommendation. Think about your own “liking” behaviour on Facebook. How often would you “like” a passive Facebook advertising page, as opposed to “liking” a posting made by a human influencer linking back to that very same page? This is a clear example of the messenger being more powerful than the message itself.


Enterprise “Liking”

I have recently written about how the “Like Economy” we experience in consumer social networks may not map well when social networks move inside the enterprise in the form of Enterprise Social Networks (ESN). Unlike consumer social networks, we are unlikely to see advertisements tolerated in the ESN. But Enterprises often do want to send messages to “all staff”, particularly for major change initiatives they want staff to “buy into”. Regularly, corporate communications staff are keen to look at statistics on how often the message is read and even ‘liked’. But is this a true reflection of engagement with a message?

Our benchmarking of ESNs has identified that “Likes” make up well over 50% of all activities undertaken on ESNs. In the absence of carefully crafted advertising sites, just what is driving our “liking” behaviour in the Enterprise? We decided to explore this not by looking at every message posted (for privacy reasons Swoop does not access message content), but by looking at patterns of who “Likes” were directed at. We aggregated the “Likes” from 3 organisations, from our benchmarking partners, for individuals who had posted more than 500 “Likes” over a 12 month period. Collectively, there were over 4,000 individuals who met the criteria. We then categorised their “Likes” according to:

The three ‘Like’ characteristics and their interpretations were:

  • One-off (‘Like’ recipient was a once-only occurrence): attraction is largely based on the content of the message alone.
  • Repeat Recipient (‘Like’ recipient was a repeat recipient from this individual): recipients are potentially ‘influencers’, so the motivation may come from the person more so than the message content.
  • Reciprocated (‘Like’ recipient has also been a ‘Like’ provider for this individual): recipients have a ‘relationship’ with the ‘Liker’, which drives this behaviour.
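The categorisation above can be expressed as a small function. The data and function names below are illustrative only:

```python
from collections import Counter

# Sketch of the three 'Like' categories. Each like is a (liker, recipient)
# pair; the data below is invented for illustration.
likes = [("pat", "quinn"), ("pat", "quinn"),  # quinn is a repeat recipient
         ("quinn", "pat"),                    # ...and has liked pat back
         ("pat", "ruth")]                     # one-off

def categorise(liker, recipient, all_likes):
    if (recipient, liker) in all_likes:
        return "reciprocated"   # a two-way 'like' relationship exists
    if all_likes.count((liker, recipient)) > 1:
        return "repeat"         # same recipient liked more than once
    return "one-off"            # single occurrence, content-driven

tally = Counter(categorise(a, b, likes) for a, b in likes)
```

Note the precedence: a reciprocated pair is counted as reciprocated even if it is also a repeat, mirroring the idea that the relationship dominates.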

‘Like’ Analysis Results

The results of our analysis are shown below:


The results show clearly that in the Enterprise context, the driver for ‘liking’ behaviour is the relationship. The data suggests that you are nearly three times as likely to attract a ‘like’ to your message from someone if you have previously ‘liked’ a posting of theirs.

So what are the implications for the Enterprise?

If an Enterprise is relying on counting ‘likes’ as a measure of staff engagement, it should prioritise encouraging the formation of relationships through reciprocated actions over spending time ‘crafting the perfect message’, or even over relying on influencers to build engagement. Specifically, one could:

  • Acknowledge a “Like”, in particular, if you have never responded to this person before.
  • Craft your important messages as a means to start a conversation, more so than a statement of opinion. Explicitly frame your statement as a question or explicitly ask for feedback.
  • Start to think about ‘engagement’ as more than a ‘read’ or a ‘like’ and more from a relationship perspective. How deep and broadly is your issue being discussed?
  • When you read advice from social media experts on “how to generate more ‘Likes’ for your content”, replace this with “how to generate more ‘relationships’ using your content”.

As I am writing this post I’m painfully reminded of the need to ‘eat your own dog food’. So I’m making a commitment that if you respond or ‘like’ this article, I will at least try to respond in kind!



How do these results map with your own experiences?

What can we Learn from Artificial Intelligence?

It might seem strange to suggest that a science dedicated to learning from how we humans operate could actually return the favour by teaching us about ourselves. Yet as strange as it may sound, this is precisely what I am suggesting.

Having spent a good deal of my early career in the “first wave of AI”, I had developed a healthy scepticism of many of the capability claims for AI. From the decade or more I spent as an AI researcher and developer, I had come to the conclusion that AI worked best when the domains of endeavour were contained within discrete and well-bounded ‘solution spaces’. In other words, despite the sophistication of mathematical techniques developed for dealing with uncertainty, AI was simply not that good in the “grey” areas.

AI’s Second Wave


The “second wave of AI” received a big boost when Google’s DeepMind managed to up the ante on IBM’s chess-playing Deep Blue by defeating the world Go champion Lee Sedol. According to DeepMind founder and CEO Demis Hassabis, the success of their program AlphaGo could be attributed to the deeper learning capabilities built into the program, as opposed to Deep Blue’s largely brute-force searching approach. Hassabis emphasizes the ‘greyness’ in the game of Go, as compared to chess. For those familiar with this ancient Chinese game, unlike chess, it has almost a spiritual dimension. I can vividly recall a research colleague of mine, who happened to be a Go master, teaching a novice colleague the game in a lunchtime session, and chastising him for what he called a “disrespectful move”. So AlphaGo’s success is indeed a leap forward for AI in conquering the “grey”.

So what is this “deep learning” all about? You can certainly get tied up in a lot of academic rhetoric if you Google this, but for me it’s simply about learning from examples. The two critical requirements are the availability of lots of examples to learn from, and the development of what we call an “evaluation function”, i.e. something that can assess and rate an action we are considering taking. The ‘secret sauce’ in AlphaGo is definitely the evaluation function. It has to be sophisticated enough to be able to look many moves ahead and assess many competitive scenarios before evaluating its own next move. This evaluation function, which takes the form of a neural network, has the benefit of being trained on thousands of examples drawn from online Go gaming sites, where the final result is known.

Deep Learning in Business


We can see many similarities to this context in business. For example, the legal profession is founded on precedents, with libraries of cases available for which the final result is known. Our business schools regularly educate their students by working through case studies and connecting them to the underlying theories. Business improvement programs are founded on prior experience or business cases from which to learn. AI researchers have taken a lead from this and built machine learning techniques into their algorithms. An early technique that we had some success with is called “Case Based Reasoning”. Using this approach, it wasn’t necessary to articulate all the possible solution paths, which in most business scenarios is infeasible. All we needed was a sufficient store of prior cases to search through, to retrieve those that most closely matched the current context, leaving the human user to fill any gaps.
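A minimal case-based reasoning sketch follows, retrieving the stored case most similar to the current context and reusing its known outcome. The cases, features and outcomes are hypothetical; real CBR systems use much richer case representations and similarity measures.

```python
# Toy case base: each case has a feature vector and a known outcome.
cases = [
    {"features": {"size": 1, "urgency": 3}, "outcome": "escalate"},
    {"features": {"size": 5, "urgency": 1}, "outcome": "schedule"},
    {"features": {"size": 4, "urgency": 4}, "outcome": "task-force"},
]

def similarity(a: dict, b: dict) -> float:
    """Negative Manhattan distance over shared features: larger = more similar."""
    return -sum(abs(a[k] - b[k]) for k in a)

def best_match(context: dict) -> dict:
    """Retrieve the stored case closest to the current context."""
    return max(cases, key=lambda c: similarity(c["features"], context))

# Reuse the outcome of the nearest prior case as a suggestion.
suggestion = best_match({"size": 5, "urgency": 2})["outcome"]
```

The human user then adapts the retrieved suggestion to fill any gaps, exactly the division of labour described above.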

The Student Becomes the Teacher

Now back to my question: what can AI now teach us about ourselves? Perhaps the most vivid learnings are contained in the reflections of the Go champions whom AlphaGo defeated. The common theme was that AlphaGo made many unconventional moves that only appeared sensible in hindsight. Lee Sedol described his personal learning from his 4-1 defeat by AlphaGo in these comments: “My thoughts have become more flexible after the game with AlphaGo, I have a lot of ideas, so I expect good results” and “I decided to more accurately predict the next move instead of depending on my intuition”. So the teacher has now become the student!

It is common for us as human beings to be subject to unconscious bias. We see what is being promoted as a “best practice”, perhaps reinforced by a selected few of our own personal experiences, and are then willing to swear by it as the “right” thing to do. We forget that there may be hundreds or even thousands of contrary cases that could prove us wrong, but we stubbornly stick to our original theses. Computers don’t suffer from these very human traits. What’s more, they have the patience to trawl through thousands of cases to fine-tune their learnings. So in summary, what can we learn from AI?

  • Remember that a handful of cases is not a justification for developing hard and fast rules;
  • Before you discount a ‘left field’ suggestion, try to understand the experience base that it is coming from. Do they have experiences and insights that are beyond those of your own close network?
  • Don’t be afraid to “push the envelope” on your own decision making, but be sure to treat each result, good or bad, as contributing to your own growing expertise; and
  • Push yourself to work in increasingly greyer areas. Despite the success of AlphaGo, it is still a game, with artificial rules and boundaries. Humans are still better at doing the grey stuff!





SWOOP Video Blog 2 – Yammer Groups

The second in our SWOOP Video Blog Series:

Slide 1

Hi there, I’m Laurence Lock Lee, the co-founder and chief scientist at Swoop Analytics

In this second episode of Swoop Benchmarking insights we are drilling down to the Yammer Group level. Groups are where the real collaborative action happens.

As Yammer Groups can be started by anyone in the organisation, they quickly build up to hundreds, if not thousands in some organisations. Looking at activity levels alone we will see that the majority of groups do not sustain consistent activity, while a much smaller proportion look to be really thriving.

As useful as activity levels and membership size are, they are, as we have suggested before, crude measures which can mask the true relationship-centred collaboration performance being achieved.

In this session we provide insights into how organisations can compare and benchmark their internal groups.

Slide 2

There is no shortage of literature and advice on how to build a successful on-line community or group. The universal advice for the first step is to identify the purpose. A well-articulated purpose statement will identify what success would look like for the group or community.

What we do know from our experience to date is that there are a variety of purposes for which online groups are formed. IBM has conducted a detailed analysis of their internal enterprise social networking system, looking to see if the usage logs could delineate the different types of groups being formed. What they found were five well-delineated types of groups.

The identified group types were:

  1. Communities of Practice. CoPs are the centerpiece of knowledge sharing programs. Their purpose is to build capability in selected disciplines. They will usually be public groups. For example, a retail enterprise may form a CoP for all aspects of establishing and running a new retail outlet. The community would be used to share experiences on the way to converging on a suite of ‘best practices’ that they would aim to implement across the organisation.
  2. Team/Process. This category covers task-specific project teams, or alternatively shared spaces for a business process or function. In most cases these groups will be closed or private.
  3. Idea Sharing. These groups are formed for sharing ideas and, hopefully, generating new value from innovations. It is best to think about such groups in two stages: exploration and exploitation. The network needs to be large and diverse to uncover the most opportunities; the exploitation stage, however, requires smaller, more focused teams to ensure a successful innovation.
  4. The Expert / Help type group is what many of us see as the technical forums we might go to externally to get technical help. For novices, the answers are more than likely available in previously answered questions. In essence, they would be characterised by many questions posted, for a selected few to answer.
  5. Finally, the social (non-work) groups are sometimes frowned on; but in practice they are risk free places for staff to learn and experience online networking, so they do play an important part in the groups portfolio.

 Slide 3

This table summarizes the purposes and therefore value that can accrue from the different group types. Some important points that can be taken from this are:

  • Formally managed documents are important for some group types like CoPs and Teams, but less so for others, where archival search may be sufficient
  • Likewise with cohesive relationships, which are critical for teams say, but less so for Expert/Help groups for instance.
  • Large isn’t always good. For idea sharing, the bigger and more diverse, the better. For teams, research has shown that once we get past about 20 members, productivity decreases.

 Slide 4

More than 80 years of academic research on the performance of networks can be reduced to an argument between the value of open, diverse networks versus closed, cohesive networks. This graphic was developed by Professor Ron Burt from the University of Chicago Business School, who is best known for his research on brokerage in open networks. However, in his 2005 book Brokerage and Closure, Burt concedes that value is maximised when diversity and closure are balanced.

It is therefore this framework that we are using for assessing and benchmarking Yammer Groups.

Slide 5

For pragmatic reasons we use group size as a proxy for diversity, on the assumption that the larger the group, the more diverse the membership is likely to be. For cohesion, we measure the average number of reciprocated (two-way) connections per member, on the assumption that if members have many reciprocated relationships inside the group, then the group is likely to be more cohesive.
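As a minimal sketch of how these two proxy measures could be computed from a group's message log: the member list, edge list and function name below are illustrative assumptions, not the actual implementation behind the benchmarking.

```python
from collections import defaultdict

def cohesion(members, edges):
    """Average number of reciprocated (two-way) connections per member.

    `edges` is a set of directed (sender, receiver) pairs observed
    inside the group, e.g. replies or mentions.
    """
    reciprocated = {(a, b) for (a, b) in edges if (b, a) in edges}
    # `reciprocated` holds both orderings of each two-way pair, so
    # counting first elements gives each member's number of partners.
    partners = defaultdict(int)
    for a, _ in reciprocated:
        partners[a] += 1
    return sum(partners.get(m, 0) for m in members) / len(members)

# Hypothetical group: four members, two reciprocated pairs, one one-way tie.
members = ["ann", "bob", "cat", "dan"]
edges = {("ann", "bob"), ("bob", "ann"),   # reciprocated
         ("ann", "cat"), ("cat", "ann"),   # reciprocated
         ("dan", "ann")}                   # one-way only

diversity_proxy = len(members)   # group size as the diversity proxy
print(diversity_proxy)           # 4
print(cohesion(members, edges))  # (2 + 1 + 1 + 0) / 4 = 1.0
```

Each group then becomes one point (diversity proxy, cohesion) on the performance chart.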

This plot shows a typical pattern we find. The bubble size is based on group activity, so as you can see, activity remains an important measure. But the groups' positions on the network performance chart are quite differentiated by their respective diversity and cohesion measures.

The pattern shown is also consistent with our prior network survey results, which show that it is difficult not to see diversity and cohesion as a trade-off; the maximum performance in the top-right corner is, in fact, just that: an ideal.

Slide 6

Now if we overlay what we see as ideal ‘goal states’ for the different types of groups that can be formed, it is possible to assess more accurately how a group is performing.

For example, a community of practice should have moderate to high cohesion and a group size commensurate with the ‘practice’ being developed.

The red region shows where high-performing teams would be located. High-performing teams are differentiated by their levels of cohesion; group size and even relative activity levels are poor indicators for a group formed as a team. If your group aims to be a shared ideas space but you find yourself characterised as a strong team, then you are clearly in danger of groupthink.

Likewise you can infer a goal space for the Expert/Help group type.
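As a sketch of how such goal spaces could be operationalised, the function below classifies a group by its position on the two axes. The thresholds and region names are illustrative assumptions, not values from the framework itself (the 20-member team ceiling echoes the research noted earlier).

```python
def goal_space(size, cohesion):
    """Roughly classify a group by its (size, cohesion) coordinates.

    Thresholds are illustrative assumptions for demonstration only.
    """
    if cohesion >= 2.0 and size <= 20:
        return "team"                   # small and highly cohesive
    if cohesion >= 1.0:
        return "community of practice"  # moderate-to-high cohesion
    if size >= 100:
        return "idea sharing"           # large, diverse, loosely connected
    if size >= 30:
        return "expert/help"            # many askers, a select few answerers
    return "forming"                    # bottom left: purpose still unclear

print(goal_space(12, 2.5))   # team
print(goal_space(250, 0.3))  # idea sharing
```

A group leader could compare the label returned for their group against the type they intended, and read the gap as a prompt for action.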

If you are an ideas-sharing group, you have an extra measure: the number of exploitation teams that have been launched from ideas qualified in your group.

For group leaders whose groups start in the bottom left, and many are still there, it becomes an exercise in rethinking the group's type and purpose, and then deciding the most appropriate actions for moving the group into the chosen goal space.

For some this may mean growing broader participation if you are an Expert/Help group, or building deeper relationships if you are a community of practice or functional team.

Slide 7

So in summing up:

Groups come in different shapes and sizes, and simple activity levels and membership size are insufficient for assessing their success or otherwise.

Gaining critical mass for a group is important. Research has shown that critical mass must also account for things like the diversity of the membership and the modes used to generate productive outputs.


The diversity vs cohesion network performance matrix provides a more sophisticated means for groups to assess their performance than simple activity and membership measures.

Once group leaders develop clarity around their group's form and purpose, the network performance framework can provide them with more precise and actionable directions for success.

Slide 8

We have now covered benchmarking externally at the enterprise level and internally at the group level.

Naturally, the next level is to look inside successful groups and compare their members.

Thank you for your attention and we look forward to having you at our next episode.