Bridging the Knowledge Sharing/Problem Solving Divide

Working across organisational boundaries

One of the most frequently cited reasons we hear for implementing an enterprise social network platform is to “enable our organisation to better communicate and collaborate across organisational boundaries”.

The real objective is to let information and knowledge flow more freely to solve challenging business problems. This is the point where the focus changes from generic SHARING to business-focused (problem-) SOLVING:

[Figure: Enterprise Social Maturity Framework, from SHARING to SOLVING]

We’ve previously introduced this maturity framework, which incorporates the four stages of Simon Terry’s model. In a recent discussion, Simon shared with us some constructive insights that he has drawn from the application of his maturity model.

He indicated to us that:

“Up to SHARING, people are just engaged in social exchange. It is chat. That can be entirely internal to the ESN and not connected to the business. Beyond that point they are delivering benefits from collaborative work. Moving over that transition and understanding the behaviours beyond that point is essential.”

Simon then proceeded to describe the key things to consider in the ‘SOLVING’ stage as:

“Value chains and projects and their relationships to the silos captured in your Cross-team collaboration widget”.

In this post we will therefore review the SWOOP ‘Cross-Team Collaboration’ widget and offer insights into how it can help your enterprise social adoption efforts. Together with the recently reviewed Influential People and Response Rate widgets, it supports the ‘SOLVE’ stage.

[Figure: The SOLVE stage of the Enterprise Social Maturity Framework]

The Cross-Team Collaboration widget identifies the levels of interaction between selected organisational dimensions. The most common use is to identify interactions between the formal lines of business.

Two representations are offered:

  • The matrix view shades the intersecting squares by the relative interaction levels. The diagonal represents intra-unit interactions.
  • The map view (see below) more succinctly illustrates the degree to which different units are interacting.

[Figure: Cross-Team Collaboration widget, map view]

If you have created a cross-enterprise group, or community of practice, it will tell you the degree to which all divisions have been engaged. If you have a corporate initiative that has been launched with a topic hash tag, it will also tell you the degree of cross-divisional engagement.

In a typical hierarchy, we would anticipate that most interactions would occur inside the formal structures, or between divisions along a defined value chain, e.g. marketing interactions with sales. Cross-organisational groups or teams are usually formed to facilitate interactions across the formal lines of business, for example along a supply/value chain.

The Cross-Team Collaboration widget provides a view into the degree to which these cross-organisational teams are effective. While the formal department is the most common dimension, geographic location is also a popular dimension for exploring interaction levels.
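
To make this concrete, here is a minimal sketch (our illustration, not SWOOP’s implementation) of how an interaction count matrix between organisational units could be assembled from a log of interactions. The record format and unit names are invented for the example.

```python
from collections import Counter

# Hypothetical interaction records: each reply/like/mention is tagged with the
# business unit of the person acting and the unit of the person they engaged with.
interactions = [
    {"from_unit": "Sales", "to_unit": "Marketing"},
    {"from_unit": "Sales", "to_unit": "Sales"},
    {"from_unit": "Operations", "to_unit": "Sales"},
    {"from_unit": "Marketing", "to_unit": "Operations"},
]

units = sorted({u for i in interactions for u in (i["from_unit"], i["to_unit"])})
counts = Counter((i["from_unit"], i["to_unit"]) for i in interactions)

# Print the matrix; the diagonal holds intra-unit interactions.
print(f"{'':>12}" + "".join(f"{u:>12}" for u in units))
for row in units:
    print(f"{row:>12}" + "".join(f"{counts[(row, col)]:>12}" for col in units))
```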

What is the Business Imperative?

It is the apparent inflexibility and poor responsiveness of the formal hierarchy that motivates many organisations to adopt enterprise social networks. Formal hierarchies are designed for efficient execution of pre-determined processes. However, CEOs are now looking for more than this. David Thodey, the former CEO of Australia’s largest telco, summed up the sentiment by indicating that he wanted to short-circuit the entrenched communication channels. He wanted his management team to be able to have authentic conversations with staff at all levels. Similarly, we recall a statement made by a former CEO of BHP Billiton, an industrial resources conglomerate that was very process driven:

“Silos are not bad, this is how we get work done. We just need to dig some holes in the sides!” (please excuse the mining analogy)

Another of our favourite thought leaders is Heidi Gardner, a former McKinsey consultant and Harvard Business School professor now lecturing at Harvard Law School. She has spent over a decade conducting in-depth studies of numerous global professional service firms. Her research with clients and the empirical results of her studies demonstrate clearly and convincingly that collaboration pays, for both professionals and their firms. In her book Smart Collaboration, she shows that firms earn higher margins, inspire greater client loyalty, attract and retain the best talent, and gain a competitive edge when specialists collaborate across functional boundaries. The Cross-Team Collaboration widget enables you to measure if this is actually happening, and is one of the most important widgets connecting business outcomes with the adoption of your enterprise social network.

Specifically, in terms of problem solving, there will be problems that traverse business unit boundaries. For example, a customer support problem may appear to be an operations problem, but perhaps the genesis of the problem lies with Sales or Marketing, in how a product or service was represented to the customer in the first place. Supply chain problems, likewise, are by definition inter-dependent and cannot be solved by a single business unit. The Cross-Team Collaboration widget can signal whether these cross-business unit problems are being addressed as a shared problem. If a cross-business unit problem has been hashtagged, it is also possible to use the SWOOP Topic tab to identify where the participants in the tagged problem-solving activity are coming from. Are they appropriately cross-business unit?

Summary

Bridging the ‘sharing’ to ‘solving’ divide requires a stronger focus on what the business is trying to achieve. What are the key problems or challenges that must be met? What are the specific collaborative interactions between the different organisational units that will be required to solve them? The SWOOP Cross-Team Collaboration widget, along with the Response Rate and Influential People widgets, has been designed to help you bridge the ‘Sharing’ to ‘Solving’ divide.

This post continues our series on key SWOOP indicators.

 

Influential People – SWOOP Style

 

In this series of articles, we are profiling each of the SWOOP Analytics widgets by referencing them against the Enterprise Social Maturity Framework that we introduced previously. The SWOOP analytics widgets are designed to guide our end users through each stage of the maturity journey. The ‘Influential People’ widget is most valuable when you are looking to solve difficult problems and/or drive new innovations to positive outcomes:

[Figure: Enterprise Social Maturity Framework]

Influential people, as the name suggests, are those people best positioned to influence others through their interactions. Platforms like LinkedIn and Twitter typically use the popularity of published content to measure influence. In LinkedIn’s case, profile views contribute strongly to your perceived influence. SWOOP uses a different basis for measuring influence, drawn from the science of social network analysis (SNA). SNA bases influence on the size and nature of one’s connections. An individual’s influence in SWOOP is measured by the size of their personal network.

Your Personal Network Map, which can be viewed on your personal tab, is a visual representation of your full network. At the Group, Business Unit or Topic level, influence is measured by an individual’s network within that group or business unit, or among those engaging with a given topic.

[Figure: Personal Network Map]

A network connection is formed when you interact with someone online. It could be a ‘reply’ or ‘like’ you have made to a post, or vice versa. Activity levels are not considered; only the unique connections made.  
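
As a simple illustration of this ‘unique connections, not activity counts’ principle, the sketch below derives a person’s network size from a hypothetical list of interaction events; the names and data format are ours, not SWOOP’s.

```python
# Illustrative interaction events: (actor, target) for each reply, like or mention.
interactions = [
    ("alice", "bob"),    # alice replies to or likes a post by bob
    ("alice", "bob"),    # a repeat activity does not add a new connection
    ("carol", "alice"),
    ("alice", "dana"),
]

def personal_network(person, events):
    """Return the set of unique people connected to `person` by any interaction."""
    network = set()
    for actor, target in events:
        if actor == person:
            network.add(target)
        elif target == person:
            network.add(actor)
    return network

print(len(personal_network("alice", interactions)))  # 3 unique connections, despite 4 activities
```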

Business Imperative 

If you want to influence the activities of a group of people, the most efficient way is to engage with those that are best placed to influence them. Influence propagates through relationship links. Enrolling the influencers in your target audience can accelerate the change that you are seeking. You can aim to become an influencer yourself by looking to expand your network within your target audience. If you are identified as an influencer, it is important to use your privileged location in the network to bring others into the network, i.e. being the Catalyst/Engager, ensuring diverse points of view are accommodated.

Influencers can play a big part in helping their organisations to become more responsive. Their central position in the network enables them to become important role models by being personally responsive to problems they see. Influencers need not be able to solve the problems themselves, but they are ideally placed to identify those in their network that can. 

 

Smart Collaboration = Smart Money

[Image: Smart Collaboration book artwork]

‘Smart Collaboration’ is the title of Harvard’s Heidi Gardner’s latest book. The book builds and expands on her well-cited HBR article “When Senior Managers Won’t Collaborate”, where she presents some compelling data demonstrating that collaboration does pay, big time. Her network representation comparing the networks of two lawyers, with Lawyer 2 generating much higher revenues from her larger and more diverse network, may seem quite logical. Additionally, she shows that greater peer-to-peer collaboration does indeed generate much higher revenue levels; the key measure of success for most advisory firms. But those who have worked in partner-led advisory firms will understand that the tribal norm of ‘Eat what you Kill’ can actively work against cross-enterprise collaboration. Gardner’s research will hopefully go a long way toward convincing the leaders of advisory organisations that it is time to abandon this tradition. In the book she acknowledges and addresses head-on the challenges ahead.

In a decade of conducting survey-based Organisational Network Analysis (ONA) projects around the world and across many industry sectors, we have found that it is the partner-led organisations that fall most strongly into the ‘tribal’ area (High Cohesion/Low Diversity) of our Network Performance framework.

[Figure: Network Performance framework, representative client results plotted by cohesion and diversity]

The above graph plots a representative set of results from client projects undertaken over a decade. In our surveys we use a common question, “Who do you rely on to get your work done?”, to identify person-to-person relationships. We then look at the proportion of reciprocated (two-way) relationships to devise a cohesion score. The y-axis diversity score is determined by the proportion of cross-departmental activity, similar to Gardner’s ‘cross-practice’ measures for consulting organisations. The bottom right region (High Cohesion/Low Diversity) is populated by advisory firms, i.e. consulting, engineering and the like. Our advice to these firms mirrors Gardner’s: grow the diversity of their work teams without sacrificing the existing levels of cohesion. This is easier said than done; as you can see from the above data, diversity and cohesion are often traded off against each other, yet this doesn’t have to be the case.
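
For readers who like to see the arithmetic, here is an illustrative sketch of how a cohesion score (the share of reciprocated nominations) and a diversity score (the share of cross-departmental nominations) could be computed from responses to that survey question. The data and exact formulas are simplified assumptions rather than the production method used in our consulting work.

```python
# Each record: (who nominated, whom they rely on, nominator's dept, nominee's dept).
nominations = [
    ("ann", "bob", "Tax", "Tax"),
    ("bob", "ann", "Tax", "Tax"),        # reciprocated pair
    ("ann", "carl", "Tax", "Audit"),     # cross-departmental, not reciprocated
    ("dave", "bob", "Advisory", "Tax"),  # cross-departmental, not reciprocated
]

pairs = {(a, b) for a, b, _, _ in nominations}
reciprocated = sum(1 for a, b in pairs if (b, a) in pairs)
cohesion = reciprocated / len(pairs)  # share of relationships that are two-way
diversity = sum(1 for _, _, d1, d2 in nominations if d1 != d2) / len(nominations)

print(f"cohesion={cohesion:.2f}, diversity={diversity:.2f}")  # cohesion=0.50, diversity=0.50
```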

So How Can Partner-Led Firms be Disrupted by Smart Collaboration?

In her book, Gardner uses role archetypes to characterise the different behavioural dimensions typically found in partner-led organisations. Interestingly, we find a strong correspondence with our own ONA characterisations and, more recently, our online collaboration personas.

Gardner also identifies the increasing use of collaborative software platforms by professional services firms to help ‘break down the silos’, to better facilitate ‘smart collaboration’. While we agree with the principle, the devil can be in the detail. Without a supporting collaborative culture, these platforms can be used to actually reinforce existing silos. We have seen many instances of teams creating private groups on the pretense of ‘competitive sensitivities’; sometimes warranted, but more often not.

The following chart overlays Gardner’s archetypes onto our Personal Networking Performance framework, to provide a link to our network-centric perspective. We also overlay our online networking personas to extend the view to the online collaboration environment within which SWOOP’s analytics operate.

[Figure: Gardner’s archetypes and SWOOP’s online personas mapped onto the Personal Networking Performance framework]

As we can see, there is a direct mapping between Gardner’s archetypes and our networking archetypes. The ‘Seasoned Collaborator’ is Gardner’s key role supporting ‘Smart Collaboration’. Likewise, our ‘Ambassador’ plays the key brokering role in networks, bridging the diversity/cohesion divide. The ‘Solo Specialists’, like our ‘Specialists’, have strong, cohesive, yet localised networks. The ‘Ringmaster’, like our ‘Agent’, plays an oversight role. They have diverse networks, but not necessarily the power to drive positive actions to the same degree as the ‘Seasoned Collaborators/Ambassadors’. Finally, the ‘Contributor’ and ‘Practitioner’ have both limited diversity and limited cohesion in their networks. They are more often younger staff or newcomers to the organisation, or staff who are comfortable to ‘do their bit’ without trying to ‘push the envelope’.

By extending this framework to the online world, we are escalating the analytics from ‘snapshot’, project-based analyses to the real-time online analytics that SWOOP provides. Online analytics can measure and monitor ‘Smart Collaboration’ in process. We have now benchmarked close to 50 organisations on a series of collaboration indices, which include the behavioural personas indicated. The correspondence is not one-to-one, but it is nevertheless informative.

The ‘Engager’ maps closely to the ‘Seasoned Collaborator/Ambassador’ archetype, identifying those participants whose online networks are both diverse and cohesive. The important ‘Catalyst’ persona instigates interactions. Catalysts are key to growing online communities, but are not always the ones who broker connections; hence they sit between the ‘Agent/Ringmaster’ and the ‘Ambassador/Seasoned Collaborator’. The ‘Responder’ persona will regularly have a diverse, yet less cohesive, online network. The correspondence with the ‘Agent/Ringmaster’ role is imperfect, as the Responder is more reactive than proactive. The ‘Broadcaster’ tends to prioritise ‘telling’ over ‘discussing’. In this sense the behaviour is similar to the ‘Solo Specialist’, but Broadcasters do not have highly cohesive networks; hence their positioning toward the ‘Contributor’/’Practitioner’. Finally, the ‘Observer’ persona has minimal participation in the online platform and therefore little or no online diversity and cohesion. The Observer is an artefact of online platforms, with little, if any, correspondence to the ‘Contributor/Practitioner’ archetype.

So can digital ‘Smart Collaboration’ disrupt the status quo of the big-end consulting companies? Well, Harvard professor Clayton Christensen thinks so. He wrote an article, ‘Consulting on the Cusp of Disruption’, back in 2013, citing clients’ drive for more transparency and also the increased availability of big data and predictive analytics. More recently, an article on ‘Big Four firms face tsunami of threats from digital groups’ also explores the digital disruption potential. And of course Heidi Gardner’s ‘Smart Collaboration’ might be framed as a helpful guide to partner-led advisory firms, but it could also be read as an ‘if you don’t, someone else will’ warning to the incumbents.

Final Comments

In this article we aimed to draw linkages between Heidi Gardner’s recent work on ‘Smart Collaboration’ and our own organisational network analysis consulting work. We both used survey techniques to elicit our insights, though Gardner also drew from personal interviews and observations. Extending these insights to those that can be drawn from online interactions is still a work in progress. Unlike surveys, interviews and observation, online analytics has to create its insights through more indirect means. That said, the wealth and volume of data available online swamps what can be gained from traditional surveys and interviews. At SWOOP we have collected and analysed collaboration data from more organisations in less than two years than we did in a decade of consulting projects. And while consulting projects are often necessarily constrained to a limited scope, the online analysis, drawing its data from collaborative online platforms, covers the full breadth of these organisations.

We are excited by the potential for online analytics to facilitate ‘Smart Collaboration’ in real-time. Watch this space for updates on our collaboration benchmarking research.

Yammer Benchmarking Insights #3 – Collaboration at the Personal Level

In this episode we drill down to the most detailed level: that’s you, the individual collaborator.

At SWOOP we have designed behavioural personas that characterise individual collaboration patterns based on your pattern of activity. For example, if you are a Catalyst, you are good at getting responses to your posts. Catalysts are important for energizing a community and driving the engagement of others. If you are a Responder, you are good at responding to other people’s posts. Responders are important for sustaining a community and extending discussions. An Engager balances Catalyst and Responder behaviour and is the persona to aspire to: the Engager effectively balances what they give to others in the form of posts, replies, likes and so on with what they receive from others, and is therefore well placed to broker new relationships. Broadcasters tend to post without engaging in conversations. Observers are simply not very active, with less than a single activity every two weeks. We see Broadcasting and Observing as negative personas.
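
As a rough illustration of how such a classification could be automated, the sketch below maps simple activity counts to the five personas. The thresholds and field names are illustrative assumptions, not SWOOP’s published rules.

```python
def classify_persona(posts, replies_given, replies_received, activities_per_fortnight):
    """Map simple activity counts to a persona label (illustrative thresholds only)."""
    if activities_per_fortnight < 1:
        return "Observer"        # less than one activity every two weeks
    if posts > 0 and replies_given == 0 and replies_received == 0:
        return "Broadcaster"     # posts, but no conversation in either direction
    given, received = posts + replies_given, replies_received
    if received and given and 0.5 <= received / given <= 2.0:
        return "Engager"         # roughly balanced give and take
    if replies_received > replies_given:
        return "Catalyst"        # attracts responses to their posts
    return "Responder"           # mostly responds to other people's posts

print(classify_persona(posts=8, replies_given=1, replies_received=20, activities_per_fortnight=5))  # Catalyst
print(classify_persona(posts=2, replies_given=10, replies_received=3, activities_per_fortnight=6))  # Responder
```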

What does an organisation’s portfolio of Personas typically look like? The results below are drawn from our benchmarking of close to 40 organisations. The lines indicate the minimum-maximum range and the blue square is the average score.

[Figure: Persona proportions across benchmarked organisations]

The large range of % Observers, from less than 10% to over 70%, may reflect the large variation in maturity amongst the organisations we have benchmarked. Maturity may not be the whole story, though: it is fair to say that smaller organisations have an easier time engaging a higher proportion of their staff with the Enterprise Social Network (ESN). The break-up of the active (non-Observer) Personas shows that Catalysts lead the way with just over 40%, followed by Responders at just under 30%, Engagers at just over 20% and Broadcasters at 10%. This would indicate that, in general, ESNs are relying on Catalysts to drive participation and Responders to sustain it.

Personas within Groups

Given that groups are the space where most of the intense collaboration is likely to happen, we were interested in what the Persona patterns were for the leaders of the best performing groups. We used a combination of two-way connection scores and activity scores to identify the strongest groups. We then applied the same measures to the group members to identify the group leaders. In other words, a group leader is someone who has a high number of two-way connections with other group members, and meets a threshold level of overall activity.
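
A simplified sketch of that selection logic might look like the following; the member records, the activity threshold and the ‘top 5’ cut-off are invented for illustration.

```python
# name: (two-way connections within the group, network size in group, activity count)
members = {
    "lena":  (14, 30, 120),
    "marco": (11, 25, 90),
    "priya": (2, 12, 15),
    "tom":   (9, 28, 8),   # well connected, but barely active this period
}

ACTIVITY_THRESHOLD = 20
leaders = sorted(
    (name for name, (two_way, _, activity) in members.items() if activity >= ACTIVITY_THRESHOLD),
    key=lambda n: members[n][0],  # rank the sufficiently active members by two-way connections
    reverse=True,
)[:5]

print(leaders)  # ['lena', 'marco'] under these illustrative numbers
```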

Firstly, we plotted all members on a graph, locating them by the size of their network within the group (y-axis) and the number of two-way connections they have in the group (x-axis). The bubbles are sized by their relative levels of interaction (activity). As you can see, the group leaders are clearly identified in the top right-hand corner of the graph as differently coloured nodes.

[Figure: Group leader Persona movements, plotted weekly over six months]

Secondly, we then plotted the top five leaders’ Persona movements in one-week intervals over a six-month period. In the example above you can see that the leaders played primarily the Catalyst, Engager and Responder roles. The size of the bubbles reflects their relative number of connections made (breadth of influence) for that week. Not all leaders were active every week. What becomes interesting is that we find some leaders have preferred Personas that are sustained over time. Leaders 1 and 4 in this case have a preference for Catalysing and Engaging. Leader 5 prefers Responding. Leaders 2 and 3 appear to be comfortable switching between Personas.

What appears to be important here is that high performing groups need leaders that can cover the spectrum of positive Personas i.e. Catalyst, Engager, Responder. While it’s fine to have leaders who have a preference for a certain behavioural Persona, it is useful to have leaders who can adapt their Persona to the situation or context at hand.

Personal Networking Performance

At SWOOP we use a fundamental network performance framework, which measures performance against the complementary dimensions of cohesion and diversity. We have indicated that individuals with a large number of two-way connections are likely to have more closed and cohesive networks. Cohesive networks are good for getting things done (executing/implementing). From an innovation perspective however, closed networks can be impervious to new ideas. The best ideas come from more open and diverse networks. In our view therefore, maximum network performance occurs by optimising diversity and cohesion. In other words, it’s good to be part of a strong cohesive network, but this should not be at the expense of maintaining a healthy suite of more diverse connections.

In the graphic below we have plotted the members of one large group on the Network Performance graph. In this case the diversity is measured by the number of different groups that an individual has participated in. The size of the bubbles reflects the size of the individual’s network (breadth of influence).

[Figure: Members of one large group plotted on the Network Performance graph]

We have labelled regions in the graph according to our Explore/Engage/Exploit model of innovation through networks. We can see that the majority of group members sit in the ‘High Diversity/Low Cohesion’ Explore region. This is consistent with the reasons many people give for joining a group. The ‘Engage’ region shows those members who are optimising their diversity/cohesion balance. These are the most important leaders in the group. In an innovation context, these people are best placed to broker the connections required to take a good idea into implementation. The bottom right corner is the Exploit region, which for this group is fairly vacant. This might suggest that this group would have difficulty organically deploying an innovation. They would need to take explicit steps to engage an implementation team to execute on the new products, services or practices that they initiate.

The Innovation Cycle – Create New Value for Your Organisation

We conclude this third edition of Yammer Benchmarking Insights by reinforcing the role that individuals can play in creating new value for their organisations. For many organisations, ESNs like Yammer are seen as a means of accelerating the level of innovation that is often stagnating within the formal lines of business.

As individuals, we may have a preference for a given style of working, as characterised by our Personas. Your personal networks may be large, open and diverse; or smaller, closed and cohesive; or indeed somewhere in between. It is important, however, to see how your collaboration behaviours contribute to the innovation performance of your organisation. Innovation is a collaborative activity, and therefore we recommend that in your groups you:

  1. Avoid lone work (Observing/Broadcasting) and look to explore new ideas and opportunities collaboratively, online (Catalysing/Engaging/Responding).
  2. Recognise that implementing good ideas needs resources, and those resources are owned by the formal lines of business. Use your network to engage with the resource holders. Make the connections. Influence on-line and off-line.
  3. When you have organisational resources behind you, it’s time to go into exploit mode. Build the cohesive focussed teams to execute/implement, avoiding distractions until the job is done.

 

Data-Driven Collaboration Part 2: Recognizing Personas and Behaviors to Improve Engagement

In Part 1 of this series, “Data-Driven Collaboration Design”—a collaboration between Swoop Analytics and Carpool Agency—we demonstrated how data can be used as a diagnostic tool to inform the goals and strategies that drive your business’ internal communication and collaboration. 

In this post, we will take that thought one step further and show how, after your course is charted to improve internal communication and collaboration, your data continues to play a vital role in shaping your journey.

Monitoring More Than Participation

Only in the very initial stages of the launch of a new Enterprise Social Network (ESN) or group do we pay any attention to how much activity we see. Quickly, we move to watching such metrics as average response time; breadth of participation across the organization, teams, roles, or regions; and whether conversations are crossing those boundaries. We focus on measures that show something much closer to business value and motivate organizations to strengthen communities.
For our purposes in this post, it will be useful to pivot our strategy to one that focuses on influential individuals. The community or team—whether it's a community of practice, a community of shared interest, or a working team—isn't a "group" or "site," but a collection of individuals, with all the messiness, pride, altruism, and politics implied. Data can be used to layer some purpose and direction over the messiness.

Patterns Become Personas

The Swoop Social Network Analytics dashboard uniquely provides analytics that are customized to each person who is part of an organization's ESN. Using the principle of "when you can see how you work, you are better placed to change how you work", the intent is for individual collaborators to receive real-time feedback on their online collaboration patterns so they can respond appropriately.
We analyzed the individual online collaboration patterns across several organizations and identified a number of distinct trends that reflect the majority of personal collaboration behaviors. With that data, we were able to identify five distinct personas: Observers, Engagers, Catalysts, Responders, and Broadcasters.

In addition to classifying patterns into personas, we developed a means of ranking the preferred personas needed to enhance an organization's overall collaboration performance. At the top we classify the Engager as a role that can grow and sustain a community or team through their balance of posting and responding. This is closely followed by the Catalyst, who can energize a community by provoking responses and engaging with a broad network of colleagues. The Responder ensures that participants gain feedback, which is an important role in sustaining a community. The Broadcaster is mostly seen as a negative persona: they post content, but tend not to engage in the conversations that are central to productive collaboration. Finally, we have the Observers, sometimes also called 'lurkers'. Observers are seen as a negative persona with respect to collaboration. While they may indeed be achieving individual learning from the contributions of others, they are not explicitly collaborating.
Using Personas to Improve Your Online Collaboration Behavior
Individuals who log in to the Swoop platform are provided with a privacy-protected personal view of their online collaboration behaviors. The user is provided with their persona classification for the selected period, together with the social network of relationships that they have formed through their interactions:

You may notice that the balance between what you receive and what you contribute is central to determining persona classification. Balanced contributions amongst collaboration partners have been shown to be a key characteristic of high performing teams, hence the placement of the ‘Engager’ as the preferred persona.
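
To illustrate the idea of give/receive balance, here is a minimal sketch. The 0.5 to 2.0 'balanced' band and the field names are our illustrative assumptions for this post, not Swoop's actual formula.

```python
def give_receive_balance(given, received):
    """given/received: counts of posts, replies and likes contributed vs. attracted."""
    if received == 0:
        return float("inf") if given else 0.0
    return given / received

ratio = give_receive_balance(given=18, received=15)
print(f"balance ratio = {ratio:.2f}",
      "(roughly balanced)" if 0.5 <= ratio <= 2.0 else "(one-sided)")
```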

Our benchmarking of some 35 Yammer installations demonstrates that 71% of participants, on average, are Observers. Of the positive personas, the Catalyst is the most common, followed by Responders, Engagers, and Broadcasters. It’s therefore not surprising that an organization’s priority often involves converting Observers into more active participants. Enrolling Observers into more active personas is a task that falls on the more-active Engagers and Catalysts, with Responders playing a role of keeping them there.
At Carpool, during a recent engagement with a client, we encountered a senior leadership team that was made up of Broadcasters who relied on traditional internal communications. Through our coaching—all the while showing them data on their own behavior and the engagement of their audience—they have since transformed into Catalysts.
One team, for example, had been recruiting beta testers through more traditional email broadcasts. But after just a few posts in a more interactive and visible environment, where we taught them how to invite an active conversation, they have seen not only the value of more immediate feedback, but a larger turnout for their tests. Now, it’s all we can do to provide them with all the data they’re asking for!
Identifying the Key Players for Building Increased Participation

When Swoop looks at an organization overall, we will typically find that a small number of participants are responsible for the lion’s share of the connecting and networking load. In the social media world, these people are called ‘influencers’ and are typically measured by the size of the audience they can attract. In our Persona characterization, we refer to them as Catalysts. Unlike the world of consumer marketing—and this point is critical—attracting eyeballs is only part of the challenge. In the enterprise, we need people to actively collaborate and produce tangible business outcomes. This can only happen by engaging the audience in active relationship-building and cooperative work. This added dimension of relationship-building is needed to identify who the real key players are.
In our work with clients, Carpool teaches this concept by coaching influencers to focus on being “interested” in the work of others rather than on being “interesting” through the content they share, whether that’s an interesting link or pithy comment. With one client, our strategy is to take an organization’s leader, a solid Engager in the public social media space, and “transplant” him into the internal communications environment where he can not only legitimize the forum, but also model the behavior we want to see.
In the chart below, we show a typical ‘Personal Network Performance’ chart, using Enterprise Social Networking data from the most active participants in an enterprise. The two dimensions broadly capture an individual’s personal network size (number of unique connections) against the depth of relationships they have been able to form with them (number of reciprocated two-way connections). They reflect our Engager persona characteristics. Additionally, we have sized the bubbles by a diversity index assessed by their posting behavior across multiple groups.
The true ‘Key Players’ on this chart can be seen in the top right-hand corner. These individuals have not only been able to attract a large audience, but also engaged with that audience and reciprocated two-way interactions. And the greater their diversity of connections (bubble size), the more effective they are likely to be.

Data like this is useful in identifying current and potential key players and organizational leaders, and helps us shift those online collaboration personas from Catalyst to Engager and scale up as far and as broadly as they can go.

Continuous Coaching

Having data and continuous feedback on your online collaboration performance is one thing, but effectively taking this feedback and using it to build both your online and offline collaboration capability requires planning and, of course, other people to collaborate with! Carpool believes in a phased approach, where we change the behavior of a local team and then, like ripples in a pond, expand the movement to new ways of working through compelling storytelling, using the data that has driven previous waves of change.
To get started now, think about your own teams. Would you be prepared to have your team share their collaboration performance data and persona classifications? Are you complementing each other, or competing? If that’s a little too aggressive, why not form a “Working Out Loud” circle with some volunteers where you can collectively work on personal goals for personal collaboration capability, sharing, and critiquing one another’s networking performance data as you progress?
Think about what it takes to move from one behavior Persona to another. How would you accomplish such a transformation, personally? What about the teams you work in and with? Then come back for the next, and final, part of this co-authored series between Swoop and Carpool, where we will explain the value in gaining insights from ongoing analytics and the cycle of behavior changes, analysis, and pivoting strategies.

Data-Driven Collaboration Part 1: How Rich Data Can Improve Your Communication

Originally published on Carpool.

This is the first of a series, coauthored by Laurence Lock Lee of Swoop Analytics and Chris Slemp of Carpool Agency, in which we will explain how you can use rich, people-focused data to enhance communication, increase collaboration, and develop a more efficient and productive workforce.

It’s safe to say that every enterprise hungers for new and better ways of working. It’s even safer to say that the path to those new and better ways is often a struggle.

Many who struggle do so because they are starting from a weak foundation. Some are simply following trends. Others believe they should adopt a new tool or capability simply because it was bundled with another service. Then there are those organizations that focus primarily on “reining in” non-compliant behaviors or tools.

But there’s a way to be innovative and compliant that also improves your adoption: focus instead on the business value of working in new ways—be data-driven. When you incorporate information about your usage patterns to set your goals, you are better positioned to track the value of your efforts and drive the behavior changes that will help you achieve your business objectives.

While it’s assumed that doing market research is critical when marketing to customers, investments in internal audience research have gained less traction, yet they yield the same kinds of return. Data-driven internal communication planning starts at the very beginning of your project.

Here we will demonstrate—using real-world examples—how Carpool and Swoop use data to create better communications environments, nurture those environments, and make iterative improvements to ensure enterprises are always working to their full potential.

Use Data to Identify Your Actual Pain Points

One team Carpool worked with was focused on partnering with customers and consultants to create innovations. They thought they needed a more effective intranet site that would sell their value to internal partners. However, a round of interviews with key stakeholders and end-of-line consumers revealed that a better site wasn’t going to address the core challenge: There were too many places to go for information and each source seemed to tell a slightly different story. We worked with the client to consolidate communications channels and implemented a more manageable content strategy that focused on informal discussion and formal announcements from trusted sources.

In the end, we were able to identify the real pain point for the client and help them address it accordingly because of the research we obtained.

Use Data to Identify New Opportunities

Data can drive even the earliest strategy conversations. In Carpool’s first meeting with a global retail operation, they explained that they wanted to create a new Yammer network as they were trying to curb activity in another, unapproved network. Not only did we agree, but we brought data to that conversation that illustrated the exact size and shape of their compliance situation and the nature of the collaboration that was already happening. This set the tone for a project that is now laser-focused on demonstrating business value and not just bringing their network into compliance.

Use Data to Identify and Enhance Your Strengths

In-depth interviews can be added to the objective data coming from your service usage. Interviews reveal the most important and effective channels, and the responses can be mapped visually to highlight where a communication ecosystem has broadcasters without observers, or groups of catalysts who are sharing knowledge without building any broader consensus or inclusion.

Below, you see one of Carpool’s chord chart diagrams we use to map the interview data we gather. We can filter the information to focus on specific channels and tools, which we then break down further to pinpoint where we have weaknesses, strengths, gaps, and opportunities in our information flow.

CHORD CHART

Turning Data Into Action

These kinds of diagnostic exercises can reveal baselines and specific strategies that can be employed with leaders of the project or the organization.

One of the first activities organizations undertake when implementing an Enterprise Social Networking (ESN) platform is to encourage staff to form collaborative groups and then move their collaboration online. This is the first real signal of 'shop floor empowerment', where staff are free to form groups and collaborate as they see fit, without the oversight of their line management. As these groups form, the inevitable 'long tail' effect kicks in: the vast majority of these groups fall into disuse, in contrast to a much smaller number that are wildly successful, achieving all of the expectations for the ESN. So how can organizations increase their win/loss ratio? At Swoop Analytics we have started to look at some of the 'start-up' patterns of the Yammer installations of our benchmarking partners. These patterns can emerge after as little as 6 months of operation.

Below, we show a typical first 6 months’ network performance chart, which measures group performance on the dimensions of Diversity (Group Size), Cohesion (Mean 2-Way Relationships formed), and Activity (postings, replies, likes etc.). We then overlay the chart with ‘goal state’ regions reflecting the common group types typically found in ESN implementations. The regions reflect the anticipated networking patterns for a well-performing group of the given type. If a group’s stated purpose positions them in the goal-state region, then we would suggest that they are well positioned to deliver tangible business benefits, aligned with their stated purpose. If they are outside of the goal state, then the framework provides them with implicit guidance as to what has to happen to move them there.

BUBBLE GRAPH

At launch, all groups start in the bottom left-hand corner. As you can see, a selected few have ‘exploded out of the blocks’, while the majority are still struggling to make an impact. The 6-month benchmark provides an early opportunity for group leaders to assess their group against their peer groups, learn from each other, and then begin to accelerate their own performances.

Painting the Big Picture

The convergence of multiple data sources paints a holistic picture of communication and collaboration that extends beyond team boundaries. This new picture extends across platforms and prescribes the design for an ecosystem that meets user and business needs, aligns with industry trends, and is informed by actual usage patterns.

ECOSYSTEM DESIGN

The discussion about the ROI of adopting new ways of working, such as ESNs, hasn’t disappeared. While we believe it’s a waste of resources to try measuring a return from new technologies that have already been proven, it’s clear that developing business metrics and holding these projects accountable to them is just as critical as any effort to increase productivity.

The nature of these metrics also needs to shift from a focus on “counts and amounts” to measures of a higher order that tie more closely to business value. For example, knowing that posting activity has risen by 25% in a year may make you feel a little better about your investment in a collaboration platform. Knowing that there is a higher ratio of people engaging vs. those who are simply consuming is much better. Showing a strong correlation in departments that have higher percentages of engaged users with lower attrition rates … that’s gold.

So now is the time to look at your own organization and wonder: “Do I track how my people are connecting? Do I know how to help them become more engaged and productive? When was the last time I measured the impact of my internal communication ecosystem?”

Then take a moment to imagine the possibilities of what you could do with all of that information.

Stay tuned in the coming weeks for Part 2 and Part 3 when we address the topics of driving engagement by identifying types of enterprise social behavior in individuals, and the results we’ve seen from being data-driven in how we shape internal communications and collaboration.

Are we Getting Closer to True Knowledge Sharing Systems?

[Image: knowledge systems]

(image credit: https://mariaalbatok.wordpress.com/2015/02/10/religious-knowledge-systems/)

First generation knowledge management (KM) systems were essentially re-labelled content stores. Labelling such content as ‘knowledge’ did much to discredit the whole Knowledge Management movement of the 1990s. During this time, I commonly referred to knowledge management systems as needing to comprise both “collections and connections”, but we had forgotten about the “connections”.  This shortcoming was addressed with the advent of Enterprise Social Networking (ESN) systems like Yammer, Jive, IBM Connect and now Workplace from Facebook. So now we do have both collections and connections. But do we now have true knowledge sharing?

Who do we Rely on for Knowledge Based Support?

A common occupation for KM professionals is to try to delineate a boundary between information, which can be effectively managed in an information store, and knowledge, which is implicitly and tacitly held by individuals. Tacit knowledge, arguably, can only be shared through direct human interaction. In our Social Network Analysis (SNA) consulting work we regularly surveyed staff on who they relied on to get their work done. We stumbled on the idea of asking them to qualify their selections by choosing only one of:

  • They review and approve my work (infers a line management connection)
  • They provide information that I need (infers an information brokering connection)
  • They provide advice to help me solve difficult problems (infers a knowledge based connection)

The forced choice was key. It proved to be a great way of delineating the information brokers from the true knowledge providers and the pure line managers. When we created our ‘top 10 lists’ for each role, there was regularly very little overlap. For organisations, the critical value in these nominations is that the knowledge providers are the hardest people to replace, and therefore it is critical to know who they are. And who they are is not always apparent to line management!

So how do staff distribute their connection needs amongst line managers, information brokers and knowledge providers? We collated the results of several organisational surveys, comprising over 35,000 nominations using this identical question, and came up with the following:

[Figure: Distribution of nominations across line managers, information brokers and knowledge providers]

With knowledge providers attracting 50% of the nominations, the results reinforce the perception that knowledge holders are critical to any organisation.

What do Knowledge Providers Look Like?

So what is special about these peer-identified knowledge providers? Are they the ‘wise owls’ of the organisation, with long experience spanning many different areas? Are they technical specialists with deep knowledge of fairly narrow areas? We took one organisation’s results and assessed the leaders in each of the Approve/Review, Information and Knowledge/Advice categories, looking for their breadth, or diversity, of influence. We measured this as the percentage of connections nominating them as an important resource that came from outside their home business unit. Here are the results:

[Figure: Percentage of nominations from outside the home business unit, by category]

As we might anticipate, the inferred line managers had the broadest diversity of influence. The lowest percentage is for the knowledge providers, suggesting that it is not the broadly experienced wise old owls, but those specialising in relatively narrow areas, that people look to for knowledge and advice.
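
For completeness, here is an illustrative calculation of that ‘breadth of influence’ measure: the percentage of a person’s nominations that come from outside their home business unit. The records are made up for the example.

```python
# Each record: (nominee, nominee's home unit, nominator's unit).
nominations = [
    ("kim", "Engineering", "Engineering"),
    ("kim", "Engineering", "Sales"),
    ("kim", "Engineering", "Operations"),
    ("raj", "Finance", "Finance"),
    ("raj", "Finance", "Finance"),
]

def external_share(person, records):
    mine = [(home, source) for who, home, source in records if who == person]
    external = sum(1 for home, source in mine if source != home)
    return 100.0 * external / len(mine)

print(f"kim: {external_share('kim', nominations):.0f}% external nominations")  # 67%
print(f"raj: {external_share('raj', nominations):.0f}% external nominations")  # 0%
```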

Implications for Knowledge Sharing Systems

We have previously written about our Network Performance Framework, where performance is judged based on how individuals, groups, or even full organisations balance diversity and cohesion in their internal networks:

[Figure: Network Performance Framework, balancing diversity and cohesion]

The above framework identifies ‘Specialists’ as those who have limited diversity but a strong following, i.e. many nominations as a key resource. These appear to be the people identified as critical knowledge providers.

The question now is whether online systems are identifying and supporting specialists to share their knowledge. At SWOOP we have aimed to explore this question initially by using a modification of this performance framework on interaction data drawn from Microsoft Yammer installations:

[Figure: Modified performance framework applied to Yammer interaction data]

We measured each individual’s diversity of connections (y-axis) from their activities across multiple Yammer groups. The x-axis identifies the number of reciprocated connections an individual has, i.e. stronger ties, with the size of their personal network shown by the size of the bubble representing them. We can see that we have been able to identify, from their Yammer activities, those selected few ‘Specialists’ in the lower-diversity/stronger-cohesion quadrant. These specialists all have relatively large networks of influence.

What we might infer from the above analysis is that an ESN like Yammer can identify the prospective knowledge providers that staff are seeking out for knowledge transfer. But the bigger question is whether actual knowledge transfer can happen solely through an ESN like Yammer.

Is Having Systems that Provide Connections and Collections Enough to Ensure Effective Knowledge Sharing?

The knowledge management and social networking research is rich with studies addressing the question of how social network structure impacts on effective knowledge sharing. While an exhaustive literature review is beyond the scope of this article, for those inclined, the article Network Structure and Knowledge Transfer: The Effects of Cohesion and Range is representative. Essentially this research suggests that ‘codified’ knowledge is best transferred through weak ties, but tacit knowledge sharing requires strong-tie relationships. Codified knowledge commonly relates to stored artefacts like best-practice procedural documents, lessons-learned libraries, case studies and perhaps even archived online Q&A forums. Tacit knowledge by definition cannot be codified, and therefore can only be shared through direct personal interactions.

I would contend that relationships formed solely through ESN interactions, or in fact any electronic systems like chat, email, etc., would be substantially weaker than those generated through regular face-to-face interactions. Complex tacit knowledge would need frequent and regular human interactions. It is unlikely that the strength of tie required to effectively share complex knowledge can be achieved solely through commonly available digital systems. What ESNs can do effectively is help identify who you should be targeting as a knowledge sharing partner. Of course this situation is changing rapidly, as more immersive collaboration experiences are developed. But right now: for codified knowledge, yes; for tacit knowledge, not yet.

 

What can we Learn from Artificial Intelligence?

This might seem a strange suggestion: that a science dedicated to learning from how we humans operate could actually return the favour by teaching us about ourselves. Yet as strange as it may sound, this is precisely what I am suggesting.

Having spent a good deal of my early career in the “first wave of AI” I had developed a healthy scepticism of many of the capability claims for AI. From the decade or more I spent as an AI researcher and developer I had come to the conclusion that AI worked best when the domains of endeavour were contained within discrete and well bounded ‘solution spaces’. In other words, despite the sophistication of mathematical techniques developed for dealing with uncertainty, AI was simply not that good in the “grey” areas.

AI’s Second Wave

[Image: AlphaGo]

The “second wave of AI” received a big boost when Google company DeepMind managed to up the ante on IBM’s chess-playing Deep Blue by defeating the world Go champion Lee Sedol. According to DeepMind founder and CEO Demis Hassabis, the success of their program AlphaGo could be attributed to the deeper learning capabilities built into the program, as opposed to Deep Blue’s largely brute-force searching approach. Hassabis emphasizes the ‘greyness’ in the game of Go, as compared to chess. For those familiar with this ancient Chinese game, unlike chess, it has almost a spiritual dimension. I can vividly recall a research colleague of mine, who happened to be a Go master, teaching a novice colleague the game in a lunchtime session, and chastising him for what he called a “disrespectful move”. So AlphaGo’s success is indeed a leap forward for AI in conquering “grey”.

So what is this “deep learning” all about? You can certainly get tied up in a lot of academic rhetoric if you Google this, but for me it’s simply about learning from examples. The two critical requirements are the availability of lots of examples to learn from, and the development of what we call an “evaluation function”, i.e. something that can assess and rate an action we are considering taking. The ‘secret sauce’ in AlphaGo is definitely the evaluation function. It has to be sophisticated enough to be able to look many moves ahead and assess many competitive scenarios before evaluating its own next move. But this evaluation function, which takes the form of a neural network, has the benefit of being trained on thousands of examples drawn from online Go gaming sites, where the final result is known.

Deep Learning in Business

books

We can see many similarities to this context in business. For example, the law profession is founded on precedents, where there are libraries of cases available for which the final result is known. Our business schools regularly educate their students by working through case studies and connecting them to the underlying theories. Business improvement programs are founded on prior experience or business cases from which to learn. AI researchers have taken a lead from this and built machine learning techniques into their algorithms. An early technique that we had some success with is called “Case Based Reasoning”. Using this approach, it wasn’t necessary to articulate all the possible solution paths, which in most business scenarios is infeasible. All we needed was a sufficient store of prior example cases to search through for the cases that most matched the current context, leaving the human user to fill any gaps.
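
A toy sketch of the case-based reasoning idea looks something like this: retrieve the stored case that most resembles the current context and reuse its outcome, leaving a human to adapt it. The cases, features and similarity measure below are entirely illustrative.

```python
cases = [
    {"features": {"industry": "mining", "size": "large", "issue": "safety"},
     "outcome": "rolled out site-level safety huddles"},
    {"features": {"industry": "retail", "size": "small", "issue": "churn"},
     "outcome": "introduced a loyalty programme"},
    {"features": {"industry": "mining", "size": "large", "issue": "downtime"},
     "outcome": "ran a predictive maintenance pilot"},
]

def similarity(a, b):
    """Count matching feature values (a deliberately crude similarity function)."""
    return sum(1 for k in a if a.get(k) == b.get(k))

def closest_case(new_context):
    return max(cases, key=lambda c: similarity(c["features"], new_context))

query = {"industry": "mining", "size": "large", "issue": "turnover"}
print(closest_case(query)["outcome"])  # the best partial match; the human fills the gaps
```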

The Student Becomes the Teacher

Now back to my question: what can AI now teach us about ourselves? Perhaps the most vivid learnings are contained in the reflections of the Go champions that AlphaGo has defeated. The common theme was that AlphaGo was making many unconventional moves that only appeared sensible in hindsight. Lee Sedol stated his personal learning from his 4-1 defeat by AlphaGo in these comments: “My thoughts have become more flexible after the game with AlphaGo, I have a lot of ideas, so I expect good results” and “I decided to more accurately predict the next move instead of depending on my intuition”. So the teacher has now become the student!

It is common for us as human beings to be subject to unconscious bias. We see what is being promoted as a “best practice”, perhaps reinforced by a select few of our own personal experiences, and are then willing to swear by it as the “right” thing to do. We forget that there may be hundreds or even thousands of contrary cases that could prove us wrong, but we stubbornly stick to our original theses. Computers don’t suffer from these very human traits. What’s more, they have the patience to trawl through thousands of cases to fine-tune their learnings. So in summary, what can we learn from AI?

  • Remember that a handful of cases is not a justification for developing hard and fast rules;
  • Before you discount a ‘left field’ suggestion, try to understand the experience base that it is coming from. Do they have experiences and insights that are beyond those of your own close network?
  • Don’t be afraid to “push the envelope” on your own decision making, but be sure to treat each result, good or bad, as contributing to your own growing expertise; and
  • Push yourself to work in increasingly greyer areas. Despite the success of AlphaGo, it is still a game, with artificial rules and boundaries. Humans are still better at doing the grey stuff!


Yammer Benchmarking Edition 1

 

First in a series of SWOOP Yammer Benchmarking video blogs. Swoop has benchmarked some 36 Yammer installations to date. This first video blog shares some insights gained on the important measures that influence collaboration performance.

 

Video script:

SLIDE 1

Hello there

My Name is Laurence Lock Lee, and I’m the Co-Founder and Chief Scientist at Swoop Analytics.

If you are watching this you probably know what we do, but just in case you don’t, Swoop is a social analytics dashboard that draws its raw data from enterprise social networking tools like Yammer and provides collaboration intelligence to its users, who can be anyone in the organisation.

Our plan is to provide an ongoing series of short video blogs specifically on our Yammer benchmarking insights, as we work with the data we collect. We will aim to use this format to keep you apprised of developments as they happen. We have also recently signed a joint research agreement with the Digital Disruption Research Group at the University of Sydney in Australia, so expect to see the results of this initiative covered in future editions.

The Swoop privacy safeguards mean it’s pure, context-free analysis: no organisational names, group names or individual names…we don’t collect them.

SLIDE 2

This is the “Relationships First” framework we designed for our benchmarking. But we also measure traditional activity measures, which we tend not to favour as a collaboration performance measure…but more about that later. The 14 measures help us characterise the organisations we benchmark by comparing them against the maximum, minimum and average scores of those in our sample set, which currently sits at 36 organisations and is growing rapidly. They represent organisations large and small from a full cross-section of industries and geographies.

SLIDE 3

For those of you who have not been exposed to the Swoop behavioural online personas, you will find a number of articles on our blog.

Because I will be referring to them it’s useful to know the connection patterns inferred by each of them. We don’t include the ‘Observer’ persona here as they are basically non-participants.

Starting with the Responder: Responders make connections through responding to other people’s posts or replies. This can be a simple ‘like’, mention or notification…and it often is, but sometimes it can be a full written reply.

In contrast, the Catalyst makes connections through people replying to their posts. A good Catalyst can make many connections through a single good post. Responders have to work a bit harder; they mostly get only one connection per interaction.

The Engager, as you can see, is able to mix giving and receiving. This is a bit of an art, but it is important, as Engagers are often the real connectors in a community or group.

And what about the Broadcaster? Well, if your posts don’t attract any response, then we can’t identify any connections for you.
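
For readers who like to see these ideas made concrete, here is a minimal sketch of how personas such as these might be derived from simple activity counts. The thresholds, field names and the 50% balance test below are illustrative assumptions made for this post, not SWOOP’s actual classification rules.

```python
# A minimal sketch of how behavioural personas might be derived from simple
# activity counts. The thresholds and the 50% balance test are illustrative
# assumptions, not SWOOP's actual classification rules.

def classify_persona(posts: int, replies_given: int, replies_received: int) -> str:
    """Assign a hypothetical persona label based on giving vs. receiving."""
    if posts + replies_given + replies_received == 0:
        return "Observer"        # no participation at all
    if posts > 0 and replies_given == 0 and replies_received == 0:
        return "Broadcaster"     # posts that attract no response, and gives none
    giving = replies_given       # connections made by responding to others
    receiving = replies_received # connections made by others responding to you
    if giving and receiving and abs(giving - receiving) <= 0.5 * max(giving, receiving):
        return "Engager"         # a reasonable balance of giving and receiving
    return "Catalyst" if receiving > giving else "Responder"

# Example usage (hypothetical counts)
print(classify_persona(posts=5, replies_given=1, replies_received=20))   # Catalyst
print(classify_persona(posts=0, replies_given=15, replies_received=2))   # Responder
print(classify_persona(posts=3, replies_given=10, replies_received=8))   # Engager
```

In practice the boundary between Catalyst, Responder and Engager is a matter of degree, which is why the balance test above should be read as a rough heuristic rather than a rule.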

SLIDE 4

This is how we present our benchmarking results to the participants. You can see that we have the 14 dimensions normalised so that the ‘best in class’ result scores 100 points and the worst performance scores zero. The orange points are the scores for the organisation, with lines connecting them to the average scores.

A few points to note. We only count ‘active users’, being those who have had at least one activity in Yammer over the period we analyse, which is the most recent six months.

Some of the measures have asterisks (*), which means the score has been reversed for comparison purposes. For example, a high score for %Observers is actually a bad result, so the scale is flipped before comparison.
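
As a rough illustration of the scoring just described, the sketch below normalises a raw measure against the sample’s best and worst values so that best-in-class scores 100 and the worst scores zero, flipping the scale for asterisked measures. The function name and the sample figures are assumptions for illustration only.

```python
# A minimal sketch of the 0-100 normalisation described above. The function
# name, the `reverse` flag and the sample figures are illustrative
# assumptions about how such a score could be computed.

def normalise_score(value: float, sample: list[float], reverse: bool = False) -> float:
    """Scale a raw measure so that best-in-class = 100 and worst = 0.

    For asterisked measures (e.g. %Observers), where a high raw value is a
    bad result, `reverse=True` flips the scale before comparison.
    """
    lo, hi = min(sample), max(sample)
    if hi == lo:
        return 100.0  # every organisation scored the same
    scaled = (value - lo) / (hi - lo) * 100
    return 100 - scaled if reverse else scaled

# Example: a 40% Observers result in a sample ranging from 20% to 70%
observers_sample = [20.0, 35.0, 40.0, 55.0, 70.0]
print(normalise_score(40.0, observers_sample, reverse=True))  # 60.0
```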

Finally, not all of the measures are independent of each other, so it is possible to see recurring patterns amongst organisations. We can therefore tell the story of an organisation’s journey to date by looking for these patterns. For example, a poor post/reply ratio indicates to us that the network is immature, so we would also expect a high %Observers score.

SLIDE 5

One way of understanding which of the 14 measures are most important to monitor is to look at the relative variance of each measure across the full sample set. Where we see a large relative variance, we might assume that this is an area offering the most opportunity for improvement. In our sample to date it is the two-way connections measure that leads the way; I’ll go into a bit more detail on this later. The %Direction measure relies solely on the use of the ‘notification’ type, which we know some organisations have asked users to avoid, as it’s really just like a cc in an email, so perhaps we can ignore this one to some extent. The Post/Reply measure is, we believe, an indicator of maturity. For a new network we would expect a higher proportion of posts to replies, as community leaders look to grow activity. Over time, however, we would expect the ratio to move toward favouring replies, as participants become more comfortable with online discussions.

It’s not surprising that this measure shows up, as we have quite a mix of organisations at different maturity stages in our sample to date. The areas where we have seen less variance are the behavioural personas, perhaps with the exception of %Broadcasters. This suggests that, at least at the enterprise level, organisations are behaving similarly.
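
To make the idea of relative variance concrete, here is a small sketch that ranks measures by their coefficient of variation (standard deviation divided by the mean) across a sample. All the per-organisation figures are made up; only the ranking approach reflects the discussion above.

```python
# A minimal sketch of ranking measures by relative variance (coefficient of
# variation: standard deviation / mean) across a benchmark sample. All the
# per-organisation figures below are made up for illustration.

import statistics

def relative_variance(values: list[float]) -> float:
    """Return a scale-free measure of spread for one benchmark measure."""
    mean = statistics.mean(values)
    return statistics.pstdev(values) / mean if mean else 0.0

# Hypothetical results for three of the 14 measures across five organisations
sample = {
    "Two-way connections": [12.0, 30.0, 55.0, 8.0, 40.0],
    "Post/Reply ratio":    [0.6, 1.4, 0.9, 2.0, 1.1],
    "%Broadcasters":       [18.0, 22.0, 20.0, 25.0, 19.0],
}

# Measures with the largest relative variance offer the most room to improve
for measure, values in sorted(sample.items(), key=lambda kv: -relative_variance(kv[1])):
    print(f"{measure}: {relative_variance(values):.2f}")
```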

SLIDE 6

This slide is a little more complex, but it matters if you are to gain an appreciation of some of the key relationship measures that SWOOP reports on.

Follow this simple example:

Mr Catalyst here makes a post in Yammer. It attracts a response from Ms Responder and Mr Engager. These responses we call interactions, or activities. By undertaking an interaction, we have also created a connection for all three participants.

Now, Mr Engager’s response was a written reply that mentions Ms Responder, because that’s the sort of guy he is. Mr Catalyst responds in kind, so now you can see that Mr Catalyst and Mr Engager have created a two-way connection.

And Ms Responder responds to Mr Engager’s mention with an appreciative like, thereby creating a two-way connection between Mr Engager and Ms Responder. Mr Engager is now placed as a broker of the relationship between Mr Catalyst and Ms Responder. Mr Catalyst could create his own two-way connection with Ms Responder, but perhaps she just responded to Mr Catalyst with a like…leaving little opportunity for a return response.

So after this little flurry of activity, each individual can reflect on the connections made…as Mr Engager is doing here.

So in summary: an interaction is any activity on the platform. A connection is created by an interaction and, of course, strengthened by further interactions with that connection. Finally, we value two-way connections because they represent reciprocity, which we know leads to trust and more productive collaboration.
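
The distinction between interactions, connections and two-way connections can also be expressed in a few lines of code. The sketch below follows the Mr Catalyst / Mr Engager / Ms Responder example; the (actor, target) data structure is an illustrative assumption, not SWOOP’s internal model.

```python
# A minimal sketch of the distinction summarised above, following the
# Mr Catalyst / Mr Engager / Ms Responder example. The (actor, target)
# representation is an illustrative assumption, not SWOOP's internal model.

from collections import Counter
from itertools import combinations

# Each tuple is one interaction: actor replied to, liked or mentioned target
interactions = [
    ("Engager", "Catalyst"),    # Mr Engager's written reply to Mr Catalyst's post
    ("Responder", "Catalyst"),  # Ms Responder likes Mr Catalyst's post
    ("Engager", "Responder"),   # Mr Engager mentions Ms Responder
    ("Catalyst", "Engager"),    # Mr Catalyst responds in kind to Mr Engager
    ("Responder", "Engager"),   # Ms Responder likes Mr Engager's mention
]

# A connection is an undirected pair that has interacted at least once;
# repeat interactions strengthen it
connection_strength = Counter(frozenset(pair) for pair in interactions)
people = {person for pair in interactions for person in pair}

# A two-way connection exists where both people have interacted with each other
two_way = [
    pair for pair in combinations(sorted(people), 2)
    if (pair[0], pair[1]) in interactions and (pair[1], pair[0]) in interactions
]

print(f"Interactions: {len(interactions)}")         # 5 activities
print(f"Connections:  {len(connection_strength)}")  # 3 distinct pairs
print(f"Two-way:      {two_way}")                   # Catalyst-Engager, Engager-Responder
```

Run as written, this toy example reproduces the story on the slide: three connections, two of which are reciprocated, with Mr Catalyst and Ms Responder linked only one way.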

SLIDE 7

Finally, I want to show you how the two-way connections score varies amongst the 36 participants to date. Typically, we would look to build as large and as cohesive a Yammer network as possible, though we accept this might not always be the goal. While the data shows that the top four most cohesive networks were relatively small, there are also three organisations with quite large networks and quite respectable two-way connections scores.

So there is definitely something the participants can learn here from each other.

SLIDE 8

So, summing up: as of September we have 36 participants in our benchmark, and the number is now growing rapidly. The two-way connections measure, which is arguably the most important predictor of collaborative performance, was also the most varied amongst the participants.

By looking at the relationships between the measures we can start to see recurring patterns. We hope to explore these in more detail with our research partners in the coming year.

Finally, we showed that network size should not be seen as a constraint on building a more cohesive network. We have reported previously that another common measure, network activity level, is also unreliable for predicting collaboration performance.

SLIDE 9

In the next video blog we will be looking at Yammer groups in more detail. We are aware that for many organisations, it’s the Yammer groups that form the heart of the network, so it makes sense to take a deeper dive into looking at them.

Thank you for your attention, and we look forward to seeing you next time.

Book Review: “Networked: The New Social Operating System” – Lee Rainie and Barry Wellman

Holiday breaks are a good time to catch up on your reading, and I had put this one aside for just that. I won’t be offering a full chapter-by-chapter review, as I’m sure that has been done elsewhere; this is more of a personal reflection. Having spent considerable time researching in the field of Social Network Analysis (SNA), I was well acquainted with Barry Wellman’s work. We have never met face-to-face, but I have met up with a number of his NetLab colleagues at a couple of INSNA Sunbelt conferences. My first recollection of Wellman’s work goes back to some of his early pre-Internet research on electronically facilitated communication and the social network. Even then there was the fear that such communication technology could lead to de-socialisation, with less face-to-face contact and a subsequent loss of community. Wellman argued then that rather than replacing face-to-face socialisation, collaborative technologies would actually lead to people meeting up more than they did before, and with a broader circle of connections. This counter-intuitive theme runs throughout the book, with Rainie’s Pew Internet research results and Wellman’s networking research providing plenty of factual supporting evidence.

Rainie and Wellman focus on another apparent contradiction, which they refer to as “Networked Individualism”. One of the claims that did catch my attention is the authors’ belief that the networked world has matured to the extent that groups and communities are no longer the prime focus. Individuals will belong to multiple groups and communities of practice, and will therefore share their attention amongst those groups as their individual need or context demands at any given time. The focus has therefore moved from the group to the individual, and the onus is now on the individual to learn how to navigate their networks for personal benefit, rather than relying on group or community leaders. When I look at my own use of LinkedIn groups I would have to agree. Some groups I lurk in just to get a sense of what is important to that community, and I can move in and out of them as the context demands. Others that I am closer to, I participate in more actively and on a regular basis.

The other key theme of the book is what the authors call the triple revolution: Social Network, Internet and Mobile. While the authors go to some pains to say the book is not about technology, it is hard not to see these revolutions as technology driven. I’m not sure that it matters. Having spent most of my career at the leading/bleeding edge of technology, there was nothing particularly new or novel for me that I hadn’t read about before, though Rainie’s Pew Internet research added some additional colour to the coverage. That’s not to say I didn’t enjoy reading about how these technologies have evolved over the past 20 years or more. Like me, I suspect the authors entered the workforce before e-mail was invented in the 1970s. Reading about how that first email was sent, how hypertext and the first Internet browser were developed, and how our first mobile phones were the size and weight of a house brick, made me reflect on just how fortunate I have been to have lived and worked through such an exciting era of technological change. The authors talk about how individuals are now equipped with smartphones, which equip them to operate effectively as networked individuals. I can recall being impressed, on my first day at work, at having my own telephone on my desk! Of course it could only ring in and out internally (does this sound like some Enterprise Social Networking implementations?).

One area I was particularly looking for was networking ‘at work’. We hear a lot about networking in the ‘friends and family’ space, but enterprise networking presents a whole new suite of challenges. A single chapter is devoted to ‘Networked Work’. Again, the historical storytelling made it an interesting read; even if it was not new to me, it will interest those wanting to understand how networking is changing the world of work. The Boeing examples of a networked approach to airplane design make for good and instructive reading. One point that did stand out, however, is that collaborative technologies have not reduced the need for business travel and face-to-face connections. In fact the opposite has happened, reinforcing the theme identified earlier: the technology is not replacing the need to connect in person, but is actually facilitating more face-to-face connections. If anything, this book is not short of supporting facts and figures.

Finally, the book is full of anecdotes from Wellman’s Gen Y students; in fact, some of his students helped co-write some of the chapters. One of the insights I gained from the Gen Y voices was how they were using social networking technologies mostly to organise face-to-face meet-ups. These interactions were often short, sharp and multi-modal, i.e. text, voice, IM, etc. These examples reinforce Wellman’s long-term theme that collaborative technologies are not replacing face-to-face communication, but augmenting and expanding it.

I see some common threads here with MIT’s Sandy Pentland’s research on high-performance teams and Tom Allen’s research on communication and physical separation. It also provides some comfort for me in explaining some of our initial work developing a ‘give/receive’ social analytics measure based on Pentland’s work. Pentland found that the most productive teams have balanced, short and sharp interactions. As we were developing our metrics we noted that many of these supposedly highly productive conversations were about organising meet-ups. Our initial thought was that we shouldn’t really count these; but having read this book, perhaps the most productive thing we can do is to organise a meet-up!