Data-Driven Collaboration Part 1: How Rich Data Can Improve Your Communication

Originally published on Carpool.

This is the first of a series, coauthored by Laurence Lock Lee of Swoop Analytics and Chris Slemp of Carpool Agency, in which we will explain how you can use rich, people-focused data to enhance communication, increase collaboration, and develop a more efficient and productive workforce.

It’s safe to say that every enterprise hungers for new and better ways of working. It’s even safer to say that the path to those new and better ways is often a struggle.

Many who struggle do so because they are starting from a weak foundation. Some are simply following trends. Others believe they should adopt a new tool or capability simply because it was bundled with another service. Then there are those organizations that focus primarily on “reining in” non-compliant behaviors or tools.

But there’s a way to be innovative and compliant that also improves your adoption: focus instead on the business value of working in new ways—be data-driven. When you incorporate information about your usage patterns to set your goals, you are better positioned to track the value of your efforts and drive the behavior changes that will help you achieve your business objectives.

While market research is assumed to be critical when marketing to customers, investment in internal audience research has gained less traction, even though it yields the same kinds of returns. Data-driven internal communication planning starts at the very beginning of your project.

Here we will demonstrate—using real-world examples—how Carpool and Swoop use data to create better communications environments, nurture those environments, and make iterative improvements to ensure enterprises are always working to their full potential.

Use Data to Identify Your Actual Pain Points

One team Carpool worked with was focused on partnering with customers and consultants to create innovations. They thought they needed a more effective intranet site that would sell their value to internal partners. However, a round of interviews with key stakeholders and end consumers revealed that a better site wasn’t going to address the core challenge: there were too many places to go for information, and each source seemed to tell a slightly different story. We worked with the client to consolidate communications channels and implemented a more manageable content strategy that focused on informal discussion and formal announcements from trusted sources.

In the end, we were able to identify the client’s real pain point and help them address it, thanks to the research we had gathered.

Use Data to Identify New Opportunities

Data can drive even the earliest strategy conversations. In Carpool’s first meeting with a global retail operation, they explained that they wanted to create a new Yammer network as they were trying to curb activity in another, unapproved network. Not only did we agree, but we brought data to that conversation that illustrated the exact size and shape of their compliance situation and the nature of the collaboration that was already happening. This set the tone for a project that is now laser-focused on demonstrating business value and not just bringing their network into compliance.

Use Data to Identify and Enhance Your Strengths

In-depth interviews can be added to the objective data coming from your service usage. Interviews reveal the most important and effective channels, and the responses can be mapped visually to highlight where a communication ecosystem has broadcasters without observers, or groups of catalysts who are sharing knowledge without building any broader consensus or inclusion.

Below is one of the chord diagrams Carpool uses to map the interview data we gather. We can filter the information to focus on specific channels and tools, then break it down further to pinpoint weaknesses, strengths, gaps, and opportunities in our information flow.

CHORD CHART

Turning Data Into Action

These kinds of diagnostic exercises can reveal baselines and specific strategies that can be employed with leaders of the project or the organization.

One of the first activities organizations undertake when implementing an Enterprise Social Networking (ESN) platform is to encourage staff to form collaborative groups and then move their collaboration online. This is the first real signal of ‘shop floor empowerment’, where staff are free to form groups and collaborate as they see fit, without the oversight of their line management. As these groups form, the inevitable ‘long tail’ effect kicks in: the vast majority of groups fall into disuse, in contrast to a much smaller number that are wildly successful and achieve all of the expectations for the ESN. So how can organizations increase their win/loss ratio? At Swoop Analytics we have started to look at the ‘start-up’ patterns of our benchmarking partners’ Yammer installations. These patterns can emerge after as little as six months of operation.

Below, we show a typical first 6 months’ network performance chart, which measures group performance on the dimensions of Diversity (Group Size), Cohesion (Mean 2-Way Relationships formed), and Activity (postings, replies, likes etc.). We then overlay the chart with ‘goal state’ regions reflecting the common group types typically found in ESN implementations. The regions reflect the anticipated networking patterns for a well-performing group of the given type. If a group’s stated purpose positions them in the goal-state region, then we would suggest that they are well positioned to deliver tangible business benefits, aligned with their stated purpose. If they are outside of the goal state, then the framework provides them with implicit guidance as to what has to happen to move them there.

BUBBLE GRAPH

At launch, all groups start in the bottom left-hand corner. As you can see, a select few have ‘exploded out of the blocks’, while the majority are still struggling to make an impact. The 6-month benchmark provides an early opportunity for group leaders to assess their group against peer groups, learn from each other, and then begin to accelerate their own performance.
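The three chart dimensions described above can be derived from a raw activity log. The sketch below is illustrative only: the (group, author, target, kind) event format is an assumption for this example, not SWOOP’s actual schema.

```python
from collections import defaultdict

def group_metrics(events):
    """Compute the three chart dimensions per group from a flat activity log.
    Each event is (group, author, target, kind), where target is the member
    being replied to or liked (None for an original post) and kind is
    'post', 'reply' or 'like'."""
    members = defaultdict(set)    # group -> distinct participants
    directed = defaultdict(set)   # group -> directed (author, target) pairs
    activity = defaultdict(int)   # group -> total posts + replies + likes
    for group, author, target, kind in events:
        members[group].add(author)
        activity[group] += 1
        if target is not None:
            members[group].add(target)
            directed[group].add((author, target))
    results = {}
    for group in members:
        pairs = directed[group]
        # a 2-way relationship exists when interactions run in both directions
        two_way = {frozenset(p) for p in pairs if (p[1], p[0]) in pairs}
        size = len(members[group])
        # Diversity = group size; Cohesion = mean 2-way relationships per
        # member (each reciprocated pair counts for both ends); Activity = total
        cohesion = 2 * len(two_way) / size if size else 0.0
        results[group] = (size, cohesion, activity[group])
    return results
```

A group scoring high on all three dimensions would land toward the upper right of the bubble chart; a newly launched group scores near zero on each.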

Painting the Big Picture

The convergence of multiple data sources paints a holistic picture of communication and collaboration that extends beyond team boundaries. This new picture extends across platforms and prescribes the design for an ecosystem that meets user and business needs, aligns with industry trends, and is informed by actual usage patterns.

ECOSYSTEM DESIGN

The discussion about the ROI of adopting new ways of working, such as ESNs, hasn’t disappeared. While we believe it’s a waste of resources to try to measure a return from new technologies that have already been proven, developing business metrics and holding these projects accountable to them is just as critical as any effort to increase productivity.

The nature of these metrics also needs to shift from a focus on “counts and amounts” to measures of a higher order that tie more closely to business value. For example, knowing that posting activity has risen by 25% in a year may make you feel a little better about your investment in a collaboration platform. Knowing that there is a higher ratio of people engaging vs. those who are simply consuming is much better. Showing a strong correlation in departments that have higher percentages of engaged users with lower attrition rates … that’s gold.

So now is the time to look at your own organization and wonder: “Do I track how my people are connecting? Do I know how to help them become more engaged and productive? When was the last time I measured the impact of my internal communication ecosystem?”

Then take a moment to imagine the possibilities of what you could do with all of that information.

Stay tuned in the coming weeks for Part 2 and Part 3 when we address the topics of driving engagement by identifying types of enterprise social behavior in individuals, and the results we’ve seen from being data-driven in how we shape internal communications and collaboration.

Diversity is Essential but not Sufficient

Diversity is a big word in business today. We are continually preached to about how important diverse leadership is to improving performance. HBR, in its article ‘Why Diverse Teams Are Smarter’, identifies studies showing that diversity based on ethnicity and/or gender can lead to above-average returns. In our own work with networks, research has shown that individuals with more diverse personal networks are more likely to be promoted and to succeed in their occupations. Although I’ve always thought that my own personal network was quite diverse, I received a wake-up call from the recent US elections: I was not aware of anyone in my fairly extensive network of US citizens who was voting for Trump! So it does take a conscious effort to build and sustain a diverse network of connections. It’s far too easy to fall back on the comfortable relationships with those just like us.

But diversity alone is only a pre-condition to high performance. One must be able to exploit the diversity in one’s network to actually deliver the superior results that it promises. In a previous post we introduced our network performance framework, which identifies a balance between Diversity and Cohesion in networks, for maximizing performance:

swoop-diversity

In this framework we identify high performers as those who can effectively balance their diverse connections (identifying high-potential opportunities) with their close connections (with whom they can collaborate to exploit those opportunities). From our project consulting experience, these people are either recognised as organisational ‘ambassadors’ or are completely invisible, i.e. the quiet achievers. The fact that we find so few people in this quadrant is testament to how hard achieving this balance can be.

UGM Consulting explores this tension in their recent article on Innovation and the Diversity Paradox. They nominate the following attributes for those diverse networks that can successfully exploit the opportunities that they identify:

  1. They have a sense of shared common goals and purpose;
  2. They know how to genuinely listen to each other, seeking out elaboration and novel combinations;
  3. They have high levels of mutual trust, so speaking up and disagreements can be had, risk free;
  4. They have the skills to constructively explore alternatives and agree on a direction; and
  5. There exists a strong co-operative atmosphere at both the team and enterprise levels.

For leaders, this will mean actively enabling or creating such conditions. For the individual, it could boil down to simply developing a diverse network that you actively consult. At times you may leverage these relationships by enrolling others in selected joint activities, to bring about positive change in your own areas of influence.

Top Image credit: http://www.ispt-innovationacademy.eu/innovation-research.html

Are we Getting Closer to True Knowledge Sharing Systems?

knowledge-systems

(image credit: https://mariaalbatok.wordpress.com/2015/02/10/religious-knowledge-systems/)

First generation knowledge management (KM) systems were essentially re-labelled content stores. Labelling such content as ‘knowledge’ did much to discredit the whole knowledge management movement of the 1990s. During this time, I commonly referred to knowledge management systems as needing to comprise both “collections and connections”, but we had forgotten about the “connections”. This shortcoming was addressed with the advent of Enterprise Social Networking (ESN) systems like Yammer, Jive, IBM Connections and now Workplace from Facebook. So now we do have both collections and connections. But do we now have true knowledge sharing?

Who do we Rely on for Knowledge Based Support?

A common occupation for KM professionals is to try to delineate a boundary between information, which can be effectively managed in an information store, and knowledge, which is implicitly and tacitly held by individuals. Tacit knowledge, arguably, can only be shared through direct human interaction. In our Social Network Analysis (SNA) consulting work we regularly surveyed staff on whom they relied on to get their work done. We stumbled on the idea of asking them to qualify their selections by choosing only one of:

  • They review and approve my work (implying a line management connection)
  • They provide information that I need (implying an information brokering connection)
  • They provide advice to help me solve difficult problems (implying a knowledge-based connection)

The forced choice was key. It proved to be a great way of delineating the information brokers from the true knowledge providers and the pure line managers. When we created our ‘top 10’ lists for each role, there was regularly very little overlap. For organisations, the critical value in these nominations is that the knowledge providers are the hardest people to replace, and it is therefore critical to know who they are. And who they are is not always apparent to line management!

So how do staff distribute their connection needs amongst line managers, information brokers and knowledge providers? We collated the results of several organisational surveys, comprising over 35,000 nominations using this identical question, and came up with the following:

work-done

With knowledge providers receiving 50% of the nominations, the results reinforce the perception that knowledge holders are critical to any organisation.

What do Knowledge Providers Look Like?

So what is special about these peer-identified knowledge providers? Are they the ‘wise owls’ of the organisation, with long experience spanning many different areas? Or are they technical specialists with deep knowledge of fairly narrow areas? We took one organisation’s results and assessed the leaders of each of the categories, Approve/Review, Information and Knowledge/Advice, looking for their breadth or diversity of influence. We measured this as the percentage of connections nominating them as an important resource that came from outside their home business unit. Here are the results:

external-links

As we might anticipate, the inferred line management had the broadest diversity of influence. The lowest percentage, belonging to the knowledge providers, suggests that people look for knowledge and advice not from the broadly experienced wise old owls, but from those specialising in relatively narrow areas.
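The diversity-of-influence measure described above is straightforward to compute from the survey nominations. A minimal sketch, assuming a hypothetical list of (nominator, nominee) pairs and a lookup of each person’s home business unit:

```python
from collections import defaultdict

def external_influence(nominations, home_unit):
    """For each nominee, the percentage of nominations that came from
    outside their home business unit. `nominations` is a list of
    (nominator, nominee) pairs; `home_unit` maps person -> unit.
    The data layout is illustrative, not the survey's actual format."""
    totals = defaultdict(int)    # nominee -> all nominations received
    external = defaultdict(int)  # nominee -> cross-unit nominations received
    for nominator, nominee in nominations:
        totals[nominee] += 1
        if home_unit[nominator] != home_unit[nominee]:
            external[nominee] += 1
    return {p: 100.0 * external[p] / totals[p] for p in totals}
```

Running this separately over the Approve/Review, Information and Knowledge/Advice nomination sets yields the three percentages compared in the chart.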

Implications for Knowledge Sharing Systems

We have previously written about our Network Performance Framework, where performance is judged based on how individuals, groups, or even full organisations balance diversity and cohesion in their internal networks:

personal-networking

The above framework identifies ‘Specialists’ as those who have limited diversity but a strong following, i.e. many nominations as a key resource. These appear to be the people identified as critical knowledge providers.

The question now is whether online systems are identifying and supporting specialists to share their knowledge. At SWOOP we have aimed to explore this question initially by applying a modification of this performance framework to interaction data drawn from Microsoft Yammer installations:

performance

We measured each individual’s diversity of connections (y-axis) from their activities across multiple Yammer groups. The x-axis identifies the number of reciprocated connections an individual has, i.e. stronger ties, with the size of their personal network indicated by the size of the bubble representing them. We can see here that we have been able to identify, from their Yammer activities, the select few ‘Specialists’ in the lower-diversity/stronger-cohesion quadrant. These specialists all have relatively large networks of influence.
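The quadrant analysis just described can be approximated from raw interaction logs. The sketch below is illustrative only: the event format and the quadrant thresholds are assumptions for this example, not SWOOP’s actual metrics.

```python
from collections import defaultdict

def find_specialists(interactions, max_groups=2, min_two_way=3):
    """Flag people in the low-diversity / high-cohesion quadrant.
    `interactions` is a list of (sender, receiver, group) reply/mention
    events. For each person we compute:
      diversity = number of distinct groups they are active in,
      cohesion  = number of reciprocated (2-way) connections,
      network   = total distinct people they interact with.
    Thresholds are made up for illustration."""
    groups = defaultdict(set)    # person -> groups they posted in
    contacts = defaultdict(set)  # person -> everyone they touched
    pairs = set()                # directed (sender, receiver) pairs
    for sender, receiver, group in interactions:
        groups[sender].add(group)
        contacts[sender].add(receiver)
        contacts[receiver].add(sender)
        pairs.add((sender, receiver))
    specialists = {}
    for person in groups:
        two_way = sum(1 for other in contacts[person]
                      if (person, other) in pairs and (other, person) in pairs)
        if len(groups[person]) <= max_groups and two_way >= min_two_way:
            specialists[person] = (len(groups[person]), two_way,
                                   len(contacts[person]))
    return specialists
```

The bubble size in the chart corresponds to the third value returned (total personal network size).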

What we might infer from the above analysis is that an ESN like Yammer can identify the most prospective knowledge providers that staff are seeking out for knowledge transfer. But the bigger question is whether actual knowledge transfer can happen solely through an ESN like Yammer.

Is Having Systems that Provide Connections and Collections Enough to Ensure Effective Knowledge Sharing?

The knowledge management and social networking research is rich with studies addressing how social network structure impacts effective knowledge sharing. While an exhaustive literature review is beyond the scope of this article, for those inclined, the article ‘Network Structure and Knowledge Transfer: The Effects of Cohesion and Range’ is representative. Essentially this research suggests that ‘codified’ knowledge is best transferred through weak ties, but tacit knowledge sharing requires strong-tie relationships. Codified knowledge commonly relates to stored artefacts like best-practice procedural documents, lessons-learned libraries, case studies and perhaps even archived online Q&A forums. Tacit knowledge by definition cannot be codified, and therefore can only be shared through direct personal interactions.

I would contend that relationships formed solely through ESN interactions, or indeed through any electronic systems like chat, email, etc., are substantially weaker than those generated through regular face-to-face interactions. Sharing complex tacit knowledge needs frequent and regular human interaction, and it is unlikely that the strength of tie required can be achieved solely through commonly available digital systems. What ESNs can do effectively is help identify who you should be targeting as a knowledge sharing partner. Of course this situation is changing rapidly, as more immersive collaboration experiences are developed. But right now: for codified knowledge, yes; for tacit knowledge, not yet.


Getting “Liked”: Is Content Overrated?

We are regularly bombarded with the message that “Content is King”, quickly followed by a plethora of methods, tips and even tricks on how to make our content more attractive, i.e. “Liked” by many. Social media introduced the “Like” button so we can more explicitly signal our appreciation of the content we are exposed to. But how much of that appreciation is directed at the “content” of the message, and how much at the messenger? We have some recent analytics that provide new insights on this.

Content or Messenger?

content-image

Doubt about the true value of content was first flagged by the Canadian philosopher Marshall McLuhan, with his often-quoted statement “the medium is the message” in the 1960s. In the age of social media, this has morphed into “the messenger is the message”, with the rise to prominence of the “Influencer”. Influencers are those rare individuals who can influence the buying behaviours of many simply through the power of their personal recommendation. Think about your own “liking” behaviour on Facebook. How often would you “like” a passive Facebook advertising page, as opposed to “liking” a posting made by a human influencer linking back to that very same page? This is a clear example of the messenger being more important than the message itself.


Enterprise “Liking”

I have recently written about how the “Like Economy” we experience in consumer social networks may not map well when social networks move inside the enterprise in the form of Enterprise Social Networks (ESNs). Unlike consumer social networks, we are unlikely to see advertisements tolerated in the ESN. But enterprises often do want to send messages to “all staff”, particularly for major change initiatives they want staff to “buy into”. Regularly, corporate communications staff are keen to look at statistics on how often the message is read and even ‘liked’. But is this a true reflection of engagement with a message?

Our benchmarking of ESNs has identified that “Likes” make up well over 50% of all activities undertaken on ESNs. In the absence of carefully crafted advertising sites, just what is driving our “liking” behaviour in the enterprise? We decided to explore this not by looking at every message posted (for privacy reasons Swoop does not access message content), but by looking at patterns of who “Likes” were directed at. We aggregated the “Likes” from 3 of our benchmarking partner organisations, for individuals who had posted more than 500 “Likes” over a 12-month period. Collectively, over 4,000 individuals met the criteria. We then categorised their “Likes” according to:

  • One-off (‘Like’ recipient was a once-only occurrence): attraction is largely based on the content of the message alone.
  • Repeat Recipient (‘Like’ recipient was a repeat recipient from this individual): recipients are potentially ‘influencers’, so the motivation may come from the person more than from the message content.
  • Reciprocated (‘Like’ recipient has also been a ‘Like’ provider for this individual): recipients have a ‘relationship’ with the ‘Liker’, which drives the behaviour.
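These three categories can be assigned mechanically from a stream of ‘Like’ events. A simplified sketch, classifying each event in chronological order (the actual analysis aggregated 12 months of data per person):

```python
from collections import Counter

def classify_likes(likes):
    """Classify each 'Like' in a chronological list of (liker, recipient)
    events into the three categories described above:
      'reciprocated' if the recipient has previously liked the liker,
      'repeat'       if the liker has liked this recipient before,
      'one-off'      otherwise."""
    seen = set()       # (liker, recipient) pairs observed so far
    counts = Counter()
    for liker, recipient in likes:
        if (recipient, liker) in seen:
            counts["reciprocated"] += 1
        elif (liker, recipient) in seen:
            counts["repeat"] += 1
        else:
            counts["one-off"] += 1
        seen.add((liker, recipient))
    return counts
```

Comparing the relative sizes of the three counts is what produces the distribution shown in the results chart below.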


‘Like’ Analysis Results

The results of our analysis are shown below:

like-analysis

The results show clearly that in the enterprise context, the driver for ‘liking’ behaviour is the relationship. The data suggest that you are nearly 3 times as likely to attract a ‘like’ to your message from someone if you have previously ‘liked’ a posting of theirs.

So what are the implications for the enterprise?

If an enterprise is relying on counting ‘likes’ as a measure of staff engagement, it should prioritise encouraging the formation of relationships through reciprocated actions over spending time ‘crafting the perfect message’, or even relying on influencers to build engagement. Specifically, one could:

  • Acknowledge a “Like”, in particular if you have never responded to this person before.
  • Craft your important messages as a means to start a conversation, rather than a statement of opinion. Explicitly frame your statement as a question, or explicitly ask for feedback.
  • Start to think about ‘engagement’ as more than a ‘read’ or a ‘like’, and more from a relationship perspective. How deeply and broadly is your issue being discussed?
  • When you read advice from social media experts on “how to generate more ‘Likes’ for your content”, replace this with “how to generate more relationships using your content”.

As I am writing this post I’m painfully reminded of the need to ‘eat your own dog food’. So I’m making a commitment that if you respond to or ‘like’ this article, I will at least try to respond in kind!

likeimage


How do these results map with your own experiences?

What can we Learn from Artificial Intelligence?

It might seem strange to suggest that a science dedicated to learning from how we humans operate could actually return the favour by teaching us about ourselves. But as strange as it may sound, this is precisely what I am suggesting.

Having spent a good deal of my early career in the “first wave of AI”, I developed a healthy scepticism of many of the capability claims for AI. From the decade or more I spent as an AI researcher and developer, I had come to the conclusion that AI worked best when the domains of endeavour were contained within discrete and well-bounded ‘solution spaces’. In other words, despite the sophistication of the mathematical techniques developed for dealing with uncertainty, AI was simply not that good in the “grey” areas.

AI’s Second Wave

alphago

The “second wave of AI” received a big boost when the Google company DeepMind managed to up the ante on IBM’s chess-playing Deep Blue by defeating the world Go champion Lee Sedol. According to DeepMind founder and CEO Demis Hassabis, the success of their program AlphaGo could be attributed to the deeper learning capabilities built into the program, as opposed to Deep Blue’s largely brute-force searching approach. Hassabis emphasizes the ‘greyness’ of the game of Go, as compared to chess. For those familiar with this ancient Chinese game, unlike chess, it has almost a spiritual dimension. I can vividly recall a research colleague of mine, who happened to be a Go master, teaching a novice colleague the game in a lunchtime session and chastising him for what he called a “disrespectful move”. So AlphaGo’s success is indeed a leap forward for AI in conquering “grey”.

So what is this “deep learning” all about? You can certainly get tied up in a lot of academic rhetoric if you Google this, but for me it’s simply about learning from examples. The two critical requirements are the availability of lots of examples to learn from, and the development of what we call an “evaluation function”, i.e. something that can assess and rate an action we are considering taking. The ‘secret sauce’ in AlphaGo is definitely the evaluation function. It has to be sophisticated enough to look many moves ahead and assess many competitive scenarios before evaluating its own next move. But this evaluation function, which takes the form of a neural network, has the benefit of being trained on thousands of examples drawn from online Go gaming sites, where the final result is known.

Deep Learning in Business

books

We can see many similarities to this context in business. For example, the legal profession is founded on precedents, where there are libraries of cases available for which the final result is known. Our business schools regularly educate their students by working through case studies and connecting them to the underlying theories. Business improvement programs are founded on prior experience or business cases from which to learn. AI researchers have taken a lead from this and built machine learning techniques into their algorithms. An early technique that we had some success with is called “Case Based Reasoning”. Using this approach, it wasn’t necessary to articulate all the possible solution paths, which in most business scenarios is infeasible. All we needed was a sufficient store of prior cases to search through, to retrieve the cases that best matched the current context, leaving the human user to fill any gaps.
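The retrieval step at the heart of Case Based Reasoning can be sketched very simply: score the stored cases by similarity to the current context and return the closest matches. The similarity measure below (fraction of matching attribute values) is illustrative only, not the metric any real system used.

```python
def retrieve_cases(cases, query, k=3):
    """Return the k stored cases most similar to the query context.
    Cases and query are dicts of attribute -> value; similarity is the
    fraction of attributes (across both dicts) with equal values.
    A minimal sketch of case-based retrieval, not a full CBR engine."""
    def similarity(case):
        keys = set(case) | set(query)
        matches = sum(1 for key in keys if case.get(key) == query.get(key))
        return matches / len(keys)
    return sorted(cases, key=similarity, reverse=True)[:k]
```

The human user then adapts the retrieved cases to the current problem, which is exactly the gap-filling role the paragraph above describes.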

The Student Becomes the Teacher

Now back to my question: what can AI now teach us about ourselves? Perhaps the most vivid learnings are contained in the reflections of the Go champions that AlphaGo has defeated. The common theme was that AlphaGo was making many unconventional moves that only appeared sensible in hindsight. Lee Sedol described his personal learning from his 4-1 defeat by AlphaGo in these comments: “My thoughts have become more flexible after the game with AlphaGo; I have a lot of ideas, so I expect good results” and “I decided to more accurately predict the next move instead of depending on my intuition”. So the teacher has now become the student!

It is common for us as human beings to fall victim to unconscious bias. We see what is being promoted as a “best practice”, perhaps reinforced by a select few of our own personal experiences, and are then willing to swear by it as the “right” thing to do. We forget that there may be hundreds or even thousands of contrary cases that could prove us wrong, yet we stubbornly stick to our original theses. Computers don’t suffer from these very human traits. What’s more, they have the patience to trawl through thousands of cases to fine-tune their learnings. So in summary, what can we learn from AI?

  • Remember that a handful of cases is not a justification for developing hard and fast rules;
  • Before you discount a ‘left field’ suggestion, try to understand the experience base that it is coming from. Do they have experiences and insights that are beyond those of your own close network?
  • Don’t be afraid to “push the envelope” on your own decision making, but be sure to treat each result, good or bad, as contributing to your own growing expertise; and
  • Push yourself to work in increasingly greyer areas. Despite the success of AlphaGo, it is still a game, with artificial rules and boundaries. Humans are still better at doing the grey stuff!


We’ve Disrupted the Formal Organisation: But what does it look like now?

Digital disruption, holacracies and wirearchies are attacking the formal hierarchy as we have come to know it. While we might accept that the formal hierarchy is becoming less reflective of how work is getting done, it still reflects how senior executives are designing for work to be done. For most organisations, senior executives still agonise over appropriate formal hierarchical structures. And the published organisational chart is usually the first port of call for those wanting to understand the inner workings of an organisation. Is it distributed or centralised? Sales-driven or product-driven? Operations-, technology- or finance-centric?

If the formal organisation chart were to truly disappear, what could we replace it with? Where would external stakeholders go to understand how the holacracy, wirearchy or networked organisation is operating? Where are the core capabilities in such environments? What about the disconnected workgroups?

The good news is that formal methods for mapping informal organisational structure have been around for some time. Social Network Analysis (SNA) has provided us with a means for mapping the connections between individuals based on their relationships. With the advent of informal organisational groups, be they part of an Enterprise Social Networking platform like Yammer or Jive, an email group or a team site in Slack or Skype, there is a need to understand how these informally created entities are connected to each other. Without this facility it can be hard to see the ‘big picture’ of what may really be happening, leaving the organisational executive flying blind.

One of the easiest methods for creating an organisation-wide map is to use a simple ‘shared membership’ approach. Commonly called ‘affinity mapping’, it is the same technique that has been used to uncover board-of-director interlocks, which have provided insights into largely invisible connections between publicly listed companies. It also happens to be the way that Amazon promotes new books to you, by inferring that you have an affinity relationship with those who have read the same books.
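The shared-membership approach can be sketched in a few lines: compute the membership overlap for every pair of groups and keep only the pairs that pass a strength filter. The group names and threshold below are illustrative.

```python
from itertools import combinations

def affinity_edges(memberships, min_overlap=1):
    """Build the edges of an affinity map from group memberships.
    `memberships` maps group name -> set of member ids. Returns
    (group_a, group_b, overlap) tuples for every pair of groups whose
    shared membership meets the strength filter `min_overlap`."""
    edges = []
    for (ga, ma), (gb, mb) in combinations(sorted(memberships.items()), 2):
        overlap = len(ma & mb)  # members the two groups have in common
        if overlap >= min_overlap:
            edges.append((ga, gb, overlap))
    return edges
```

Raising `min_overlap` reproduces the strength-filter effect described below: weakly overlapping groups drop away and only the most strongly connected clusters remain visible.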

Here is an example map we have created using an organisation’s Yammer group membership (group names have been changed to protect privacy): 

At the start of this clip we can see that all the groups are formed into one large cluster, as invariably most groups are interconnected to some degree with other groups. But as we increase the relationship ‘strength’ filter to only include overlapping memberships of a certain size, the informal group structure starts to materialise in front of our eyes. Taken to the extreme, we are left with the two groups with the greatest level of overlap: Enterprise Communications and Customer Delivery. The number of common members is shown next to the strength filter. As we move the strength filter back from this point, we gradually see other connected groups exposed. We see the regional stores cluster emerge, suggesting perhaps some common regional issues. We also see a number of non-work groups emerge, interestingly connected to a sponsored diversity group, with all being strongly connected to the enterprise communications hub. This is good news, as these groups are doing their job of connecting staff who would not normally be connected in other ways.

By using this simple relationship-strength filter, we can start to explore the emerging structures formed from the voluntary, ‘bottom up’ actions of mainstream staff. The highly connected groups can be seen as larger nodes representing the core interest/capability areas that are developing. The enterprise leaders who ‘own’ the formal organisation chart can now ask questions like: ‘How well is our informal structure reinforcing our formal structures, or not?’; ‘Are there key capability areas that are not developing and may need more nurturing?’; ‘Are we encouraging a diversity of interests in our staff and, if so, how are they helping to reinforce our mainstream businesses?’

We regularly see analytics provided for activity levels inside groups, but rarely between them. The power for the enterprise now comes from being able to overlay the formal and the informal, as the formal hierarchy starts to give way to the more adaptive and flexible informal structures, being increasingly exposed by Enterprise Social Networking platforms and the like. 

SWOOP Video Blog 2 – Yammer Groups

The second in our SWOOP Video Blog Series:

Slide 1

Hi there, I'm Laurence Lock Lee, the co-founder and chief scientist at Swoop Analytics.

In this second episode of Swoop Benchmarking insights we are drilling down to the Yammer Group level. Groups are where the real collaborative action happens.

As Yammer Groups can be started by anyone in the organisation, they quickly build up to hundreds, if not thousands, in some organisations. Looking at activity levels alone, we see that the majority of groups do not sustain consistent activity, while a much smaller proportion look to be really thriving.

As useful as activity levels and membership size are, we have suggested before that they are crude measures which can mask the true relationship-centred collaboration performance being achieved.

In this session we provide insights into how organisations can compare and benchmark their internal groups.

Slide 2

There is no shortage of literature and advice on how to build a successful online community or group. The universal advice for the first step is to identify the purpose. A well-articulated purpose statement will identify what success would look like for the group or community.

What we do know from our experience to date is that online groups are formed for a variety of purposes. IBM has conducted a detailed analysis of their internal enterprise social networking system, looking to see whether the usage logs could delineate the different types of groups being formed. What they found was five well-delineated types of groups (IBM classification from years of IBM experience: http://perer.org/papers/adamPerer-CHI2012.pdf).

The identified group types were:

  1. Communities of Practice. CoPs are the centerpiece of knowledge sharing programs. Their purpose is to build capability in selected disciplines. They will usually be public groups. For example, a retail enterprise may form a CoP for all aspects of establishing and running a new retail outlet. The community would be used to share experiences on the way to converging on a suite of 'best practices' that they would aim to implement across the organisation.
  2. Team/Process. This category covers task-specific project teams, or alternatively provides a shared space for a business process or function. In most cases these groups will be closed or private.
  3. Groups formed for sharing ideas and, hopefully, generating new value from innovations. It is best to think about such groups in two stages: exploration and exploitation. The network needs to be large and diverse to uncover the most opportunities. However, the exploitation stage requires smaller, more focused teams to ensure a successful innovation.
  4. The Expert/Help type group is what many of us see as the technical forums we might go to externally for technical help. For novices, the answers are more than likely available in previously answered questions. In essence, these groups are characterised by many questions posted, for a selected few to answer.
  5. Finally, social (non-work) groups are sometimes frowned on; but in practice they are risk-free places for staff to learn and experience online networking, so they do play an important part in the groups portfolio.

Slide 3

This table summarizes the purposes and therefore value that can accrue from the different group types. Some important points that can be taken from this are:

  • Formally managed documents are important for some group types, like CoPs and Teams, but less so for others, where archival search may be sufficient.
  • Likewise with cohesive relationships, which are critical for teams, say, but less so for Expert/Help groups.
  • Large isn't always good. For idea sharing, the bigger and more diverse the better. For teams, research has shown that once we get past about 20 members, productivity decreases (https://www.getflow.com/blog/optimal-team-size-workplace-productivity).

Slide 4

More than 80 years of academic research on the performance of networks can be reduced to an argument between the value of open, diverse networks and closed, cohesive networks. This graphic was developed by Professor Ron Burt from the University of Chicago Business School, who is best known for his research on brokerage in open networks. However, in his 2005 book Brokerage and Closure, Burt concedes that value is maximised when diversity and closure are balanced.

It is therefore this framework that we are using for assessing and benchmarking Yammer Groups.

Slide 5

For pragmatic reasons we use group size as a proxy for diversity, on the assumption that the larger the group, the more diverse the membership is likely to be. For cohesion, we measure the average two-way connections per member, on the assumption that if members have many reciprocated relationships inside the group, then the group is likely to be more cohesive.
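These two measures are simple enough to sketch directly. The function name and sample data below are ours, for illustration only; each reciprocated pair counts as a two-way connection for both of its members.

```python
def group_metrics(members, two_way_pairs):
    """Diversity proxy = group size; cohesion = average two-way connections per member."""
    diversity = len(members)
    # Each reciprocated pair contributes one two-way connection to each of
    # its two members, hence the factor of 2.
    cohesion = 2 * len(two_way_pairs) / diversity if diversity else 0.0
    return diversity, cohesion

# Hypothetical group: four members, three reciprocated relationships.
members = {"ann", "bob", "cho", "dee"}
two_way = {("ann", "bob"), ("bob", "cho"), ("cho", "dee")}
print(group_metrics(members, two_way))  # (4, 1.5)
```

Plotting diversity on one axis and cohesion on the other reproduces the network performance chart described in the next paragraph.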

This plot shows a typical pattern we find. The bubble size is based on group activity, so as you can see, activity is still an important measure. But the positioning on the network performance chart can be quite differentiated by the respective diversity and cohesion measures.

The pattern shown is also consistent with our prior network survey results, which essentially show that it is difficult not to see diversity and cohesion as a trade-off; so the ideal maximum performance in the top right corner is, in fact, just that: an ideal.

Slide 6

Now if we overlay what we see as ideal ‘goal states’ for the different types of groups that can be formed, it is possible to assess more accurately how a group is performing.

For example, a community of practice should have moderate to high cohesion and a group size commensurate with the ‘practice’ being developed.

The red region shows where high-performing teams would be located. High-performing teams are differentiated by their levels of cohesion; group size, and even relative activity levels, are poor indicators for a group formed as a team. If your group aims to be a shared ideas space but you find yourself characterised as a strong team, then you are clearly in danger of "group think".

Likewise you can infer a goal space for the Expert/Help group type.

If you are an ideas sharing group you have an extra measure of monitoring the number of exploitation teams that have been launched from ideas qualified in your group.

For the group leaders who started in the bottom left, and the many who are still there, it becomes an exercise in re-thinking your group type and purpose, and then deciding the most appropriate actions for moving your group into the chosen goal space.

For some this may be growing broader participation, if you are an Expert/Help group; or building deeper relationships, if you are a community of practice or functional team.

Slide 7

So in summing up:

Groups come in different shapes and sizes, and simple activity levels and membership size are insufficient for assessing their success or otherwise.

Gaining critical mass for a group is important. Research has shown that critical mass also needs to include things like diversity in the membership and the modes used to generate productive outputs (http://research.microsoft.com/en-us/um/redmond/groups/connect/CSCW_10/docs/p71.pdf).

The Diversity vs Cohesion network performance matrix provides a more sophisticated means for groups to assess their performance than simple activity and membership measures.

Once group leaders develop clarity around their form and purpose, the network performance framework can be used to provide them with more precise and actionable directions for success.

Slide 8

We have now covered benchmarking externally at the enterprise level and internally at the group level.

Naturally, the next level is to look at and compare the members inside successful groups.

Thank you for your attention and we look forward to having you at our next episode.

Who Should Decide How You Collaborate?

In a post ahead of Microsoft's recent Ignite 2016 conference, we intimated that we hoped that, in the push to build the ultimate office tool, the core features of the component parts would not be sacrificed in the name of standardisation. I can happily say now that, post Ignite, the product we are most interested in, Yammer, has re-surfaced as a more integral part of Office 365 without sacrificing its core value proposition. For a Yammer core user, it appears that as circumstances arise where our collaboration partners might need to manage content, collaborate in real time, or schedule and manage an event, we will be able to seamlessly access these core functions from other components like SharePoint, Skype, Outlook and so on. While we know events like Ignite are mostly for announcing intentions rather than working products, it is comforting to see a positive roadmap like this.

In effect, Office 365 now offers a whole multiplex of collaboration vehicles. There will be individuals looking for a simple 'usage matrix' of what to use when. Yet collaboration can mean different things to different people. Is working in your routine processing team collaboration? Is reading someone else's content collaboration? Is sending an email collaboration?

How do we define Collaboration?

A couple of years ago, Deloitte Australia's economics unit produced a significant report on the economic value of collaboration to the Australian economy. As part of the process, Deloitte surveyed thousands of workers to find out how they spent their time at work, specifically in relation to collaboration activities:

[Figure: Deloitte breakdown of work time across collaboration-related categories]

While the numbers will vary between individuals, we can treat the categories as typical work tasks and then map them to Office 365 components. For me, the nearly 10% 'Collaboration' is a natural home for Yammer, as probably is 'Socialising'. Routine tasks fit nicely into SharePoint and team sites, and Outlook covers routine communication. Individual work maps very nicely to core Office 365 tools like Word, Excel and PowerPoint. So the Deloitte work categories can be nicely mapped to O365 components. But does just knowing this help us use them productively? Who decides which tools we should use, and how?

Who should control collaboration?

The Deloitte work characterisation separates "collaboration" from other "interactions" as the activity staff engage in to improve the way they work: improvising and innovating. While it may constitute only 10% of their work time on average, its impact lies in improving the productivity of, say, routine tasks, routine communication and even individual work. So is it the role of managers to dictate modes of collaboration for their staff? Maybe it's community managers or workplace improvement specialists? As the workplace becomes more distributed and networked, it is quickly moving beyond the capability of specialist roles to orchestrate collaborative processes without bloating the middle-management layers.

So what are we left with? I believe it all comes back to the individual to "negotiate" how they interact and collaborate, and with whom. As it turns out, the one who knows best how to improve your productivity is yourself. A comprehensive study on time-wasting by Paychex found that the most effective way to reduce time-wasting is more flexible time scheduling or time off. Carpool recently ran an experiment in working from anywhere, and Carpool CEO Jarom Reid speaks about the productivity improvements available when you have the flexibility of not being tied to a physical office. In the industrial age we became used to executives' jobs being solely about linking and communication. However Reid, as the leader of a digitally enabled organisation, values having personal time in which he can feel more productive than in the office. Andrew Pope writes about the dangers of over-collaboration. We all want our collaborations and interactions with colleagues to be productive, and we feel we are over-collaborating when we have wasted time in non-essential meetings. Pope suggests that individuals should take control of their collaboration activities to match their natural styles and tendencies, rather than trying to adhere to a particular organisational norm.

How will Office 365 Help?

So how would the new world of Office 365 support individual-preference-led collaboration? For those of us who have been used to living in Yammer, SharePoint or Outlook, it does put the onus on the individual to become competent in all the key toolsets, if we are to accommodate the preferences of our collaboration partners and avoid "tool silos".

The nice thing about the Office 365 roadmap is that the tool silo walls have become more elastic. We can form a group in Yammer to explore an idea and then form a team to exploit the idea, still inside Yammer, without having to move to a team site. Alternatively, we can reach out from a team site into a broader community group inside Yammer if and when the need arises. The benefit of making this investment in learning is the flexibility it affords you, as an individual, to be in charge of your own productivity and performance.

Yammer Benchmarking Edition 1

The first in a series of SWOOP Yammer benchmarking video blogs. SWOOP has benchmarked some 36 Yammer installations to date. This first edition shares some insights gained on the important measures that influence collaboration performance.

Video script:

SLIDE 1

Hello there

My name is Laurence Lock Lee, and I'm the Co-Founder and Chief Scientist at Swoop Analytics.

If you are watching this you probably know what we do, but just in case you don’t, Swoop is a social analytics dashboard that draws its raw data from enterprise social networking tools like Yammer and provides collaboration intelligence to its users, who can be anyone in the organisation.

Our plan is to provide an ongoing series of short video blogs specifically on our Yammer benchmarking insights as we work with the data we collect. We will aim to use this format to keep you apprised of developments as they happen. We have also recently signed a joint research agreement with the Digital Disruption Research Group at the University of Sydney in Australia, so expect to see the results of this initiative covered in future editions.

The Swoop privacy safeguards mean it is pure, context-free analysis: no organisational names, group names or individual names…we don't collect them.

SLIDE 2

This is the "Relationships First" framework we designed for our benchmarking. We also measure traditional activity levels, though we tend not to favour them as a collaboration performance measure…but more about that later. The 14 measures help us characterise the organisations we benchmark by comparing them against the maximum, minimum and average scores of those in our sample set, which currently sits at 36 organisations and is growing rapidly. They represent organisations large and small, from a full cross-section of industries and geographies.

SLIDE 3

For those of you who have not been exposed to the Swoop behavioural online personas, you will find a number of articles on our blog.

Because I will be referring to them, it's useful to know the connection patterns inferred by each. We don't include the 'Observer' persona here, as Observers are basically non-participants.

Starting with the Responder: Responders make connections by responding to other people's posts or replies. This can be a simple 'like', mention or notification…and it often is, but sometimes it can be a full written reply.

In contrast, the Catalyst makes connections through people replying to their posts. A good Catalyst can make many connections from a single good post. Responders have to work a bit harder: they mostly get only one connection per interaction.

The Engager, as you can see, is able to mix giving and receiving. This is a bit of an art, but important, as Engagers are often the real connectors in a community or group.

And what about the Broadcaster? Well, if your posts don't attract any response, then we can't identify any connections for you.

SLIDE 4

This is how we present our benchmarking results to the participants. You can see that we have the 14 dimensions normalised such that the 'best in class' result scores 100 points and the worst performance scores zero. The orange points are the scores for the organisation, with lines connecting them to the average scores.

A few points to note: we only count 'active users', being those who have had at least one activity in Yammer over the period we analyse, which is the most recent six months.

Some of the measures have asterisks (*), which means the score has been reversed for comparison purposes. For example, a high score for % Observers is actually a bad result, so it is reversed.
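The normalisation described above can be sketched as simple min-max scaling with an optional reversal for the asterisked measures. This is an illustrative sketch only; the function name and sample numbers are ours, not SWOOP's published formula.

```python
def normalise(score, min_score, max_score, reverse=False):
    """Scale a raw benchmark score so best-in-class = 100 and worst = 0.

    For measures where a high raw value is bad (e.g. % Observers),
    reverse=True flips the scale after normalising.
    """
    if max_score == min_score:
        return 0.0  # no spread in the sample; nothing to rank
    scaled = 100 * (score - min_score) / (max_score - min_score)
    return 100 - scaled if reverse else scaled

print(normalise(30, 10, 50))                # 50.0
print(normalise(40, 10, 70, reverse=True))  # 50.0
```

Applying this per measure yields the 0-100 radar-style comparison chart described above.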

Finally, not all of the measures are independent of each other, so it is possible to see recurring patterns amongst organisations, and we can therefore tell a story of their journey to date through these patterns. For example, a poor post/reply ratio indicates to us that the network is immature, and therefore we would also expect a high % Observers score.

SLIDE 5

One way of understanding which of the 14 measures are most important to monitor is to look at the relative variance of each measure across the full sample set. Where we see a large relative variance, we might assume this is an area which provides the most opportunity for improvement. In our sample to date it is the two-way connections measure that leads the way; I'll go into a bit more detail on this later. The % Direction measure relies solely on the use of the 'notification' type, which we know some organisations have asked users to avoid, as it's really just like a cc in an email, so perhaps we can ignore this one to some extent. The post/reply measure is, we believe, an indicator of maturity. For a new network we would expect a higher proportion of posts to replies, as community leaders look to grow activity. Over time, however, we would expect the ratio to move toward favouring replies, as participants become more comfortable with online discussions.

It's not surprising that this measure shows up, as we do have quite a mix of organisations at different maturity stages in our sample to date. The area where we have seen less variance is the behavioural personas, perhaps with the exception of % Broadcasters. This suggests that, at least at the enterprise level, organisations are behaving similarly.
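The relative-variance ranking can be sketched with a coefficient of variation (standard deviation relative to the mean), so that measures on different scales are comparable. The measure names and scores below are invented for illustration.

```python
from statistics import mean, pstdev

# Hypothetical benchmark scores per measure across five participants.
samples = {
    "two_way_connections": [5, 40, 12, 60, 8],
    "%_broadcasters": [10, 12, 11, 13, 10],
}

def relative_variance(scores):
    """Coefficient of variation: spread relative to the average level."""
    m = mean(scores)
    return pstdev(scores) / m if m else 0.0

ranked = sorted(samples, key=lambda k: relative_variance(samples[k]), reverse=True)
print(ranked[0])  # the measure with the most room for improvement
```

A high coefficient of variation flags a measure on which organisations genuinely differ, which is the signal used here to pick out two-way connections.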

SLIDE 6

This slide is a little more complex, but it is important if you are to gain an appreciation of some of the important relationship measures that SWOOP reports on.

Consider this simple example:

Mr Catalyst here makes a post in Yammer. It attracts a response from Ms Responder and Mr Engager. These responses we call interactions, or activities. By undertaking an interaction, we have also created a connection for all three participants.

Now, Mr Engager's response was a written reply that mentions Ms Responder, because that's the sort of guy he is. Mr Catalyst responds in kind, so now you can see that Mr Catalyst and Mr Engager have created a two-way connection.

And Ms Responder responds to Mr Engager's mention with an appreciative like, thereby creating a two-way connection between Mr Engager and Ms Responder. Mr Engager is now placed as a broker of the relationship between Mr Catalyst and Ms Responder. Mr Catalyst could create his own two-way connection with Ms Responder, but perhaps she just responded to Mr Catalyst with a like…leaving little opportunity for a return response.

So after this little flurry of activity each individual can reflect on connections made…as Mr Engager is doing here.

So, in summary: an interaction is any activity on the platform. A connection is created by an interaction and, of course, strengthened by further interactions with that connection. Finally, we value two-way connections because they represent reciprocity, which we know leads to trust and more productive collaboration.
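The example above can be sketched as a small interaction log from which connections and two-way connections are derived. The persona names match the story; the data structure is ours, for illustration.

```python
# Hypothetical interaction log: (actor, target) means `actor` responded
# to something `target` contributed.
interactions = [
    ("engager", "catalyst"),    # written reply to Mr Catalyst's post
    ("responder", "catalyst"),  # a like on the post
    ("catalyst", "engager"),    # Mr Catalyst replies in kind
    ("engager", "responder"),   # the mention of Ms Responder
    ("responder", "engager"),   # appreciative like on the mention
]

directed = set(interactions)
# A connection is any pair that has interacted, in either direction.
connections = {frozenset(pair) for pair in directed}
# A two-way connection is a reciprocated pair.
two_way = {frozenset((a, b)) for a, b in directed if (b, a) in directed}

print(len(connections), len(two_way))  # 3 2
```

As in the story, Mr Catalyst and Ms Responder interact but never reciprocate, so they form a connection but not a two-way connection, leaving Mr Engager as the broker.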

SLIDE 7

Finally, I want to show you how the two-way connections score varies amongst the 36 participants to date. Typically we would look to build the largest and most cohesive Yammer network possible, though we accept this might not always be the goal. While the data shows that the top four most cohesive networks were relatively small, there are also three organisations with quite large networks and quite respectable two-way connections scores.

So there is definitely something to be learnt here between the participants.

SLIDE 8

So in summing up: as of September we have 36 participants in our benchmark, and the number is growing rapidly. The two-way connections measure, which is arguably the most important predictor of collaborative performance, was also the most varied amongst the participants.

By looking at the relationships between the measures we can start to see emerging patterns. We hope to explore these in more detail with our research partners in the coming year.

Finally, we have shown that network size should not be seen as a constraint on building a more cohesive network. We have also reported previously that another common measure, network activity level, is unreliable for predicting collaboration performance.

SLIDE 9

In the next video blog we will be looking at Yammer groups in more detail. We are aware that for many organisations, it’s the Yammer groups that form the heart of the network, so it makes sense to take a deeper dive into looking at them.

Thank you for your attention; we look forward to seeing you next time.

Disrupt Sydney 2016 – Not to Take Anything for Granted 

Q&A with SWOOP Co-Founder Dr Laurence Lock Lee after attending and presenting at “Disrupt Sydney” on 23 September, 2016. Disrupt Sydney is a one-day annual conference organised by the University of Sydney which aims to discuss and debate (digital) disruption.


What is digital disruption and why is it important for organisations? 

Here is the definition used by the event organisers:

“Digital disruption refers to changes enabled by digital technologies that occur at a pace and magnitude that disrupt established ways of value creation, social interactions, doing business and more generally our thinking” 

This event is in its fourth edition and was big news at its initiation. In his introduction, the founder of the event, Prof. Kai Reimer, bemoaned the fact that the term has since been 'hijacked' by mainstream media, with just about anything declared a digital disruption. But the strong attendance suggests that many of us are excited by the potential for significant change to the status quo, whatever the label.

Can you share any conference tips to leverage digital disruption?

I think the biggest tip would be “not to take anything for granted!” We had several speakers challenge the audience with findings from their own research and experiences. Statements like “Brainstorming meetings are a waste of time”, “Open offices are bad for you”, “Multi-discipline teams don’t work”, “Games make failure fun”, “It’s very difficult to live in the share economy “, “Blockchain claims are all lies”; kept us on our toes.
  

How will you implement any of the learnings at SWOOP?

I co-facilitated a workshop session on "Disrupting Traditional Business Intelligence Systems with Social Data". My claim was that traditional data-warehouse-based business intelligence systems had changed little since the 1970s, were costly to build and of questionable value, and were therefore ripe for disruption. My disruptive proposition was that we should move the emphasis to the execution stage, using social analytics to monitor whether insights were engaging the collaborators required to take action. We had four teams work through their disruptive ideas covering the full scope of BI. Some key points I took away: firstly, no piece of intelligence will be universally accepted by all, no matter how robust the intelligence-gathering process is. The climate change debate was mentioned as proof of that! Secondly, in order to engage disinterested stakeholders we may need to employ some gamification tactics. Both of these points reinforce the direction we are taking with SWOOP: using gamification to better communicate our social analytics messages.

Who was your stand out presenter and why?

Well, to be fair, the question should be "other than Dr. Karl Kruszelnicki". Of course, Dr. Karl is a recognised national treasure for his ability to communicate science. He even did a reasonable job of trying to explain Blockchain…the biggest technological black box in history!

I liked Claire Marshall's talk on her experiment with living in the share economy in London for a full month. She met some tremendously 'giving' people but found it hard to earn a living through the freelancing sites. Basically, it was hard work to win and then do the work, for not much reward.

What was most exciting for you to hear?

Of all of Dr. Karl's stories, the one I remember most was about Russian submarines surfacing from under the Arctic ice at the same time and place for more than 30 years. The submariners were able to support the climate change claims because the thickness of the ice they had to break through was getting less and less each year.

Stand out conference insight

I was excited to hear a detractor of Blockchain. As we know, Blockchain is the next 'big thing', and there was a panel on non-financial uses of Blockchain. Dr. Karl facilitated the session and tried to simplify the concept for the audience, but in the end it sounded like you needed to be a mathematical geek to make any sense of how it could work. The detractor was an acknowledged 'mathematical geek'. As you can see, I'm not that fond of 'black box' solutions.

What other question would you ask yourself?

None. You’ve done a great job!
  

Anything else you’d like to add?

Only that this has been my first time at the event, and I will definitely be back next year. Perhaps with a story from the joint research we are conducting with the Digital Disruption Research Group (the organisers of the event).