SWOOP Video Blog 2 – Yammer Groups

The second in our SWOOP Video Blog Series:

Slide 1

Hi there, I’m Laurence Lock Lee, the co-founder and chief scientist at Swoop Analytics.

In this second episode of Swoop Benchmarking insights we are drilling down to the Yammer Group level. Groups are where the real collaborative action happens.

As Yammer Groups can be started by anyone in the organisation, they quickly build up to hundreds, if not thousands, in some organisations. Looking at activity levels alone, we see that the majority of groups do not sustain consistent activity, while a much smaller proportion appear to be really thriving.

As useful as activity levels and membership size are, we have suggested before that they are crude measures which can mask the true relationship-centred collaboration performance being achieved.

In this session we provide insights into how organisations can compare and benchmark their internal groups.

Slide 2

There is no shortage of literature and advice on how to build a successful online community or group. The universal advice for the first step is to identify the purpose. A well-articulated purpose statement will identify what success would look like for the group or community.

What we do know from our experience to date is that online groups are formed for a variety of purposes. IBM has conducted a detailed analysis of its internal enterprise social networking system, looking to see if the usage logs could delineate the different types of groups being formed. What they found was five well-delineated types of groups (see http://perer.org/papers/adamPerer-CHI2012.pdf).

The identified group types were:

  1. Communities of Practice. CoPs are the centerpiece of knowledge-sharing programs. Their purpose is to build capability in selected disciplines. They will usually be public groups. For example, a retail enterprise may form a CoP covering all aspects of establishing and running a new retail outlet. The community would be used to share experiences on the way to converging on a suite of ‘best practices’ that it would aim to implement across the organisation.
  2. Team/Process. This category covers task-specific project teams, or alternatively a shared space for a business process or function. In most cases these groups will be closed or private.
  3. Ideas Sharing. These groups are formed for sharing ideas and, hopefully, generating new value from innovations. It is best to think about such groups in two stages: exploration and exploitation. The network needs to be large and diverse to uncover the most opportunities. However, the exploitation stage requires smaller, more focused teams to ensure a successful innovation.
  4. The Expert/Help type group is what many of us know from the technical forums we might go to externally for help. For novices, the answers are more than likely available in previously answered questions. In essence, these groups are characterised by many questions posted, for a selected few to answer.
  5. Finally, social (non-work) groups are sometimes frowned on, but in practice they are risk-free places for staff to learn and experience online networking, so they play an important part in the groups portfolio.

Slide 3

This table summarizes the purposes and therefore value that can accrue from the different group types. Some important points that can be taken from this are:

  • Formally managed documents are important for some group types, like CoPs and teams, but less so for others, where archival search may be sufficient.
  • Likewise with cohesive relationships, which are critical for teams but less so for Expert/Help groups.
  • Large isn’t always good. For idea sharing, the bigger and more diverse, the better. For teams, research has shown that once we get past about 20 members, productivity decreases (https://www.getflow.com/blog/optimal-team-size-workplace-productivity).

Slide 4

More than 80 years of academic research on the performance of networks can be reduced to an argument between the value of open, diverse networks and closed, cohesive networks. This graphic was developed by Professor Ron Burt from the University of Chicago Business School, who is best known for his research on brokerage in open networks. However, in his 2005 book Brokerage and Closure, Burt concedes that value is maximised when diversity and closure are balanced.

It is therefore this framework that we are using for assessing and benchmarking Yammer Groups.

Slide 5

For pragmatic reasons we use group size as a proxy for diversity, on the assumption that the larger the group, the more diverse its membership is likely to be. For cohesion, we measure the average number of two-way connections per member, on the assumption that if members have many reciprocated relationships inside the group, the group is likely to be more cohesive.
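As an illustrative sketch only (not SWOOP’s actual implementation), these two proxy measures could be computed from a list of directed interaction pairs. The member names and interaction log below are invented:

```python
# Hypothetical interaction log for one group: (sender, receiver) pairs.
interactions = [
    ("ana", "ben"), ("ben", "ana"),  # reciprocated: a two-way connection
    ("ana", "cam"),                  # one-way only
    ("dee", "ben"), ("ben", "dee"),  # reciprocated: a two-way connection
]

members = {person for pair in interactions for person in pair}
directed = set(interactions)

# A two-way connection exists when both directions are present.
two_way = {tuple(sorted(pair)) for pair in directed if pair[::-1] in directed}

diversity_proxy = len(members)              # group size as a proxy for diversity
cohesion = 2 * len(two_way) / len(members)  # average two-way connections per member
```

Here each reciprocated pair contributes one connection to each of its two members, which is why the total is doubled before dividing by group size.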

This plot shows a typical pattern. The bubble size is based on group activity, so as you can see, activity remains an important measure. But a group’s position on the network performance chart can be quite differentiated by its respective diversity and cohesion measures.

The pattern shown is also consistent with our prior network survey results, which essentially show that it is difficult not to see diversity and cohesion as a trade-off; the ideal of maximum performance in the top right corner is, in fact, just that: an ideal.

Slide 6

Now if we overlay what we see as ideal ‘goal states’ for the different types of groups that can be formed, it is possible to assess more accurately how a group is performing.

For example, a community of practice should have moderate to high cohesion and a group size commensurate with the ‘practice’ being developed.

The red region shows where high-performing teams would be located. High-performing teams are differentiated by their levels of cohesion; group size and even relative activity levels are poor indicators for a group formed as a team. If your group aims to be a shared ideas space but you find yourself characterised as a strong team, then you are clearly in danger of ‘groupthink’.

Likewise you can infer a goal space for the Expert/Help group type.

If you are an ideas-sharing group you have an extra measure: the number of exploitation teams launched from ideas qualified in your group.

For group leaders who start in the bottom left, and the many who are still there, it becomes an exercise in rethinking the group’s type and purpose, then deciding the most appropriate actions for moving the group into the chosen goal space.

For some this may be growing broader participation, if you are an Expert/Help group; for others, building deeper relationships, if you are a community of practice or functional team.

Slide 7

So in summing up:

Groups come in different shapes and sizes, and simple activity levels and membership size are insufficient for assessing their success or otherwise.

Gaining critical mass for a group is important. Research has shown that critical mass also depends on things like the diversity of the membership and the modes used to generate productive outputs (http://research.microsoft.com/en-us/um/redmond/groups/connect/CSCW_10/docs/p71.pdf).

The Diversity vs Cohesion network performance matrix provides a more sophisticated means for groups to assess their performance than simple activity and membership measures.

Once group leaders develop clarity around their form and purpose, the network performance framework can be used to provide them with more precise and actionable directions for success.

Slide 8

We have now covered benchmarking externally at the enterprise level and internally at the group level.

Naturally, the next level is to look at and compare the members inside successful groups.

Thank you for your attention and we look forward to having you at our next episode.

Who Should Decide How You Should Collaborate or Not?

In a post before Microsoft’s recent Ignite 2016 conference, we intimated that we hoped that, in the push to build the ultimate office tool, the core features of the component parts would not be sacrificed in the name of standardisation. I can happily say now that, post MS Ignite, it appears that at least the product we are most interested in, Yammer, has re-surfaced as a more integral part of Office 365 without sacrificing its core value proposition. As Yammer core users, it appears that as circumstances arise where our collaboration partners might need to manage content, collaborate in real time, or schedule and manage an event, we will be able to seamlessly access these core functions of other components like SharePoint, Skype and Outlook. While we know events like MS Ignite are mostly about announcing intentions rather than working products, it is comforting to see a positive roadmap like this.

In effect, Office 365 now offers a whole multiplex of collaboration vehicles. There will be individuals looking for a simple ‘usage matrix’ of what to use when. Yet collaboration can mean different things to different people. Is working in your routine processing team collaboration? Is reading someone else’s content collaboration? Is sending an email collaboration?

How do we define Collaboration?

A couple of years ago, Deloitte Australia’s economics unit produced a significant report on the economic value of collaboration to the Australian economy. As part of the process, Deloitte surveyed thousands of workers, looking at how they spent their time at work, specifically in relation to collaboration activities:

[Image: Deloitte survey breakdown of work time across collaboration-related categories]

While the numbers will vary between individuals, we can treat the categories as typical work tasks and map them to O365 components. For me, the nearly 10% ‘Collaboration’, and probably ‘Socialising’, is a natural home for Yammer. Routine tasks fit nicely into SharePoint and team sites; Outlook covers routine communication. Individual work maps very nicely to core Office 365 tools like Word, Excel and PowerPoint. So these work categories can be nicely mapped to the O365 components. But does just knowing this help us use them productively? Who decides how we should interact?

Who should control collaboration?

The Deloitte work characterisation separates ‘collaboration’ out from ‘interactions’, as activities that staff engage in to improve the way they work: improvising and innovating. While it may constitute only 10% of work time on average, its impact lies in improving the productivity of, say, routine tasks, routine communication and even individual work. So is it the role of managers to dictate modes of collaboration for their staff? Maybe it’s community managers or workplace improvement specialists? As the workplace becomes more distributed and networked, it is quickly moving beyond the capability of specialist roles to orchestrate collaborative processes without bloating the middle-management layers.

So what are we left with? I believe it all comes back to the individual to ‘negotiate’ how they interact and collaborate. As it turns out, the one who knows best how to improve your productivity is you. A comprehensive study on time-wasting by Paychex found that the most effective way to reduce time-wasting is more flexible time scheduling or time off. Carpool recently ran an experiment in working from anywhere. Carpool CEO Jarom Reid speaks about the productivity improvements available when you have the flexibility of not being tied to a physical office. In the industrial age we became used to executives’ jobs being solely about linking and communication. However, Reid, as the leader of a digitally enabled organisation, values having personal time in which he can feel more productive than in the office. Andrew Pope writes about the dangers of over-collaboration. We all want our collaborations and interactions with colleagues to be productive, and we feel we are over-collaborating when we have wasted time in non-essential meetings. Pope suggests that individuals should take control of their collaboration activities to match their natural styles and tendencies, rather than trying to adhere to a particular organisational norm.

How will Office 365 Help?

So how would the new world of Office 365 support individual-preference-led collaboration? For those of us used to living in Yammer, SharePoint or Outlook, it puts the onus on the individual to become competent in all the key toolsets if we are to accommodate the preferences of our collaboration partners and avoid ‘tool silos’.

The nice thing about the Office 365 roadmap is that the tool silo walls have become more elastic. We can form a group in Yammer to explore an idea and then form a team to exploit the idea, still inside Yammer, without having to move to a team site. Alternatively, we can reach out from a team site into a broader community group inside Yammer if and when the need arises. The benefit of making this investment in learning is the flexibility it affords you, as an individual, to be in charge of your own productivity and performance.

Yammer Benchmarking Edition 1

 

The first in a series of SWOOP Yammer Benchmarking video blogs. Swoop has benchmarked some 36 Yammer installations to date. This first video blog shares insights gained on the important measures that influence collaboration performance.

 

Video script:

SLIDE 1

Hello there

My Name is Laurence Lock Lee, and I’m the Co-Founder and Chief Scientist at Swoop Analytics.

If you are watching this you probably know what we do, but just in case you don’t, Swoop is a social analytics dashboard that draws its raw data from enterprise social networking tools like Yammer and provides collaboration intelligence to its users, who can be anyone in the organisation.

Our plan is to provide an ongoing series of short video blogs specifically on our Yammer benchmarking insights, as we work with the data we collect. We will aim to use this format to keep you apprised of developments as they happen. We have also recently signed a joint research agreement with the Digital Disruption Research Group at the University of Sydney in Australia, so expect to see the results of this initiative covered in future editions.

The Swoop privacy safeguards mean the analysis is purely context-free: no organisational names, group names or individual names… we don’t collect them.

SLIDE 2

This is the “Relationships First” benchmarking framework we designed for our benchmarking. We also measure traditional activity levels, which we tend not to favour as a collaboration performance measure… but more about that later. The 14 measures help us characterise the organisations we benchmark by comparing them against the maximum, minimum and average scores of those in our sample set, which currently sits at 36 organisations and is growing rapidly. They represent organisations large and small, from a full cross-section of industries and geographies.

SLIDE 3

For those of you who have not been exposed to the Swoop behavioural online personas, you will find a number of articles on our blog.

Because I will be referring to them it’s useful to know the connection patterns inferred by each of them. We don’t include the ‘Observer’ persona here as they are basically non-participants.

Starting with the Responder: Responders make connections by responding to other people’s posts or replies. This can be a simple ‘like’, mention or notification… and it often is, but sometimes it is a full written reply.

In contrast, the Catalyst makes connections through people replying to their posts. A good Catalyst can make many connections from a single good post. Responders have to work a bit harder; they mostly get only one connection per interaction.

The Engager, as you can see, is able to mix giving and receiving. This is a bit of an art, but important, as Engagers are often the real connectors in a community or group.

And what about the Broadcaster? Well, if your posts don’t attract any response, then we can’t identify any connections for you.

SLIDE 4

This is how we present our benchmarking results to participants. You can see that we have the 14 dimensions normalized such that the ‘best in class’ results score 100 points and the worst performance zero. The orange points are the scores for the organisation, with lines connecting them to the average scores.

A few points to note: we only count ‘active users’, being those that have had at least one activity in Yammer over the period we analyze, which is the most recent six months.

Some of the measures have asterisks (*), which means the score has been reversed for comparison purposes. For example, a high score for %Observers is actually a bad result, so it is reversed.
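A minimal sketch of this kind of scoring, assuming a simple linear min-max normalisation (the exact SWOOP formula isn’t given here), with invented sample values:

```python
def normalise(value, sample, reverse=False):
    """Scale a score so best-in-class = 100 and worst = 0."""
    lo, hi = min(sample), max(sample)
    score = 100 * (value - lo) / (hi - lo)
    # For asterisked measures (e.g. %Observers), high raw values are bad,
    # so the scale is flipped.
    return 100 - score if reverse else score

observer_pcts = [10, 25, 40, 55, 70]  # hypothetical %Observers across organisations
score = normalise(25, observer_pcts, reverse=True)  # → 75.0
```

An organisation with a relatively low %Observers thus ends up near the top of the comparison chart, as intended.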

Finally, not all of the measures are independent of each other, so it is possible to see recurring patterns amongst organisations. We can therefore tell the story of an organisation’s journey to date through these patterns. For example, a poor post/reply ratio indicates to us that the network is immature, and therefore we would also expect a high %Observers score.

SLIDE 5

One way of understanding which of the 14 measures are most important to monitor is to look at the relative variance of each measure across the full sample set. Where we see a large relative variance, we might assume this is an area that provides the most opportunity for improvement. In our sample to date, it is the two-way connections measure that leads the way; I’ll go into more detail on this later. The %Direction measure relies solely on the use of the ‘notification’ type, which we know some organisations have asked users to avoid, as it’s really just like a cc in an email, so perhaps we can ignore this one to some extent. The post/reply measure is, we believe, an indicator of maturity. For a new network we would expect a higher proportion of posts to replies, as community leaders look to grow activity. However, over time we would expect the ratio to move toward favoring replies, as participants become more comfortable with online discussions.

It’s not surprising that this measure shows up, as we do have quite a mix of organisations at different maturity stages in our sample to date. The area where we have seen less variance is the behavioural personas, perhaps with the exception of %Broadcasters. This suggests that, at least at the enterprise level, organisations behave similarly.
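One plausible way to compute relative variance is the coefficient of variation (standard deviation divided by mean); this is a sketch under that assumption, with invented scores, not SWOOP’s published method:

```python
import statistics

# Hypothetical benchmark scores per measure across participating organisations.
scores = {
    "two_way_connections": [5, 40, 12, 80, 25],
    "pct_broadcasters":    [18, 20, 22, 19, 21],
}

# Coefficient of variation: std dev / mean. A larger value means more
# spread between organisations, and so arguably more headroom to improve.
cv = {name: statistics.pstdev(vals) / statistics.mean(vals)
      for name, vals in scores.items()}
most_varied = max(cv, key=cv.get)  # "two_way_connections"
```

With these toy numbers, the widely scattered two-way connections scores dominate, while the tightly clustered %Broadcasters scores barely register, mirroring the pattern described above.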

SLIDE 6

This slide is a little more complex, but it is important if you are to gain an appreciation of some of the important relationship measures that SWOOP reports on.

Follow this simple example:

Mr Catalyst here makes a post in Yammer. It attracts a response from Ms Responder and Mr Engager. These responses we call interactions, or activities. By undertaking an interaction, we have also created a connection for all three participants.

Now, Mr Engager’s response was a written reply that mentions Ms Responder, because that’s the sort of guy he is. Mr Catalyst responds in kind, so now you can see that Mr Catalyst and Mr Engager have created a two-way connection.

And Ms Responder responds to Mr Engager’s mention with an appreciative like, thereby creating a two-way connection between Mr Engager and Ms Responder. Mr Engager is now placed as a broker of the relationship between Mr Catalyst and Ms Responder. Mr Catalyst could create his own two-way connection with Ms Responder, but perhaps she just responded to Mr Catalyst with a like, leaving little opportunity for a return response.

So after this little flurry of activity, each individual can reflect on the connections made, as Mr Engager is doing here.

So in summary: an interaction is any activity on the platform. A connection is created by an interaction and, of course, strengthened by further interactions with that connection. Finally, we value two-way connections because they represent reciprocity, which we know leads to trust and more productive collaboration.
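The walkthrough above can be replayed as a small sketch. The (actor, target) event tuples are my own encoding of the example, not SWOOP’s data model:

```python
# Each event is (actor, target): who interacted with whom.
events = [
    ("Responder", "Catalyst"),   # Ms Responder responds to the post
    ("Engager",   "Catalyst"),   # Mr Engager replies to the post...
    ("Engager",   "Responder"),  # ...and mentions Ms Responder
    ("Catalyst",  "Engager"),    # Mr Catalyst responds in kind
    ("Responder", "Engager"),    # Ms Responder likes the mention
]

interactions = len(events)                          # every activity counts
directed = set(events)
connections = {tuple(sorted(e)) for e in directed}  # undirected connections
two_way = {c for c in connections if c in directed and c[::-1] in directed}
```

Running this gives five interactions but only three connections, of which two are two-way (Catalyst–Engager and Engager–Responder), matching the story: Catalyst and Responder never reciprocated.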

SLIDE 7

Finally, I want to show you how the two-way connections scores vary amongst the 36 participants to date. Typically, we would look to build the largest and most cohesive Yammer network possible, though we accept this might not always be the goal. While the data shows that the top four most cohesive networks were relatively small, there are also three organisations with quite large networks and quite respectable two-way connections scores.

So there is definitely something for the participants to learn from each other here.

SLIDE 8

So in summing up: as of September we have 36 participants in our benchmark, and the number is growing rapidly. The two-way connections measure, which is arguably the most important predictor of collaborative performance, was also the most varied amongst the participants.

By looking at the relationships between the measures, we can start to see recurring patterns. We hope to explore these in more detail with our research partners in the coming year.

Finally, we showed that network size should not be seen as a constraint on building a more cohesive network. We have reported previously that another common measure, network activity level, is also unreliable for predicting collaboration performance.

SLIDE 9

In the next video blog we will be looking at Yammer groups in more detail. We are aware that for many organisations, it’s the Yammer groups that form the heart of the network, so it makes sense to take a deeper dive into looking at them.

Thank you for your attention, and we look forward to seeing you next time.