Friday, January 10, 2014

Unified Index to Information Repositories

The amount of information is doubling every 18 months, and unstructured information volumes are growing six times faster than structured ones.

Employees spend far too much time, about 20% of their time on average, looking for, failing to find, and recreating information. Even once they find the information, 42% of employees report having used the wrong information, according to a recent survey.

To combat this reality, companies have for years spent hundreds of thousands, even millions, of dollars moving data to centralized systems in an effort to better manage and access its growing volumes, only to be disappointed as data continues to proliferate outside those systems. Even with a single knowledgebase in place, employees report a decline in critical customer service metrics due to the inability to quickly locate the right knowledge and information to serve customers.

Despite best efforts to move data to centralized platforms, companies are finding that their knowledge runs throughout enterprise systems, departments, divisions, and newly acquired subsidiaries. Knowledge is stored offline on PCs and laptops, in emails and archives, intranets, file shares, CRM systems, ERPs, home-grown systems, and many others, across departments and across geographies.

Add to this the proliferation of enterprise application use (including social networks, wikis, blogs and more) throughout organizations, and it is no wonder that efforts to consolidate data into a single knowledgebase, a single "version of the truth," have failed... and at a very high price.

The bottom line is that moving data into a single knowledgebase is a losing battle. There is a much more effective way to manage your knowledge ecosystem without moving data.

When multiple systems containing an organization's information are in place, a better approach is to stop moving data and instead combine structured and unstructured data from virtually any enterprise system, including social networks, into a central, unified index. Think of it as an indexing layer that sits above all enterprise systems, from which services can be provided to multiple departments, each configured to that department's specific needs.
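The indexing layer described above can be sketched in miniature. The example below is a toy, with hypothetical repository names ("crm", "email", "wiki") as assumptions; a real enterprise search product would add connectors, security trimming, and relevance ranking, but the core idea is the same: index terms and pointers, and leave the documents where they live.

```python
from collections import defaultdict

class UnifiedIndex:
    """Toy inverted index over documents from multiple repositories.
    Documents stay in their source systems; only terms and pointers are stored."""
    def __init__(self):
        self.postings = defaultdict(set)   # term -> {(source, doc_id)}
        self.metadata = {}                 # (source, doc_id) -> metadata dict

    def ingest(self, source, doc_id, text, metadata=None):
        key = (source, doc_id)
        self.metadata[key] = metadata or {}
        for term in text.lower().split():
            self.postings[term].add(key)

    def search(self, query):
        """Return (source, doc_id) pairs containing every query term."""
        terms = query.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(self.postings.get(t, set()) for t in terms))
        return sorted(hits)

# Hypothetical repositories keep their data; the index stores only references.
index = UnifiedIndex()
index.ingest("crm", "acct-42", "Acme contract renewal proposal")
index.ingest("email", "msg-7", "Acme renewal pricing discussion")
index.ingest("wiki", "kb-3", "Onboarding checklist")

print(index.search("acme renewal"))  # matches in both the CRM and email sources
```

A departmental dashboard would then be a configured view over `search` results, filtered by source and metadata.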

This approach enables dashboards focused on various business departments, processes, and prospective customers. Such composite views of information provide new, actionable perspectives on many business processes, including overall corporate governance. The resulting juxtaposition of key metrics and information improves decision making and operational efficiency.

This approach allows IT departments to leverage their existing technology and avoid the significant costs associated with system integration and data migration projects. It also helps companies avoid forcing their processes into a one-size-fits-all, cookie-cutter framework.

With configurable dashboards, companies decide how, where, and for which groups of employees information and knowledge are presented and workflows enabled. Information monitoring and alerts facilitate compliance. There is virtually no limit to the type of information, or where it is pulled from, into the central, unified and, importantly, highly secure index: structured and unstructured, from corporate email, files, archives, desktops, and many CRM, CMS, and knowledgebase systems.

Enterprise applications have proliferated throughout organizations, becoming rich with content. And yet all of that knowledge and all of that content remain locked within each community, often not easily available even to the members themselves.

Now it is possible to leverage the wisdom of communities in enterprise search efforts. User rankings, best bets and the ability to find people through the content they create are social search elements that provide the context employees and customers have come to expect from their interactions with online networks.
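As a rough sketch of how such social signals might be blended into ranking, consider the scoring function below. The weights, field names, and documents are illustrative assumptions, not taken from any particular product.

```python
def social_score(base_relevance, avg_user_rating, is_best_bet,
                 rating_weight=0.3, best_bet_boost=0.5):
    """Blend a text-relevance score (0..1) with community signals:
    average user rating (0..5) and an editor-curated 'best bet' flag."""
    score = base_relevance + rating_weight * (avg_user_rating / 5.0)
    if is_best_bet:
        score += best_bet_boost
    return score

# Hypothetical results: the FAQ has weaker text relevance but strong
# community endorsement, so it outranks the stale memo.
results = [
    {"doc": "faq.html", "rel": 0.6, "rating": 4.5, "best_bet": True},
    {"doc": "old-memo.doc", "rel": 0.7, "rating": 1.0, "best_bet": False},
]
ranked = sorted(results,
                key=lambda r: social_score(r["rel"], r["rating"], r["best_bet"]),
                reverse=True)
print([r["doc"] for r in ranked])  # ['faq.html', 'old-memo.doc']
```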

Imagine one of your sales executives attempting to sell to one of your company's largest accounts. They access a composite, 360-degree view of that company and see not only the account history, sales opportunities, contact details, prior email conversations, proposals, contracts, and customer service tickets, but also that customer's recent comments on a blog post, complaints about service, and questions posed within your customer community.

Armed with this knowledge, your sales executive is in a more informed position to better assist and sell to that customer. Without moving data your sales executive has a single, composite view of information that strategically informs the sales process.

Ubiquitous knowledge access allows employees to search where they work. Once you have created the central index, you need to provide your employees with anytime, anywhere access to pertinent information and knowledge.

In many organizations, employees spend much of their time in MS Outlook. Other organizations with large sales teams need easy access to information on the road. Also valuable is the ability to conduct secure searches of enterprise content directly from a BlackBerry, including guided navigation. Even when systems such as laptops are disconnected, users can still find information from them directly from their mobile device. Again, without moving data, organizations can enjoy immediate access to pertinent knowledge and information, anywhere, anytime.

Companies that have stopped moving data report favorable results from their unified index layer across multiple repositories: faster resolution of customer issues, significant reductions in dedicated support resources, savings on upgrade costs for replaced legacy systems, higher self-service customer satisfaction, and shorter average response times to customer queries.

There are a few applications currently on the market that fulfill these functions: enterprise search applications.

However, there is no one-size-fits-all approach. Any solution should be based on an organization's business requirements.

Saturday, December 28, 2013

Tips to Ensure Knowledge Management Success

It is very important for the long-term success of a knowledge management initiative to align it with organizational strategy, especially in times of change. A KM initiative can "drift" over time if measures are not taken to keep it aligned with the organizational mission, new turns in direction, management changes, and changing product/service offerings.

Here are useful, actionable tips to ensure that an organization's knowledge management initiative succeeds not just at the launch stage but also over the years.

1. Bring knowledge management into mission-critical activities. Knowledge management is a great enabler of many business processes, and it is especially relevant to ensuring the success and continuity of mission-critical activities in areas ranging from banking to security. For example, you can leverage knowledge management to acquire, retain, and spread mission-critical knowledge in global IT services.

2. Focus on knowledge retention during times of downsizing or reorganization. Globalization, aging workforces, and economic downturns are leading to the loss of valuable knowledge. KM can help close that gap in the near term and especially in the long term.

3. Use KM to improve understanding and execution of business reorganization. KM sometimes gets put aside during complex organizational restructuring, but it can actually be useful in determining how to reorganize effectively. Some companies seem to spend almost half of their time on restructuring, yet do not use KM to make that restructuring more effective or innovative.

4. Go beyond connecting to networking. KM at the people level sometimes gets stuck at the stage of people profiles and a bewildering range of discussion forums. It is important to add collaborative tasks on top of such connections, so that actual networking takes place and collective intelligence emerges.

5. Conduct more research on knowledge work. With all the commotion about social media in the enterprise, people tend to forget that knowledge work is essentially built on effective communication. More research is needed about the changing workplace to understand how KM is becoming even more critical to 21st century organizations, and how knowledge seeking/collaboration behaviors of knowledge workers are changing.

6. Pay more attention to design and visualization. In a workplace of increasing information overload and multitasking, it is important to design knowledge interactions and interfaces in a compelling yet effective manner. Effective design can help in sense-making in fast changing and information-intensive environments.

7. Pay attention to the requirements of mobile knowledge workers. BYOD (bring your own device) is now an established practice. More and more employees and managers are using mobile devices not just for accessing information but for full workflows. Knowledge processes should be optimized for mobile devices, not just in terms of device interface but also in speed of delivery, e.g. fast-loading dashboards for sales teams.

8. Blend informal and formal activities in knowledge sharing sessions. For example, a knowledge fair format, with each project team presenting its achievements and learning, reinforces the KM message more strongly for all participants. The very act of presenting a KM case study can help employees develop a deeper appreciation of the strengths and opportunities of KM at work in the long term, and it instills a sense of pride.

9. Take the KM initiative to many different audiences, and don't restrict it to a select set of managers or project managers. The more people who engage with KM in full-time or part-time roles, the more buy-in KM will gain and the more value it will contribute.

10. Highlight KM practitioners across the organization. Don't just showcase the usual super-achievers; also feature the employees who are coming up with their first, unique work insights or first reuse of existing knowledge assets.

11. Don't pitch KM as an extra activity to be done after usual work hours; it should be embedded in regular workflow. Even additional activities such as conferencing and industry meetings should be seen as a way of learning, brainstorming and bench-marking.

12. Avoid too much theory. While the core team certainly needs to be abreast of developments in KM models and research, its recommendations and implementations must be demystified and simplified so that employees are not distracted or confused with more buzzwords.

13. Don't get hung up on the name KM. Some people seem to have a problem with the words knowledge management and even KM. Other terms such as collaborative work or knowledge sharing seem to be in use as well.

14. Use metrics and analytics effectively, and conduct KM course corrections as appropriate. Many KM initiatives stop their outcome studies at the level of activity metrics, failing to connect them to deeper processes, knowledge insights, people's attitudes, and overall impacts on productivity and innovation. One company reported that only 40% of its knowledge assets were being used, and some were being viewed only by their creators. At the same time, metrics are not the only form of assessment.

15. Help ensure long term success of KM by evangelizing it to employees. This helps create awareness in employees about the importance of KM and strengthens the KM initiative.

Thursday, November 21, 2013

Choosing the Right Content and Knowledge Management Tools

Effective content and knowledge management is a combination of theory, practice, and technology. You should not focus too much on the technology part without considering other parts.

However, effective technology deployment is essential to content and knowledge management success. The challenge is that there is no such thing as a "content and knowledge management tools" marketplace. Depending on the application, content and knowledge management can include different types of technology, spanning many diverse market segments.

Today, content and knowledge management practitioners need to follow technology developments.

For many years, the main platforms for content and knowledge management revolved around searchable knowledgebases and discussion forums. Enterprise portals emerged to try to present enterprise information via a single dashboard. That didn't usually work out so well, although portal technology still plays a key role for many use cases today.

Similarly, enterprise search has played an important part in enterprise content management, making the right information easily retrievable from multiple repositories through a single interface. Search technology still plays a critical role in many cases.

Now content management practitioners have to take into account a wide range of repositories and applications, from enterprise video to social media monitoring and intelligence. The diversity of content management technology continues to grow and proliferate.

Of course, knowledgebases still remain important, but the way we build and manage them has changed dramatically:
  • Wikis now power some of the most definitive knowledgebases within and beyond the enterprise.
  • Sophisticated social Q&A applications are generating impressive, demand-driven knowledge sets in many environments.
  • Digital community spaces are not new, but richer community platforms with increasingly important facilitation features have made them far more accessible in the enterprise.
  • Ideation (a.k.a., open innovation) applications are also coming of age, amid much healthy experimentation.
For content management practitioners, this means mastering a new set of technologies to address old problems. But the opposite is also true: some older technologies are finding new uses within the enterprise.

Digital asset management and media asset management platforms are not new. What has changed is their increasing adoption within broader enterprise contexts. More and more of our digital knowledge is no longer textual.

Much of the textual knowledge that does remain still resides in files waiting to be liberated. Hence the meteoric rise of file sharing services, most of them based in the cloud and many of them now targeting enterprise scenarios.

The rise of social media monitoring and intelligence has given new life to the field of text analytics, even while exposing the limitations of individual analytics engines.

Not every organization needs all those types of tools. But the savvy content management practitioner can help guide his or her colleagues to the appropriate technology for their organization.

Very often over the past decade, when content management practitioners began new projects, the preferred solution was Microsoft SharePoint, a platform that seemingly can do it all. Remember not to focus too much on any one platform; base decisions on your organization's business requirements.

For most organizations, initial investments in social computing have centered on creating social spaces where employees could go to engage in more informal discussions and networking. The actual results have often proved uneven, yet promising enough to sustain further investment and experimentation within most enterprises.

Social features are important to effective enterprise collaboration. A more social and collaborative digital workplace experience has become increasingly essential across enterprise computing. Your colleagues may really want a social layer across their digital experience, or they may not. Again, remember your business requirements.

Many new tools come with their own repositories and, left alone, will lead to more information silos, reducing their long-term value. Many vendors argue that search technologies will solve that problem; you also need to focus on things like filtering services for activity streams and appropriate levels of information management.

You will also add value by demonstrating that collaboration and knowledge sharing are not places people go, but things they do.

With the rise of mobile, that kind of contextual relevancy has become more urgent. But it is going to require an understanding of a wider range of technology choices.

Content management practitioners are uniquely positioned to help the organization to put new tools in the context of daily work. Understand the suitability of the right tool for the right job. Advocate for a scenario-based approach to all technology selections. The right tool is not sufficient for content and knowledge management success, but it is an increasingly important condition.

Friday, November 8, 2013

New Trends in Records Management

Records management has changed significantly, and new trends are emerging.

Records management professionals should adapt to these changes and be open to collaborating and working cross-functionally with business users to develop a road map for identifying and exploring new sources of corporate records.

Technology has evolved since the first wave of records management tools entered the market. Regulations changed and they now cover the broader spectrum of electronically stored information, for example, video, social media or instant messaging, not just the images or office documents that have traditionally been called "records." Information governance is emerging as a term that describes and supports a holistic, life cycle view of the creation, use, and protection of digital information.

Let's look more closely at these new trends.

Records management shifts to information governance.

Work-in-progress documents and structured or semi-structured information require governance, including protection from unauthorized access, use, deletion, or disclosure across their life cycle. In other words, the frequently changing data in your systems reflects your business activities. Holds and discovery to meet litigation or audit needs must be available regardless of an item's record status or storage location, i.e., on-premises or in a cloud service.

Many businesses continue to lack confidence in the progress of their electronic records management programs, compliance initiatives, and e-discovery preparedness. The shift to a more comprehensive and proactive management of information across its entire life cycle has begun.

The concepts of file plans, folders, file parts, and cutoffs have imposed constraints on how electronic information is organized and managed. The vocabulary of the paper records center is no longer relevant in a digital-first organization. Today, an information worker can compose an agreement in a cloud application such as Google Docs, share it with a colleague, edit it on a tablet device, and push revisions to a customer collaboration site even while on an airplane. In short, business records are increasingly generated outside traditional desktop applications.

New vendors are taking a fresh approach to addressing compliance, categorization, and retention requirements, governing information beyond the "record" and across its entire business life cycle rather than just at its end. Technologies with strong roots in search, archiving, and retention management offer the capabilities to manage many forms of electronically stored information, such as social activity or rich media, even when such information is not formally flagged as a record.

Cloud and social platforms render "file and declare" ineffective.

Traditional records management tools are slow to make the leap to the cloud. Records managers may be at risk of holding obsolete assumptions about importing or filing content into an on-premises records repository.

Records and compliance managers remain wary of cloud and social platforms. Enterprise architects and their peers in records management practice groups are not aligned on cloud computing benefits. Cloud providers are increasingly supporting content segregation, security, privacy, and data sovereignty requirements to attract regulated industries, and they are offering service-level agreements designed to reduce risks. In spite of that, records managers still cite security, legal, and privacy risks as the top three reasons to stall adoption of cloud services and SaaS.

Current records management systems are already missing many forms of electronically stored information. Older types of electronically stored information, such as images, e-mail or office documents, are often captured into traditional records management applications. Newer content types are less likely to have policy-driven life cycle or retention rules applied. Mobile messages, social media posts or websites are important sources of discoverable information, but application of legal holds to that content can be difficult.

Digital preservation forces itself onto the governance agenda.

Digital records with long-term retention schedules are at risk when hardware devices, software applications, and file formats become obsolete. Many first-generation business and personal productivity tools have been retired, and the inability to retrieve or view older digital records is becoming a reality.

Organizations are slowly waking up to digital preservation concerns. Migration, conversion and adoption of open standards are accepted approaches to solve the problem of accessibility over time. Those approaches, however, are not widely adopted at this time.

Decisions to retire older enterprise applications raise content preservation concerns. As organizations begin infrastructure renewal projects, particularly as new SaaS and cloud-based applications become viable alternatives, IT and records professionals must assess the risk of losing information in those older systems. Decisions to maintain older systems in read-only mode, to migrate data into newer systems or to dispose of older systems all together must be made in accordance with business, legal and compliance needs.

Open standards and open source change the sourcing landscape.

Companies drive significant change in the software acquisition landscape by calling for deliberate adoption of open standards and open source. Governments are hedging against the potential loss of electronic information, software obsolescence and increased costs, as well as demanding more portable data. Between 2011 and 2012, several national governments published directives to help their IT, records and procurement managers to understand, investigate and select more open technology platforms.

Open standards help to address preservation, accessibility, and interoperability needs. Open source helps to reduce cost and minimize vendor and platform lock-in. Programs developed by governments around the world have raised the profile and acceptance of open source software. Governments in the United Kingdom, the United States, and Europe have taken a proactive approach to using software systems and file formats based on open standards.

Auto-categorization becomes viable and approachable.

Transactional, regulated, and semi-structured content is ready for automated capture, categorization, and application of retention policies. Electronic information that uses a consistent structure and embedded metadata, or that includes predictable patterns of numbers or text, lends itself to content analytics, entity extraction, and categorization tools for ingestion and for applying retention, disposition, and security or privacy access controls. Opportunities to use auto-classification technologies for routine, high-volume, predictable electronic content are increasing as the technology matures, more vendors provide integrated offerings, and use cases are identified.
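The "predictable patterns" idea can be illustrated with a minimal, pattern-based classifier. The categories, retention policies, and patterns below are hypothetical assumptions for illustration; commercial tools layer entity extraction and machine learning on top of this kind of rule matching.

```python
import re

# Illustrative rules: (content pattern, category, retention policy).
# Rules are checked in order; the first match wins.
RULES = [
    (re.compile(r"\bINV-\d{6}\b"), "invoice", "retain 7 years"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "pii", "restrict access; retain 3 years"),
    (re.compile(r"\bcontract\b", re.IGNORECASE), "contract", "retain 10 years"),
]

def auto_classify(text):
    """Return (category, retention_policy) for the first matching rule,
    or a default bucket for manual review."""
    for pattern, category, policy in RULES:
        if pattern.search(text):
            return category, policy
    return "unclassified", "queue for manual review"

print(auto_classify("Payment due for INV-004217"))  # ('invoice', 'retain 7 years')
print(auto_classify("Meeting notes from Tuesday"))  # falls through to manual review
```

Routine, high-volume content with stable formats (invoices, forms, statements) is exactly where such rules stay accurate; free-form content is what the manual-review bucket is for.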

Auto-classification joins compliance needs and business priorities. High-volume, transactional information is a pain point when storage costs escalate and discovery requests arrive. Capture, categorization, and retention scheduling are functions that reduce costs, streamline customer service, and increase digitization of processes. Consistent organization creates a foundation for content analytics and predictive technologies. Consistent disposal of obsolete information reduces the need for more storage, speeds retrieval of data, and lowers the cost of e-discovery.

Big content is as important as big data and requires well-thought governance. Big data gets a lot of attention, but organizations must also cope with information stored in semi-structured or unstructured forms. Tabular data often sits unnoticed and unanalyzed in files created by individuals or small teams. E-mail, spreadsheets and ad hoc databases are used for critical business decisions and often sit under the radar of compliance or audit managers on file shares, collaboration sites or personal computers. 70% of companies use spreadsheets for critical business decisions, but fewer than 34% apply governance or controls to them.

Technology enforces and automates defensible approaches to disposition. Organizations that demonstrate consistent and predictable approaches to information handling, including its final deletion, are more successful when e-discovery orders compel extensive search, retrieval, review and production activities. Automation of routine processes, including scheduled disposal, lends weight to retention programs when challenged by legal counsel or auditors. Auto-classification tools ensure that retention and disposition rules are applied within specific parameters and are supported by documented policy rationale and citations.
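A defensible disposition run of the kind described above can be sketched as follows. The retention periods, record fields, and audit-log format are illustrative assumptions; the essential points are that legal holds always override deletion and that every action is logged.

```python
from datetime import date

# Hypothetical retention schedule: category -> retention period in days.
RETENTION = {"invoice": 7 * 365, "contract": 10 * 365, "memo": 2 * 365}

def disposition_run(records, today, audit_log):
    """Dispose of expired records, skip anything under legal hold,
    and log every action so the process is defensible on audit."""
    kept = []
    for rec in records:
        age = (today - rec["created"]).days
        expired = age > RETENTION.get(rec["category"], 0)
        if expired and not rec.get("legal_hold"):
            audit_log.append(f"DISPOSED {rec['id']} ({rec['category']}, age {age}d)")
        else:
            kept.append(rec)
            if expired:
                audit_log.append(f"HELD {rec['id']} (legal hold)")
    return kept

log = []
records = [
    {"id": "m1", "category": "memo", "created": date(2010, 1, 4)},
    {"id": "m2", "category": "memo", "created": date(2010, 1, 4), "legal_hold": True},
    {"id": "c1", "category": "contract", "created": date(2012, 6, 1)},
]
remaining = disposition_run(records, date(2013, 11, 8), log)
print([r["id"] for r in remaining])  # ['m2', 'c1']: one memo disposed, hold honored
```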

Thursday, October 31, 2013

Information Governance With SharePoint

The goals of any enterprise content management (ECM) system are to connect an organization's knowledge workers, streamline its business processes, and manage and store its information.

Microsoft SharePoint has become the leading content management system in today's competitive business landscape as organizations look to foster information transparency and collaboration by providing efficient capture, storage, preservation, management, and delivery of content to end users.

A recent study by the Association for Information and Image Management (AIIM) found that 53% of organizations currently utilize SharePoint for ECM. SharePoint's growth can be attributed to its ease of use, incorporation of social collaboration features, as well as its distributed management approach, allowing for self-service. With the growing trends of social collaboration and enhancements found in the latest release of SharePoint 2013, Microsoft continues to facilitate collaboration among knowledge workers.

As SharePoint continues to evolve, it is essential to have a solution in place that achieves the vision of efficiency and collaboration without compromising security and compliance. The growing usage of SharePoint for ECM is not without risk. AIIM also estimated that 60% of organizations utilizing SharePoint for ECM have yet to incorporate it into their existing governance and compliance strategies. It is imperative that organizations establish effective information governance strategies to support secure collaboration.

SharePoint 2013 introduces two useful features that help with compliance. The eDiscovery Center is a SharePoint site that gives you more control over your data: it allows you to identify, hold, search, and export documents needed for e-discovery. The In-Place Hold feature allows you to preserve documents and place holds on them while users continue working on them. These features are available for both on-premises and cloud deployments.

SharePoint 2013 is integrated with Yammer, which provides many social features. This presents a new compliance challenge. Yammer plans to integrate more security in future releases, but for now organizations need to create policies and procedures for these social features. Roles such as "Community Manager," "Yambassador," and "Group Administrator" might be introduced.

There are third-party tools that can be used with SharePoint for compliance and information governance: Metalogix and AvePoint for governance and compliance; CipherPoint and Stealth Software for encryption and security; ViewDo Labs and Good Data for Yammer analytics and compliance.

In order to most effectively utilize SharePoint for content management, there are several best practices that must be incorporated into information governance strategies as part of an effective risk management lifecycle. The goal of any comprehensive governance strategy is to mitigate risk, whether this entails downtime, compliance violation or data loss. In order to do so, an effective governance plan must be established that includes the following components:

Develop a plan. When developing your plan, it is necessary for organizations to understand the types of content SharePoint contains before establishing governance procedures. It is important to involve the appropriate business owners and gather any regulatory requirements. These requirements will help to drive information governance policies for content security, information architecture and lifecycle management.

When determining the best approach to implement and enforce content management and compliance initiatives, chief privacy officers, chief information security officers, compliance managers, records managers, SharePoint administrators, and company executives will all have to work together to establish the most appropriate processes for their organization as well as an action plan for how to execute these processes. During the planning phase, your organization should perform an assessment, set your organization's goals, and establish appropriate compliance and governance requirements based on the results of the assessment to meet the business objectives.

Implement your governance architecture. Once your organization has developed a good understanding of the various content that will be managed through SharePoint, it is time to implement the governance architecture. In this phase, it is important to plan for technical enforcement, monitoring and training for employees that address areas of risk or noncompliance. It is important to note that while SharePoint is known for its content management functionality, there are specific challenges that come with utilizing the platform as a content management system for which your governance architecture must account: content growth and security management.

In order to implement effective content management, organizations should plan to manage the growth of sites, files, storage, and the overall volume of content. Organizations without a governance strategy often struggle with proliferating content and no means to manage or dispose of it. This is a huge problem with file servers. Over time, file servers grow to the point where they become a bit like the file cabinet collecting dust in the corner of your office: it is easy to add a new file, but you will not find it later when you need it. The challenge lies in planning how to organize and dispose of out-of-date content.

SharePoint offers the technology to address these challenges, but only if it is enabled as part of your governance plan. Information management policies can be used to automatically delete documents, or you may be using third-party solutions to archive documents, libraries and sites. By default in SharePoint 2013, Shredded Storage is enabled to reduce the overall storage of organizations that are utilizing versioning. Remote BLOB Storage (RBS) can also be enabled in SharePoint or through third-party tools to reduce SharePoint's storage burden on SQL Server.

Tagging and classification play a key role in information governance. Proper classification can improve content findability. Organizations can utilize SharePoint's extensive document management and classification features, including Content Types and Managed Metadata, to tag and classify content. Third-party tools that extend SharePoint's native capabilities can also filter for specified content when applying management policies for storage, deletion, archiving, or preservation. Ultimately, however, the people in your organization will play the biggest role here. As such, your plan should identify who the key data owners are and the areas for which they are responsible. This role is often filled by a "site librarian" or those responsible for risk management in the enterprise.

In order to minimize risk to the organization, it is imperative to ensure that information is accessible to the people who should have it and protected from the people who should not. SharePoint has a very flexible permission system that can accommodate this requirement.

Ongoing assessments. To ensure that established governance procedures continue to meet your business requirements, ongoing assessment is required. Conduct ongoing testing of business solutions; monitor system response times, service availability, and user activity; and run assessments to verify that you comply with your guidelines and requirements for properly managing content. That content is essentially your intellectual property, the lifeblood that sustains your organization.

React and revise as necessary. To continue to mitigate risk, respond to evolving requirements, and harden security and access controls, take the information gathered in your ongoing assessments and use it to make more intelligent management decisions. With each change, continue to validate that your system meets the necessary requirements.

The risk has never been higher, as data volumes grow alongside regulatory compliance mandates that require organizations to ensure their content is properly managed.

If you develop a plan, implement a governance architecture that supports that plan, assess the architecture on an ongoing basis, and react and revise as necessary, your organization will have the support and agility necessary to truly use all of the content it possesses to improve business processes, innovation, and competitiveness while lowering total costs.

Monday, October 28, 2013

Meeting the Social Media Challenge

When social media volume is low, it is typically handled manually by one or more people in a company. These people are assigned to check Facebook and/or Twitter a couple of times a day and respond when appropriate.

As the volume of inquiries grows, it becomes expensive to respond manually to the posts and comments, and nearly impossible to do it on a timely basis. After a while, it becomes clear that automation is necessary to respond to the large number of social media comments in appropriate time frames.

During the next few years, organizations of all sizes will need to build a social media technology servicing framework to handle an increasing volume of inquiries, complaints, and comments. As social media is conceptually just another channel, it should be incorporated into the enterprise's overall servicing framework. However, the unique characteristics and demands of social media interactions require specialized solutions and processes, even though the responses should be consistent in all channels.

There are many applications to help organizations handle their social media servicing challenges, and new ones are constantly being introduced. However, currently, there is no single solution that addresses all necessary requirements. Enterprises that want a complete solution need to purchase several applications and integrate them. They should also merge these applications with their existing servicing infrastructure to ensure an excellent customer experience.

The underlying technical components required to build a social media servicing infrastructure are:
  • Tools for monitoring social media sites for brand and company mentions.
  • Data acquisition/capture tools to identify and gather relevant social media interactions for the company.
  • Data extraction tools that separate "noise" from interactions that require immediate or timely responses.
  • An engine for defining business rules that generates alerts, messages, pop-ups, alarms, and events.
  • Integration tools to facilitate application-to-application communication, typically using open protocols such as Web services. Prebuilt integration tools, along with published application programming interfaces, should be provided for contact center applications.
  • Storage to house and access large volumes of historical data, and an automated process to retain and purge both online and archived data. Additional capabilities may include the ability to access archived data via other media, such as a CD-ROM, and the ability to store and retrieve data in a corporate storage facility, such as a network-attached storage or storage area network.
  • Database software for managing large volumes of information.
  • Work flow tools to automate business processes by systematically passing information, documents, tasks, notifications, or alerts to another business process (or person) for additional or supplementary action, follow-up, or expertise.
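To make the business-rules component above concrete, here is a minimal sketch of how captured posts might flow through keyword-based rules that emit routing events. All of the names here (Post, Rule, the keyword lists, the action strings) are illustrative assumptions, not taken from any particular product:

```python
import re
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

@dataclass
class Rule:
    name: str       # e.g. "complaint-alert"
    keywords: set   # terms that trigger the rule
    action: str     # event emitted when the rule fires

def evaluate(post, rules):
    """Fire every rule whose keywords appear in the post; return their actions."""
    words = set(re.findall(r"[a-z']+", post.text.lower()))
    return [rule.action for rule in rules if rule.keywords & words]

rules = [
    Rule("complaint-alert", {"broken", "refund", "terrible"}, "route-to-agent"),
    Rule("praise-log", {"love", "great", "thanks"}, "log-positive"),
]

print(evaluate(Post("@customer1", "My order arrived broken. I want a refund!"), rules))
# -> ['route-to-agent']
```

A real rules engine would add priorities, time windows, and escalation, but the core pattern - match captured interactions against declarative rules and emit events - is the same.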
The core administrative tools needed are:
  • User administration capability with prebuilt tools to facilitate system access, user set-up, user identification and rights (privileges), password administration, and security.
  • Alert management capability that allows thresholds to be set so that alarms, alerts, or notifications can be enabled when predefined levels or time frames are triggered when violations or achievements occur (examples include alerts to signal changes in topics, emerging issues, and sentiment).
  • Metrics management, including the ability to enter, create, and define key performance indicators (KPIs) and associated metrics.
  • System configuration with an integrated environment for managing application set-up, and parameters for contact routing, skill groups, business rules, etc.
The core servicing functionality includes:
  • Skills-based routing tools to deliver identified interactions to agents or other employees with the proficiency to address them.
  • The ability to queue and route transactions (calls, emails, chat/IM, and social media posts) to the appropriate agent, employee, or team.
  • Text analytics software that uses a combination of statistical or linguistic modeling methods to extract information from unstructured textual data.
  • Filtering tools that separate "noise" from social media customer interactions that require immediate or timely responses.
  • Topic categorization software that identifies themes and trends within social media interactions.
  • Root cause analysis, a problem-solving tool that enables users to strip away layers of symptoms to identify the underlying reasons for problems or issues.
  • Search and retrieval abilities that allow large volumes of data to be searched, based on user-defined queries, to retrieve specific instances.
  • Sentiment analysis capability that can identify positive or negative sentiment about a company, person, or product, and assign a numerical score based on linguistic and statistical analysis.
  • A social CRM servicing solution that logs and tracks received social media interactions so that agents or employees can view the post/comment, create a customized response, and issue or post it.
  • Response templates that comprise a library of customizable responses to common social media posts.
  • A social media publishing tool that enables users to publish posts to social media sites.
  • Reporting functionality in which reports can be set up based on collected data, metrics, or KPIs in a preferred presentation format (chart or graph); this should also include the ability to create custom reports based on ad hoc queries.
  • Scorecards/dashboards for all constituents in an organization - agents, supervisors, managers, other departments, and executives.
  • An analytics tool that conducts multidimensional analyses of social media data, used to look for trends and data relationships over time, identify emerging issues and root causes, etc.
  • Recording software to capture social media inputs and responses.
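As a rough illustration of how the sentiment-analysis and routing capabilities listed above fit together, the sketch below scores posts against small positive and negative word lists and routes negative ones to an agent queue. The word lists and queue names are invented for the example; production systems use statistical or linguistic models rather than fixed lists:

```python
POSITIVE = {"great", "love", "excellent", "thanks", "happy"}
NEGATIVE = {"broken", "awful", "terrible", "refund", "angry"}

def sentiment_score(text):
    """Naive lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def route(text):
    """Negative posts go to a live agent queue, everything else to monitoring."""
    return "agent-queue" if sentiment_score(text) < 0 else "monitor-queue"

print(route("this product is awful and broken"))  # -> agent-queue
print(route("great service thanks"))              # -> monitor-queue
```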
Organizations also need a number of management applications to ensure that their social media teams or departments are properly trained and staffed. These tools are:
  • Quality assurance functionality to measure the quality of social media comments and posts by agents, to ensure that they are adhering to the organization's guidelines.
  • Coaching and e-learning software to deliver appropriate training courses and best practice clips to agents and other employees involved in responding to social media interactions.
  • A workforce management solution to forecast the expected volume of social media interactions that will require agent/employee assistance, and to identify and create optimal schedules (this also tracks adherence to service levels for each inquiry type).
  • Surveying software to determine if customers/comments were satisfied with the company's responses.
  • Desktop analytics to provide an automated and systematic approach to monitor, capture, structure, analyze, report, and react to all agent/employee desktop activity and process workflows.
  • An analytics-oriented performance management module that creates scorecards and dashboards to help contact center and other managers measure performance against preset goals.
Social media is going to change the servicing landscape for many organizations within the next five to eight years, because the volume of social media comments and posts is expected to grow rapidly, potentially comprising as much as 50 percent of all service interactions. Companies that build a servicing strategy incorporating social media will have a major advantage over their competitors.

Companies do not need all of the solutions identified above; they need to select the ones that allow them to incorporate social media into their servicing strategy and infrastructure, so that customers can interact with them in their preferred channel.

Saturday, October 5, 2013

Knowledge Management Adoption Through Gamification

One of the most important components of a successful knowledge management program is its ability to promote and support a culture of collaboration and knowledge sharing.

Tools, processes and organizational policies are important elements but they will only get you so far. Culture is the cornerstone that will determine the willingness of your employees to participate in knowledge management.

How do you influence employees in your organization to adopt productive behaviors around collaboration and knowledge sharing? The answer may be found in a new concept called gamification.

What is gamification? It is a new and rapidly evolving area, but the following description is a good starting point: gamification is the use of game elements and game design techniques in non-game contexts.

That definition of gamification contains three distinct elements:
  • Game elements - this is about leveraging the components, design patterns, and feedback mechanisms that you would typically find in video games, such as points, badges, and leaderboards. It is sometimes referred to as the engineering side of gamification.
  • Game design techniques - this is the artistic, experimental side of gamification. It includes aesthetics, narrative, player journey, progression, surprise, and, of course, fun. Games are not just a collection of elements; they are a way of thinking about and approaching challenges like a game designer.
  • Non-game contexts - some common areas in which gamification has taken hold include health and wellness, education, sustainability, and collaboration and knowledge sharing in the enterprise.
There are three key types of knowledge management behavior:
  • connect: how people connect to the content and communities they need to do their job;
  • contribute: the level at which people are contributing their knowledge and the impact of those contributions on other people;
  • cultivate: the willingness to interact with and build upon the ideas and perspectives of other employees, to help nurture a spirit of collaboration.
The unique selling point of gamification is the potential to learn from games: to draw on what makes games so engaging and attractive, and to apply those components in other contexts. What is behind this philosophy? While people can be drawn in to collaborate and share via extrinsic motivation, the more you can tap into their intrinsic motivations and help people realize the inherent benefits of collaboration, the more successful and sustained that engagement will be.

We can identify three ways to affect intrinsic motivation: mastery, autonomy, and purpose.

Mastery

Getting really good at something, be it a skill, sport or mental discipline, has its own benefits. The goal of gamifying collaboration is to help people get good at it and, therefore, realize its inherent benefits. As participants progress through the "game", they gradually learn the skills to find expertise, build their network, and share their knowledge in a way that makes them more effective, and advances their careers.

Autonomy

Autonomy is about giving people the freedom to make meaningful choices. Instead of dictating a prescribed path, an autonomous approach allows them to set their own goals, choosing how they wish to collaborate, and ultimately providing a sense of ownership. The more individuals feel that they are in control, the better engaged they are going to be. Participants can share a document, write a blog, post a microblog or create a video. It is about giving participants choices, equipping them with the tools, and rewarding them for their knowledge sharing behaviors regardless of the specific mechanism they used.

Purpose

While there are plenty of personal benefits to collaboration, people are more engaged when they feel socially connected to others as part of a larger purpose. As part of that wider organization, they can take pride in the fact that they are making a broader impact on their organization and collaboration is a key part of that experience.

The use of gamification assumes that you already have a knowledge management program in place. Assuming that gamification can magically transform the absence of a knowledge management program into something engaging is a common error. A well-thought-out and sustainable approach to gamification, however, offers significant potential to make collaboration fun and engaging.

Gamification Tips

Don't lose sight of your objectives

Start with your business objectives in terms of their outcomes and keep your eyes on those objectives and validate them as you design, develop and implement your knowledge management program.

Focus on behaviors, not activities

It is very easy to get caught up in focusing exclusively on activities and end up having people busy doing "stuff". Similar to objectives, keep a focus on the behaviors you want your people to adopt and identify activities that are indicators of those behaviors.

Data is king

You need to be able to capture, store, and retrieve data. Without a way to quantify and measure behavior, you will be stuck at the first step.

Spread the recognition

Don't limit the number of people who can be recognized through your program. In addition, recognize people's efforts in a variety of meaningful ways. Some examples of recognition are:
  • e-cards with 100 recognition points (monetary value of $100);
  • thank-you notes from leadership;
  • shout-outs in internal corporate communications;
  • badges on employees' profile pages;
  • feedback during the employee's performance review process.
People will game the system

You will need to pay attention to people who want to "game" the system. Where possible, build in approaches to limit the ability of people to do so.

Start small and evolve

Gamifying collaboration is not something you build all at once. To arrive at a good and sustainable knowledge management program, you need to work iteratively, creating rough versions and play-testing continuously.

No silver bullet exists

Gamification is not a silver bullet. What the available evidence does suggest is that, applied thoughtfully, it can be leveraged to embed the collaborative behaviors that make up a meaningful culture of collaboration and knowledge sharing across an organization.

Wednesday, September 18, 2013

The Mystery of How Enterprise Search Works (For You!)

Enterprise search starts with a user looking for information and submitting a search query. A search query is a list of keywords (terms) or a phrase. The search engine looks for all records that match the request and returns a list to the user, with results ranked from most relevant to least relevant for the request.

Let's look at search in more detail.
Performance Measures

There are two performance measures for evaluating the quality of query results: precision and recall.

Precision refers to the fraction of relevant documents from all documents retrieved. Recall is the fraction of relevant documents retrieved by a search from the total number of all relevant documents in the collection. It is said that precision is a measure of usefulness of a result while recall is a measure of the completeness of the result.

Modern search engines aim to provide high recall with good precision. It is easy to achieve high recall by simply returning all documents in the collection for every query; the precision in that case, however, would be terrible. A key challenge is how to increase precision without sacrificing recall. Most web search engines today, for example, provide reasonably good recall but poor precision: a user gets some relevant results, usually in the first 10 to 20, along with many non-relevant results.
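On a small labeled example, both measures can be computed directly. The document ids below are made up to illustrate the trade-off just described:

```python
def precision_recall(retrieved, relevant):
    """precision = |retrieved ∩ relevant| / |retrieved|
       recall    = |retrieved ∩ relevant| / |relevant|"""
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

collection = {"d1", "d2", "d3", "d4", "d5", "d6"}
relevant   = {"d1", "d2", "d5"}          # documents that truly answer the query
retrieved  = {"d1", "d2", "d3", "d4"}    # documents the engine returned

print(precision_recall(retrieved, relevant))   # precision 0.5, recall 2/3

# Returning the whole collection gives perfect recall but poor precision:
print(precision_recall(collection, relevant))  # precision 0.5, recall 1.0
```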

Relevancy

Relevancy is a numerical score assigned to a search result representing how well that result matches the information the user who submitted the query is looking for. Relevancy is therefore a subjective measure of the quality of the results as defined by the user. The higher the score, the higher the relevance.

For every document in a result, a search engine calculates and assigns a relevancy score. TF-IDF is a standard relevancy heuristic used by many search engines. It combines the TF and IDF measures to produce a ranking score for each document.

TF stands for Term Frequency. This is the number of times a word (or term) appears in a single document as a percentage of the total number of terms in the document. Term frequency assumes that, when evaluating two documents, document A and document B, the one that contains more occurrences of the search term is probably also more "relevant" to the user.

IDF stands for Inverse Document Frequency. This is a measure of the general importance of the term, based on the ratio of all documents in the collection to the number of documents that contain the term (usually on a logarithmic scale). IDF down-weights terms that appear in many documents, so very common words contribute little to the score.
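Putting the two together on a toy corpus shows why a common word scores zero while a rarer one carries weight. The sketch below uses the logarithmic variant of IDF; the corpus and the scores are purely illustrative:

```python
import math

docs = {
    "d1": "the cat sat on the mat",
    "d2": "the dog chased the cat",
    "d3": "the dogs are pets",
}

def tf(term, text):
    """Term frequency: occurrences of the term / total terms in the document."""
    words = text.split()
    return words.count(term) / len(words)

def idf(term, docs):
    """log(total documents / documents containing the term)."""
    containing = sum(term in text.split() for text in docs.values())
    return math.log(len(docs) / containing)

def tf_idf(term, doc_id):
    return tf(term, docs[doc_id]) * idf(term, docs)

print(tf_idf("the", "d1"))  # 0.0 - "the" appears in every document
print(tf_idf("cat", "d1"))  # > 0 - "cat" is rarer, so it carries more weight
```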

Additional techniques may put more emphasis on other attributes to determine relevancy: for example, freshness (when the document was created or last updated), or which part of the document matched the term (a match in the document title or author field may score higher than a match in the body text).

Modern search engines provide good relevancy scoring across a wide range of document formats, but more importantly, allow users to create and use their own relevancy scoring profiles optimized for their queries. These user-defined weights, also called boosting, can be set up and run for a user, group of users, or per query. This is extremely helpful for personalizing the search experience by roles or departments within the organization.

Linguistics

Linguistics is a vital component of any search solution. It refers to the processing and understanding of text in unstructured documents or text fields. There are two parts to linguistics: syntax and semantics.

Syntax is about breaking text into words and numbers, a process also called tokenization. Semantics is the process of finding the meaning behind text, from the level of words and phrases up to the level of paragraphs, a document, or a set of documents. Semantic analysis often involves grammatical description and deconstruction, morphology, phonology, and pragmatics. One major challenge is the ambiguity of language.

Linguistics therefore improves relevancy and affects precision and recall. Common linguistic features in a search solution include stemming and lemmatization of words (reducing words to their root or stem form), phrasing (the recognition and grouping of idioms), removal of stop words (words that appear often in documents but contain little meaning, for example articles), spelling corrections, etc.
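A minimal sketch of these linguistic steps - tokenization, stop-word removal, and stemming - might look like the following. The suffix rules are deliberately naive; real engines use Porter-style stemmers or dictionary-based lemmatization:

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "is", "are"}

def tokenize(text):
    """Syntax: break text into lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def stem(word):
    """Very naive suffix stripping, for illustration only."""
    for suffix in ("ing", "ers", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def analyze(text):
    """Tokenize, drop stop words, and reduce the rest to stem form."""
    return [stem(t) for t in tokenize(text) if t not in STOP_WORDS]

print(analyze("The searchers are searching the indexes"))
# -> ['search', 'search', 'index']
```

Because both documents and queries pass through the same analysis chain, a query for "searching" will match a document containing "searchers" - which is exactly how linguistics improves recall.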

Navigation

One way search engines overcome the challenges of semantics and language ambiguity is navigation. In this case, the search engine uses linguistic features, such as extraction of entities (nouns and noun phrases, places, people, concepts, etc.) and a predefined taxonomy, to narrow the results by clustering related documents together or by providing useful dimensions, called facets, to slice the data - for example, using price or name to narrow down the search results.
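Faceting itself is conceptually simple: count results along each metadata dimension and let the user drill down. A sketch, with invented field names and documents:

```python
from collections import Counter

# Hypothetical result set with metadata fields.
results = [
    {"title": "Q3 report",   "dept": "finance", "type": "pdf"},
    {"title": "Q4 forecast", "dept": "finance", "type": "xlsx"},
    {"title": "Hiring plan", "dept": "hr",      "type": "pdf"},
]

def facet_counts(results, field):
    """Count results per value of one facet dimension."""
    return Counter(doc[field] for doc in results)

def narrow(results, field, value):
    """Drill down: keep only results matching the chosen facet value."""
    return [doc for doc in results if doc[field] == value]

print(facet_counts(results, "dept"))   # finance: 2, hr: 1
print(narrow(results, "type", "pdf"))  # the two PDF documents
```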

The Search Index

At the heart of every search engine is the search index. An index is a searchable catalog of documents created by the search engine. The search engine receives content from all source systems to place in the index, a process called ingestion. The search engine then accepts search queries to match against the index. The index is used to quickly find the relevant documents for a search query out of a collection of documents.

A common index structure is the inverted index which maps every term in the collection to all of its locations in this collection. For example, a search for the term "A" would check the entry for "A" in the index that contains links to all the documents that include "A".
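A toy inverted index can be built in a few lines; the documents below are invented for illustration:

```python
from collections import defaultdict

docs = {
    1: "enterprise search starts with a query",
    2: "the search engine builds an index",
    3: "an index maps terms to documents",
}

def build_inverted_index(docs):
    """Map every term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

index = build_inverted_index(docs)
print(sorted(index["search"]))                   # -> [1, 2]
# A multi-term query intersects the postings of each term:
print(sorted(index["search"] & index["index"]))  # -> [2]
```

Lookup cost no longer depends on scanning every document, which is why the inverted index is the structure that makes search fast at scale.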

Wednesday, July 31, 2013

ISO 9001 and Documentation

ISO 9001 compliance is becoming increasingly important in regulated industries. How does it affect documentation? Here is how...

What is Document Control?

Document control means that the right persons have the current version of the documents they need, while unauthorized persons are prevented from using them.

We all handle many documents every day. These documents include forms that we fill out, instructions that we follow, invoices that we enter into the computer system, holiday schedules that we check for the next day off, rate sheets that we use to bill our customers, and many more.

An error on any of these documents could lead to problems. Using an outdated version could lead to problems. Not knowing if we have the latest version or not could lead to problems. Just imagine us setting up a production line to outdated specifications or making strategic decisions based on a wrong financial statement.

ISO 9001 gives us tools (also referred to as "requirements") that show us how to control our documents.

ISO 9001 Documents

There are no "ISO 9001 documents" that need to be controlled and "non-ISO 9001 documents" that don't. The ISO 9001 system affects an entire company, and all business-related documents must be controlled. Only documents that have no impact on products, services, or the company are exempt; in practice, this means that any business-related document must be controlled.

However, how much control you apply really depends on the document.

The extent of your approval record, for example, may vary with the importance of the document (remember, documents are approved before they are published for use).

The Quality Policy, an important corporate policy document, shows the signatures of all executives.

Work instructions often just show a note in the footer indicating approval by the department manager.

Some documents don't even need any approval record: if the person who prepared a document is also responsible for its content (e.g., the Quality Manager prepares instructions for his auditors), a separate approval is superfluous.

On the other hand, identifying a document with a revision date, source and title is basic. It really should be done as a good habit for any document we create.

Please note that documents can be in any format: hard copy or electronic. This means that, for example, the pages on the corporate intranet need to be controlled.

Responsibility for Document Control

Document control is the responsibility of all employees. It is important that all employees understand the purpose of document control and how to control documents in accordance with ISO 9001.

Please be aware that if you copy a document, or print one out from the intranet, and then distribute it, you are responsible for controlling its distribution! The original author will not know that you distributed copies of this document, so the original author can't control your distribution.

Dating Documents

ISO 9001 requires every document to show when it was created or last updated. Many of us may have thought to use our word processor's automatic date function for this, but... should we use the automatic date field on documents?

Generally not. If you enter the automatic date field into a document, the field will automatically be updated to always show the current date, no matter when you actually created or updated the document.

For example, if you use the automatic date field in a fax and you save the fax on your computer for future reference, you won't be able to tell when you wrote the fax: when you open it on your computer, it will always show today's date.

The automatic date field is not suitable for document control. Therefore, as a general rule, don't use the automatic date field to identify revision status.

ISO 9001 Documentation

ISO 9001 documentation includes:
  • the Quality Procedures Manual, which also includes corporate policies and procedures affecting the entire company;
  • work instructions, which explain in detail how to perform a work process;
  • records, which serve as evidence of how you meet ISO 9001 requirements.
Policies and Procedures

Our ISO 9001 Quality Manual includes the corporate Quality Policy and all required ISO 9001 procedures. While most procedures affect only managers, every employee must be familiar with the Quality Policy and with the Document Control procedures. The Quality Policy contains the corporate strategy related to quality and customer satisfaction; all other ISO 9001 documents must follow this policy. The Document Control procedures show how to issue documents, as well as how to use and control them.

Continuous Improvement

Implementing ISO 9001 is not a one-time benefit to a company. While you are utilizing the quality manual, quality procedures and work instructions in daily business activities, you are not only benefiting from better quality and increased efficiency but you are also continually improving. In fact, the ISO 9001 requirements are designed to make you continually improve. This is a very important aspect because companies that don't continue to improve are soon overtaken by the competition.

Thursday, June 20, 2013

Intelligent Search and Automated Metadata

The inability to identify the value in unstructured content is the primary challenge in any application that requires the use of metadata. Search cannot find and deliver relevant information in the right context, at the right time without good quality metadata.

What is required is an information governance approach that creates an infrastructure framework encompassing automated intelligent metadata generation, auto-classification, and the use of taxonomies aligned with organizational goals and mission. From this framework, intelligent metadata enabled solutions can be rapidly developed and implemented. Only then can organizations leverage their knowledge assets to support search, litigation, e-discovery, text mining, sentiment analysis, and open source intelligence.

Manual tagging is still the primary approach used to describe content, and it often lacks any alignment with enterprise business goals. This subjectivity and ambiguity carries over into search, resulting in inaccuracy and the inability to find relevant information across the enterprise.

Metadata used by search engines may consist of end-user tags, pre-defined tags, or metadata generated using system-defined rules, keyword and proximity matching, extensive rule building, end-user ratings, or artificial intelligence. Typically, search engines provide no way to rapidly adapt to meet organizational needs or to account for an organization's unique nomenclature.

A more effective approach is to implement an enterprise metadata infrastructure that consistently generates intelligent metadata using concept identification. With this profoundly different approach, relevant documents are retrieved regardless of where they reside, even if they don't contain the exact search terms, because the concepts and relationships between similar content have been identified. Eliminating end-user tagging, and the organizational ambiguity that results from it, enables the enriched metadata to be used by any search engine index, for example ConceptSearch, SharePoint, Solr, Autonomy, or Google Search Appliance.
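To illustrate what concept identification buys you, the sketch below maps words to concepts through a tiny hand-built lexicon - a stand-in for the relationships a real system would learn or curate - and retrieves documents that share a concept with the query even when they never contain the query term itself:

```python
# Illustrative concept lexicon; every name here is a made-up example.
CONCEPTS = {
    "car": "vehicle", "automobile": "vehicle", "truck": "vehicle",
    "invoice": "billing", "receipt": "billing",
}

docs = {
    "d1": "automobile maintenance schedule",
    "d2": "truck repair manual",
    "d3": "receipt archive policy",
}

def concepts_of(text):
    """Map each word to its concept; words without one map to themselves."""
    return {CONCEPTS.get(w, w) for w in text.lower().split()}

def concept_search(query, docs):
    """Return documents sharing at least one concept with the query."""
    q = concepts_of(query)
    return sorted(d for d, text in docs.items() if q & concepts_of(text))

print(concept_search("car", docs))  # -> ['d1', 'd2'], though neither contains "car"
```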

Only when metadata is consistently accurate and trusted by the organization can improvements be achieved in text analytics, e-discovery and litigation support.

In the exploding age of big data, and more specifically text analytics, sentiment analysis and even open source intelligence, the ability to harness the meaning of unstructured content in real time improves decision-making and enables organizations to proactively act with greater certainty on rapidly changing business complexities.

An effective information governance strategy for unstructured content is predicated on the ability to find information and to eliminate inappropriate information. The core enterprise search component must be able to incorporate and digest content from any repository, including faxes, scanned content, social sites (blogs, wikis, communities of interest, Twitter), emails, and websites. This provides a 360-degree corporate view of unstructured content, regardless of where it resides or how it was acquired.

Ensuring that the right information is available to end users and decision makers is fundamental to trusting the accuracy of the information and is another key requirement in intelligent search. Organizations can then find the descriptive needles in the haystack to gain competitive advantage and increase business agility.

An intelligent metadata enabled solution for text analytics analyzes and extracts highly correlated concepts from very large document collections. This enables organizations to attain an ecosystem of semantics that delivers understandable and trusted results and is continually updated in real time.

Consider e-discovery and litigation: traditional information retrieval systems use "keyword searches" of text and metadata as the means of identifying and filtering documents, while the challenges and escalating costs of e-discovery and litigation support continue to increase. The use of intelligent search reduces costs and alleviates many of these challenges.

Content can be presented to knowledge professionals in a manner that enables them to identify relevant information more rapidly and with greater accuracy. Significant benefits can be achieved by removing the ambiguity in content and identifying the concepts within a large corpus of information. This methodology delivers efficiencies and reduces costs, offering an effective solution that overcomes many of the challenges typically left unsolved in e-discovery and litigation support.

Organizations must incorporate an approach that addresses the lack of an intelligent metadata infrastructure. Intelligent search, a by-product of the infrastructure, must encourage, not hamper, the use and reuse of information and be rapidly extendable to address text mining, sentiment analysis, e-discovery, and litigation support.

The additional components of auto-classification and taxonomies complete the core infrastructure to deploy intelligent metadata enabled solutions, including records management, data privacy, and migration. Search can no longer be evaluated on features, but on proven results that deliver insight into all unstructured content.