Tuesday, March 25, 2014

Search Applications - Vivisimo

Vivisimo was a privately held technology company that developed computer search engines. Its product, Velocity, provided federated search and document clustering. Vivisimo's public web search engine, Clusty, was a metasearch engine with document clustering; it was sold to Yippy, Inc. in 2010.

The company was acquired by IBM in 2012, and the Vivisimo Velocity Platform is now IBM InfoSphere Data Explorer. It stays true to its heritage of providing federated navigation, discovery and search over a broad range of enterprise content, covering a wide range of data sources and types, both inside and outside an organization.

In addition to the core indexing, discovery, navigation and search engine, the software includes a framework for developing information-rich applications that deliver a comprehensive, contextually relevant view of any topic for business users, data scientists, and a variety of targeted business functions.

InfoSphere Data Explorer solutions improve return on all types of information, including structured data in databases and data warehouses, unstructured content such as documents and web pages, and semi-structured information such as XML.

InfoSphere Data Explorer provides analytics on text and metadata that can be accessed through its search capabilities. Its focus on scalable but secure search is part of why it became one of the leaders in enterprise search. The software's security features are critical: organizations do not want faster search to also mean faster access to information for unauthorized users.

Also key is the platform's flexibility in integrating sources across the enterprise. It also supports mobile technologies such as smartphones, making it simpler to access information from any platform.

Features and benefits

1. Secure, federated discovery, navigation and search over a broad range of applications, data sources and data formats.
  • Provides access to data stored in a wide variety of applications and data sources, both inside and outside the enterprise, including content management, customer relationship management, supply chain management, email, relational database management systems, web pages, networked file systems, data warehouses, Hadoop-based data stores, columnar databases, cloud and external web services.
  • Includes federated access to non-indexed systems such as premium information services, supplier or partner portals and legacy applications through the InfoSphere Data Explorer Query Routing feature.
  • Relevance model accommodates diverse document sizes and formats while delivering more consistent search and navigation results. Relevance parameters can be tuned by the system administrator.
  • Security framework provides user authentication and observes and enforces the access permissions of each item at the document, section, row and field level to ensure that users can only view information they are authorized to view in the source systems.
  • Provides rich analytics and natural language processing capabilities such as clustering, categorization, entity and metadata extraction, faceted navigation, conceptual search, name matching and document de-duplication.
2. Rapid development and deployment framework to enable creation of information-rich applications that deliver a comprehensive view of any topic.
  • InfoSphere Data Explorer Application Builder enables rapid deployment of information-centric applications that combine information and analytics from multiple sources for a comprehensive, contextually-relevant view of any topic, such as a customer, product or physical asset.
  • Widget-based framework enables users to select the information sources and create a personalized view of information needed to perform their jobs.
  • Entity pages enable presentation of information and analytics about people, customers, products and any other topic or entity from multiple sources in a single view.
  • Activity Feed enables users to "follow" any topics such as a person, company or subject and receive the most current information, as well as post comments and view comments posted by other users.
  • Comprehensive set of Application Programming Interfaces (APIs) enables programmatic access to key capabilities as well as rapid application development and deployment options.
3. Distributed, highly scalable architecture to support large-scale deployments and big data projects.
  • Compact, position-based index structure includes features such as rapid refresh, real-time searching and field-level updates.
  • Updates can be written to indices without taking them offline or re-writing the entire index, and are instantly available for searching.
  • Provides highly elastic, fault-tolerant, vertical and horizontal scalability, master-master replication and "shared nothing" deployment.
4. Flexible data fusion capabilities to enable presentation of information from multiple sources.
  • Information from multiple sources can be combined into "virtual documents" for unified presentation.
  • Large documents can be automatically divided into separate objects or sub-documents that remain related to a master document for easier navigation and comprehension by users.
  • Enables creation of dynamic "entity pages" that allow users to browse a comprehensive, 360-degree view of a customer, product or other item.
5. Collaboration features to support information-sharing and improved re-use of information throughout the organization.
  • Users can tag, rate and comment on information.
  • Tags, comments and ratings can be used in searching, navigation and relevance ranking to help users find the most relevant and important information.
  • Users can create virtual folders to organize content for future use and optionally share folders with other users.
  • Navigation and search results can return pointers to people to enable location of expertise within an organization and encourage collaboration.
  • Shared Spaces allow users to collaborate about items and topics that appear in their individualized views.
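The document-level security trimming described in the first feature group can be sketched in a few lines. This is a simplified illustration of the general technique, not the actual InfoSphere Data Explorer API; the index structure, ACL fields and keyword matching are invented for the example:

```python
# Simplified sketch of ACL-aware search: results are trimmed to the
# permissions the user already has in the source systems.

def search(index, query, user_groups):
    """Return IDs of matching documents the user is allowed to see."""
    results = []
    for doc in index:
        # Naive keyword containment stands in for real relevance ranking.
        if query.lower() in doc["text"].lower():
            # Security trimming: the document's ACL must intersect the
            # user's group memberships.
            if set(doc["acl"]) & set(user_groups):
                results.append(doc["id"])
    return results

index = [
    {"id": "doc1", "text": "Quarterly revenue report", "acl": ["finance"]},
    {"id": "doc2", "text": "Public revenue summary", "acl": ["finance", "all-staff"]},
]

print(search(index, "revenue", ["all-staff"]))  # only doc2 is visible
```

A production engine enforces this at the section, row and field level as well, but the principle is the same: permissions travel with each indexed item.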

Thursday, March 13, 2014

Compliance With Privacy Regulations

Recently, high-profile cases involving breaches of privacy revealed the ongoing need to ensure that personal information is properly protected. The issue is multidimensional, involving regulations, corporate policies, reputation concerns, and technology development.

Organizations often have an uneasy truce with privacy regulations, viewing them as an obstacle to the free use of information that might help the organization in some way.

But like many compliance and governance issues, managing privacy will offer benefits, protecting organizations from breaches that violate laws and damage an organization's reputation. Sometimes the biggest risks in privacy compliance arise from the failure to take some basic steps. A holistic view is beneficial.

Privacy Compliance Components

Rather than being in conflict with business objectives, privacy should be fully integrated with them. Privacy management should be part of the knowledge management program.

An effective privacy management program has three major components: establishing clear policies and procedures, following those procedures to ensure that the organization's operations comply with the policies, and providing oversight to ensure accountability. Examples of questions to consider: is data being shared with third parties? Why is the information being collected? What is being done with it?

Expertise about privacy compliance varies widely across industries, corresponding to some degree with the size of an organization. Although large companies are far from immune to privacy violations, they might at least be aware and knowledgeable about the issue.

The biggest mistake that organizations make in handling privacy is to collect data without a clear purpose. You should know not just how you are protecting personal information but also why you are collecting it. It is important for organizations to identify and properly classify all their data.

International Considerations

Increasingly, organizations must consider the different regulations that apply in countries throughout the world, as well as the fact that those regulations are changing. For example, on March 12, 2014, the Australian Privacy Principles (APPs) replaced the existing National Privacy Principles and Information Privacy Principles.

The new principles apply to all organizations, whether public or private, and contain a variety of requirements, including open and transparent management of personal information. Of particular relevance to global companies are the principles on the use and disclosure of personal information for direct marketing, and on cross-border disclosure of personal information.

It is important to consider international regulations in those countries where an organization has operations.

Technology Role

The market for privacy management software products is still relatively small, but it is expected to grow rapidly over the coming years, as the current reform process for data protection has created a need for privacy management technology.

Products from companies such as Compliance 360 automate the process of assessing the risk of data breaches, which is required for the audits mandated by the American Recovery and Reinvestment Act of 2009. This act expanded the requirements of the Health Insurance Portability and Accountability Act (HIPAA) of 1996 through its Health Information Technology for Economic and Clinical Health (HITECH) provisions.

These provisions include increased requirements for patient confidentiality and new levels of enforcement and penalties. In the absence of suitable software products, organizations must carry out the required internal audits and other processes manually, which is time consuming and subject to errors.

Enterprise content management (ECM), business process management (BPM) and business intelligence (BI) technologies play an important role in privacy compliance because content, processes, and reporting are critical aspects of managing sensitive information.

As generic platforms, they can be customized, which has both advantages and disadvantages. They have a broad reach throughout the enterprise and can be used for many applications beyond privacy compliance. However, they are generally higher priced and require development work to perform that function.

Privacy in the Cloud

Cloud applications and data storage have raised concerns about security in general, and personally identifiable information (PII) in particular. Although many customers of cloud services have concluded that cloud security is as good or better than the security they provide in-house, the idea that personally identifiable information could be "out there" is unsettling.

PerspecSys offers a solution for handling sensitive data used in cloud-based applications that allows storage in the cloud while filtering out personal information and replacing it with an indecipherable token or encrypted value.

The sensitive data is replaced by a token or encrypted value that takes its place in the cloud-based application. The "real" data is retrieved from local storage when the token or encrypted value is retrieved from the cloud. Thus, even though the application is in the cloud, the sensitive information is neither stored in the cloud nor viewable there. It physically resides behind the firewall and can only be seen from there.
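The token-replacement flow described above can be sketched as follows. This is a minimal illustration of the general tokenization technique, not the PerspecSys product itself; the vault structure and field names are assumptions made for the example:

```python
import secrets

# Sensitive values never leave the local "vault" behind the firewall;
# only opaque tokens are stored in the cloud application.

vault = {}  # token -> real value, kept on premises

def tokenize(value):
    """Replace a sensitive value with an indecipherable token."""
    token = "tok_" + secrets.token_hex(8)
    vault[token] = value
    return token

def detokenize(token):
    """Recover the real value locally when the token comes back."""
    return vault[token]

record = {"name": "Jane Doe", "ssn": "123-45-6789", "city": "Toronto"}
sensitive_fields = ("name", "ssn")

# What actually gets sent to the cloud: tokens for sensitive fields only.
cloud_record = {
    k: tokenize(v) if k in sensitive_fields else v
    for k, v in record.items()
}

# The cloud copy contains no real PII...
assert "Jane Doe" not in cloud_record.values()
# ...yet the original is recoverable behind the firewall.
assert detokenize(cloud_record["ssn"]) == "123-45-6789"
```

A real deployment would also preserve format and searchability of tokens so the cloud application keeps functioning, but the core idea is this substitution.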

This feature is especially useful in an international context where data residency and sovereignty requirements often specify that data needs to stay within a specific geographic area.

Challenges for Small Organizations

Small to medium-sized organizations generally do not have a dedicated compliance or privacy officer, and may be at a loss as to where to start.

Information Shield provides a set of best practices including a policy library with prewritten policies, detailed information on U.S. and international privacy laws, checklists and templates, as well as a discussion of the Organization for Economic Co-operation and Development (OECD) Fair Information Principles. Those resources are aimed at companies that may not have privacy policies in place but need to do so to provide services to larger healthcare or financial services organizations.

Among the resources is a list of core privacy principles based on OECD principles. Each principle has a question, brief discussion and suggested policy. For example, the purpose specification principle states, "The purposes for which personal information is collected should be specified no later than the time of data collection, and the subsequent use should be limited to fulfilling those purposes or such others that are specified to the individuals at the time of the change of purpose." The discussion includes comments on international laws and a citation of several related rulings.

Plans for Future

Business users and consumers alike have become accustomed to the efficiency and speed of digital data. However, stricter regulations are inevitable. Organizations should become more aware of the need to prevent privacy breaches and make sure they have the systems in place to do so. Companies should also be concerned about reputation damage, which can severely affect business. Along with reliable technology, the best way forward is to follow best practices with respect to data privacy. Technology is essential, but it also has to be supported by people and processes.

Tuesday, February 25, 2014

Unified Knowledge Management

This scenario might be familiar to many organizations.

Inside an organization, valuable information is not being used. It is scattered in pieces across multiple repositories and organizational silos where no one even bothers to look for it. Valuable content also resides outside your organization: in social media, communities, and elsewhere, created by your customers and industry experts, and used and shared by other customers when they need answers.

In many organizations, employees spend a significant amount of time trying to find and process information, often at a high cost. A recent report found that knowledge workers spend anywhere from 15% to 35% of their time searching for, assembling, and then (unfortunately) recreating information that already exists. And studies show that much of this time is spent not only looking for content but also looking for experts. Most companies are unable to reuse the majority of the work that is created every day.

This is the growing challenge of knowledge management today: how to leverage meaningful knowledge through constant reuse by each and every employee and each and every customer when they need it, no matter where it resides.

Return on Knowledge

Here are a few points to consider:
  • Data on its own is meaningless. It must be organized into information before it can be used.
  • Data consists of discrete facts: measurements, statistics or records. In and of itself, data provides limited value.
  • Information is data in context: organized, categorized or condensed.
  • Knowledge is a human capability to process information to make decisions and take action.
Knowledge keeps organizations competitive and innovative, and is the most valuable intangible asset. Yet, knowledge is one of the most difficult assets to generate a return on (with repeated access, use and re-use), simply because information is so widespread, fractured, and changing at an accelerated pace.

Connecting the dots between relevant content and associated experts on that content is critical to leveraging the collective knowledge of an organization's ecosystem for the greatest return.

How to Get a Higher Return on Knowledge

The key to a higher return on knowledge is accessibility to information from anywhere, presented within any system, and personalized for the user's context.

The following tips can help your organization realize a return on its investment in managing knowledge throughout the organization.

1. Consolidate the knowledge ecosystem. Bring together information from enterprise systems and data sources, employees and customer social networks, social media such as Twitter, Chatter and more. Connect overwhelming amounts of enterprise and social information to get a complete picture of your customers, their interaction histories, products, levels of satisfaction, etc.

2. Connect people to knowledge in context. Connect users to the information they need (no matter where it resides) within their context.

3. Connect people to experts in context. Connect the people (the experts) associated with the contextually relevant content to assist in solving a case, answering a key challenge or providing additional insight into a particular situation.

4. Empower contribution. Allow users to create and rate content, and to share knowledge about customers, cases, products, etc.

5. Personalize information access. Present employees and customers with information and people connections that are relevant, no matter where they are and no matter what they are working on. Just like the suggested items on the e-commerce websites you visit, the experience is personalized because the system knows what you are working on.

Bringing this content to the fingertips of your employees and customers will increase organizational productivity, result in more innovative and customer-pleasing products, create happy employees, and drive customer satisfaction as well as profitability.

Unified Indexing

Unified indexing and insight technology is the way that forward-thinking companies will access knowledge in the 21st century. The technology brings content into context: assembling fragments of structured and unstructured information on demand and presenting them, in context, to users.

Designed for the enterprise, unified indexing and insight technology works in a similar way to Google on the Internet, but on the heterogeneous systems (e.g. email, databases, CRM, ERP, social media, etc.), locations (cloud and on-premise), and varied data formats of business today. The technology securely crawls those sources, unifies the information in a central index, normalizes the information and performs mash-ups on demand, within the user's context. The user creates the context based on his or her needs and interests.
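The crawl-normalize-index flow described above can be sketched in miniature. The source systems, field names and normalization schema below are invented for illustration; a real unified index would add security trimming, incremental refresh and relevance ranking:

```python
from collections import defaultdict

# Two heterogeneous "sources" with different schemas.
crm_rows = [{"account": "Acme", "note": "Renewal due in June"}]
emails = [{"subject": "Acme renewal", "body": "Customer asked about pricing"}]

def normalize(source, raw, text):
    """Map a source-specific record onto one common document schema."""
    return {"source": source, "text": text, "raw": raw}

docs = (
    [normalize("crm", r, r["note"]) for r in crm_rows]
    + [normalize("email", e, e["subject"] + " " + e["body"]) for e in emails]
)

# One inverted index spanning all sources: term -> set of document ids.
index = defaultdict(set)
for i, doc in enumerate(docs):
    for term in doc["text"].lower().split():
        index[term].add(i)

hits = sorted(index["renewal"])
print([docs[i]["source"] for i in hits])  # both sources answer one query
```

The key property is that a single query reaches every system at once, because normalization happened at indexing time rather than at data-migration time.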

Advantages of Unified Indexing:
  • Customers will see a personalized and relevant view of information from the entire knowledge ecosystem (from inside or outside your company) intuitively presented so they can solve their own challenges.
  • Service and support agents can solve cases faster. Support agents no longer need to search across multiple systems or waste time trying to find the right answer or someone who knows it. They will have relevant information about the customer or case at hand right at their fingertips: suggested solutions, recommended knowledge base articles, similar cases, experts who can help, virtual communication timelines and more.
  • Knowledge workers can stop reinventing the wheel. When every employee can access relevant information, locate experts across the enterprise, and know what does and does not exist, they can finally stop reinventing the wheel.
The new age of knowledge is here and it is powered by instantly accessible, collective, crowd-sourced and contextually relevant information that comes from everywhere and is presented as knowledge workers go about their work and customers look for information they need.

Friday, January 31, 2014

Unified Data Strategy

The amount of data being created, captured, and managed worldwide is increasing at a rate that was inconceivable a few years ago. Data is a collection of discrete units of information, but like the stars in the night sky, those units taken together form an organized structure.

Unstructured data comes in many different formats including pictures, videos, audio, PDF files, spreadsheets, documents, email, and many other formats. 

Sometimes unstructured data lives within a database. Sometimes the database acts as an index for the unstructured data. Often the metadata (information about the data) associated with unstructured data is larger than the data itself. Consider the example of a set of videos: although the files may be small in size, the information stored about the content of a particular video may be quite large. Unstructured data at this scale is often referred to as big data.
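The point about metadata outweighing the data itself can be made concrete with a toy example (all field names and values are invented for illustration):

```python
import json

# A stand-in for a tiny media object: 120 bytes of payload.
video_clip = b"\x00" * 120

# The descriptive metadata attached to it: title, speakers, transcript, tags.
metadata = {
    "title": "Q3 all-hands highlights",
    "duration_seconds": 12,
    "speakers": ["CEO", "CFO"],
    "transcript": "Welcome everyone, this quarter we focused on customer growth.",
    "tags": ["all-hands", "finance", "2014-Q3"],
}

# Serialized, the metadata alone is already larger than the payload.
assert len(json.dumps(metadata)) > len(video_clip)
```

For real video, the gap can run the other way for raw bytes, but rich annotations, transcripts and usage records routinely dwarf the searchable payload they describe.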

Certain business functions require analysis of massive amounts of data.

Multiple systems are being utilized to manage different forms of disparate data. Companies need to adopt a comprehensive and holistic approach to managing these many systems and incorporating them into a combined system.

Modern IT systems should be able to ingest, access, store, manipulate and protect data in a wide variety of disparate formats. The multiplicity of data formats may preclude the flexibility, elasticity and agility that many modern business functions require. There are situations when data must be accessed very quickly, and data management systems should be able to accommodate them. Each of these systems recognizes a particular style of data with a fairly well-defined set of attributes and manages that data to satisfy a particular business function.

A Unified Data Strategy (UDS) is a broad concept that describes how massive amounts of data in a multitude of forms can and should be understood and managed. UDS is also a specific individualized methodology developed by each data owner to manage that data in all its forms in a comprehensive but interrelated manner.

By adopting a UDS, data owners will be able to develop comprehensive, customized methodologies to manage their data. By taking into account the interconnected nature of the various data sources and tailoring the management of that data to specific business requirements, maximum value can be achieved.

UDS can be used to address the task of comprehensive data management. Cloud computing may provide the solution to this data management and recognition problem. Virtualization, the foundation of cloud computing, is the cornerstone of this strategy. The capabilities and architecture enabled via a virtual/cloud infrastructure can help companies to develop a UDS to address the movement in data management and practice.

Exciting new technologies and methodologies are evolving to address this phenomenon, creating huge new opportunities. These new technologies are also fundamentally changing the way we look at and use data.

The rush to monetize big data makes various solutions appealing. But companies should perform proper due diligence to fully understand the current state of their data management systems. Companies must learn to recognize the various forms of disparate and seemingly extraneous forms of information as data and develop a plan to manage and utilize all their data assets as a single, more powerful whole.

The transition from traditional relationally-structured data to a UDS could be complicated, but can be navigated effectively with an organized and managed approach to this effort.

To successfully adopt a Unified Data Strategy, companies should focus on the following:

1. Develop a thorough understanding of how the business consumes, produces, manipulates and uses information of all types.

2. Determine how the business can use data to both understand external factors and to assist in making internal decisions, as well as to understand how the data itself is relevant to influencing the business.

3. Analyze the "personality" of each data form so that it can be matched with tools that appropriately acquire, filter, store, safeguard and disperse the data into useful information.

4. Select infrastructure and tools that automate or eliminate traditional high-cost tasks such as import, provisioning, scalability, and disaster tolerance. A highly virtualized infrastructure with complementary tools should provide the majority of these capabilities.

5. Commit to the process of learning as an entirely new approach to technology, and to adopting it in risk-appropriate increments.

Any organization with a significant data infrastructure should be aware of the pitfalls that could occur if a company rushes into acquiring new technologies without understanding their requirements. Thorough analysis will lead to an understanding of the current state of their data management systems, and subsequently to better control of their existing data.

Ultimately, organizations should be able to recognize, manage, and utilize new forms of disparate and seemingly extraneous information as data. Companies that develop a plan to comprehensively address the issues around managing and utilizing all their useful data will gain significant strategic advantages.

Friday, January 10, 2014

Unified Index to Information Repositories

The amount of information is doubling every 18 months, and unstructured information volumes grow six times faster than structured ones.

Employees spend far too much time, about 20% of their time on average, looking for, failing to find, and recreating information. And once they do find information, 42% of employees report having used the wrong information, according to a recent survey.

To combat this reality, companies have for years spent hundreds of thousands, even millions, of dollars to move data into centralized systems, in an effort to better manage and access its growing volumes, only to be disappointed as data continues to proliferate outside those systems. Even with a single knowledgebase in place, employees report declines in critical customer service metrics due to the inability to quickly locate the right knowledge and information to serve customers.

Despite best efforts to move data to centralized platforms, companies are finding that their knowledgebase runs throughout enterprise systems, departments, divisions and newly acquired subsidiaries. Knowledge is stored offline in PCs and laptops, in emails and archives, intranets, file shares, CRM systems, ERPs, home-grown systems, and many others—across departments and across geographies.

Add to this the proliferation of enterprise applications use (including social networks, wikis, blogs and more) throughout organizations and it is no wonder that efforts to consolidate data into a single knowledgebase, a single "version of the truth" have failed... and at a very high price.

The bottom line is, moving data into a single knowledgebase is a losing battle. There remains a much more successful way to effectively manage your knowledge ecosystem without moving data.

When multiple systems containing an organization's information are in place, a better approach is to stop moving data and instead combine structured and unstructured data from virtually any enterprise system, including social networks, into a central, unified index. Think of it as an indexing layer that sits above all enterprise systems, from which services can be provided to multiple departments, each configured to that department's specific needs.

This approach enables dashboards focused on various business departments, processes, and prospective customers. Such composite views of information provide new, actionable perspectives on many business processes, including overall corporate governance. The resulting juxtaposition of key metrics and information improves decision making and operational efficiency.

This approach allows IT departments to leverage their existing technology, and avoid significant cost associated with system integration and data migration projects. It also helps companies avoid pushing their processes into a one-size-fits-all, cookie-cutter framework.

With configurable dashboards, companies decide how/what/where information and knowledge is presented, workflows are enabled, and for what groups of employees. Information monitoring and alerts facilitate compliance. There is virtually no limit to the type of information and where it is pulled from, into the central, unified and, importantly, highly secure index: structured, unstructured, from all corporate email, files, archives, on desktops and in many CRMs, CMS, knowledgebases, etc.
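The idea of one central index serving differently configured departmental views can be sketched as follows; the department names, source labels and filter scheme are assumptions made for illustration:

```python
# One shared central index; each department sees it through its own
# configured filter rather than through a separate copy of the data.

central_index = [
    {"text": "Invoice overdue for Acme", "source": "erp"},
    {"text": "Acme support ticket: login issue", "source": "crm"},
    {"text": "Acme blog comment about pricing", "source": "social"},
]

# Per-department configuration: which sources each view exposes.
department_views = {
    "finance": {"sources": {"erp"}},
    "support": {"sources": {"crm", "social"}},
}

def view_search(department, query):
    """Search the shared index through one department's configured lens."""
    allowed = department_views[department]["sources"]
    return [
        d["text"] for d in central_index
        if d["source"] in allowed and query.lower() in d["text"].lower()
    ]

print(view_search("support", "acme"))  # crm + social hits only
```

The data is indexed once; only the presentation layer differs per department, which is what avoids both migration projects and one-size-fits-all workflows.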

Enterprise applications have proliferated throughout organizations, becoming rich with content. And yet all of that knowledge and all of that content remain locked within the community, often not even easily available to the members themselves.

Now it is possible to leverage the wisdom of communities in enterprise search efforts. User rankings, best bets and the ability to find people through the content they create are social search elements that provide the context employees and customers have come to expect from their interactions with online networks.

Imagine one of your sales executives preparing to sell to one of your company's largest accounts. They access a composite, 360-degree view of that company and see not only the account history, sales opportunities, contact details, prior email conversations, proposals, contracts, and customer service tickets, but also that customer's recent comments on a blog post, complaints about service, and questions posed within your customer community.

Armed with this knowledge, your sales executive is in a more informed position to better assist and sell to that customer. Without moving data your sales executive has a single, composite view of information that strategically informs the sales process.

Ubiquitous knowledge access allows employees to search where they work. Once you have created the central index, you need to provide your employees with anytime/anywhere access to pertinent information and knowledge.

In many organizations, employees spend a lot of their time in MS Outlook. Other organizations with large sales teams need easy access to information on the road. Also valuable is the ability to conduct secure searches within enterprise content directly from a BlackBerry, including guided navigation. Even when systems are disconnected, including laptops, users can easily find information from these systems, directly from their mobile device. Again, without moving data, organizations can enjoy immediate, instant access to pertinent knowledge and information, anywhere, anytime.

Companies that stopped moving data report favorable results from their unified information index layer spanning multiple repositories: faster resolution of customer issues, significant reductions in dedicated support resources, savings on upgrade costs for replaced legacy systems, increases in self-service customer satisfaction, and reduced average response time to customer queries.

A few applications currently on the market fulfill these functions. These are enterprise search applications.

However, there is no one-size-fits-all approach. Any solution should be based on an organization's business requirements.

Saturday, December 28, 2013

Tips to Ensure Knowledge Management Success

It is very important for the long-term success of a knowledge management initiative to align it with organizational strategy, especially in times of change. A KM initiative can "drift" over time if measures are not taken to align it with the organization's mission, new turns in direction, management changes, and different product/service offerings.

Here are useful, actionable tips to ensure that an organization's knowledge management initiative succeeds not just at launch stage but also over the years.

1. Bring knowledge management into mission-critical activities. Knowledge management is a great enabler of many business processes, and it can be especially relevant to ensuring the success and continuity of mission-critical activities in areas ranging from banking to security. For example, you can leverage knowledge management to acquire, retain, and spread mission-critical knowledge in IT global services.

2. Focus on knowledge retention during times of downsizing or reorganization. Globalization, aging workforces and economic downturns are leading to the loss of valuable knowledge. KM can help to close that gap in the near term and especially in the long term.

3. Use KM to improve understanding and execution of business reorganization. KM sometimes gets put aside during complex organizational restructuring, but it can actually be useful in determining how to reorganize effectively. Some companies seem to spend almost half of their time on restructuring, yet do not use KM to restructure more effectively or innovatively.

4. Go beyond connecting to networking. KM at the people level sometimes gets stuck at the stage of people profiles and a bewildering range of discussion forums. It is important to add collaborative tasks on top of such connections, so that actual networking takes place and collective intelligence emerges.

5. Conduct more research on knowledge work. With all the commotion about social media in the enterprise, people tend to forget that knowledge work is essentially built on effective communication. More research is needed about the changing workplace to understand how KM is becoming even more critical to 21st century organizations, and how knowledge seeking/collaboration behaviors of knowledge workers are changing.

6. Pay more attention to design and visualization. In a workplace of increasing information overload and multitasking, it is important to design knowledge interactions and interfaces in a compelling yet effective manner. Effective design can help in sense-making in fast changing and information-intensive environments.

7. Pay attention to the requirements of mobile knowledge workers. BYOD (bring your own device to the office) is now a common practice. More and more employees and managers are using mobile devices not just for accessing information but also for full workflows. Knowledge processes should be optimized for mobile devices, not just in terms of device interface but also in speed of delivery, e.g. fast-loading dashboards for sales teams.

8. Blend informal and formal activities in knowledge sharing sessions. For example, a knowledge fair format with each project team presenting its achievements and learning reinforces the KM message for all participants. The very act of presenting a KM case study can help employees develop a deeper appreciation of the strengths and opportunities for KM at work in the long term, and instills a sense of pride.

9. Engage many different audiences with the KM initiative; don't restrict it to only select managers or project managers. The more people who engage with KM in full-time or part-time roles, the more buy-in KM will gain and the more value it will contribute.

10. Highlight KM practitioners across the organization. Don't just showcase the usual super-achievers; also feature the employees who are coming up with their first, unique work insights or first reuse of existing knowledge assets.

11. Don't pitch KM as an extra activity to be done after usual work hours; it should be embedded in regular workflow. Even additional activities such as conferencing and industry meetings should be seen as a way of learning, brainstorming and bench-marking.

12. Avoid too much theory. While the core team certainly needs to be abreast of developments in KM models and research, its recommendations and implementations must be demystified and simplified so that employees are not distracted or confused with more buzzwords.

13. Don't get hung up on the name KM. Some people seem to have a problem with the words knowledge management and even KM. Other terms such as collaborative work or knowledge sharing seem to be in use as well.

14. Use metrics and analytics effectively, and conduct KM course corrections as appropriate. Many KM initiatives stop their outcome studies at the level of activity metrics, but fail to connect them to deeper processes, knowledge insights, people attitudes and overall impacts on productivity and innovation. One company reported that only 40% of its knowledge assets were being used, and some were being viewed only by the creator. At the same time, metrics are not the only assessment.

15. Help ensure long term success of KM by evangelizing it to employees. This helps create awareness in employees about the importance of KM and strengthens the KM initiative.

Thursday, November 21, 2013

Choosing the Right Content and Knowledge Management Tools

Effective content and knowledge management is a combination of theory, practice, and technology. You should not focus too heavily on the technology without considering the other parts.

However, effective technology deployment is essential to content and knowledge management success. The challenge is that there is no single "content and knowledge management tools" marketplace. Depending on the application, content and knowledge management can involve different types of technology spanning many diverse market segments.

Today, content and knowledge management practitioners need to follow technology developments.

For many years, the main platforms for content and knowledge management revolved around searchable knowledgebases and discussion forums. Enterprise portals emerged to try to present enterprise information via a single dashboard. That didn't usually work out so well, although portal technology still plays a key role for many use cases today.

Similarly, enterprise search has held an important place in enterprise content management, making the right information easily retrievable from multiple repositories through a single interface. Search technology still plays a critical role in many cases.

Now content management practitioners have to take into account a wide range of repositories and applications, from enterprise video to social media monitoring and intelligence. The diversity of content management technology continues to grow.

Of course, knowledgebases still remain important, but the way we build and manage them has changed dramatically:
  • Wikis now power some of the most definitive knowledgebases within and beyond the enterprise.
  • Sophisticated social Q&A applications are generating impressive, demand-driven knowledge sets in many environments.
  • Digital community spaces are not new, but richer community platforms with increasingly important facilitation features have made them far more accessible in the enterprise.
  • Ideation (a.k.a., open innovation) applications are also coming of age, amid much healthy experimentation.
For content management practitioners, this means mastering a new set of technologies to address old problems. But the opposite is also true: some older technologies are finding new uses within the enterprise.

Digital asset management and media asset management platforms are not new. What has changed is their increasing adoption in broader enterprise contexts. More and more of our digital knowledge is no longer textual.

Much of the textual knowledge that does remain still resides in files waiting to be liberated. Hence the meteoric rise of file-sharing services, most of them cloud-based and many of them now targeting enterprise scenarios.

The rise of social media monitoring and intelligence has given new life to the field of text analytics, even while exposing the limitations of individual analytics engines.

Not every organization needs all of these types of tools. But the savvy content management practitioner can help guide his or her colleagues to the appropriate technology for their organization.

Very often over the past decade, when content management practitioners began new projects, the preferred solution was Microsoft SharePoint, a platform that can seemingly do it all. Remember not to focus too much on any one platform; base decisions on your organization's business requirements.

For most organizations, initial investments in social computing have centered on creating social spaces where employees could go to engage in more informal discussions and networking. The actual results have often proved uneven, yet promising enough to sustain further investment and experimentation within most enterprises.

Social features are important to effective enterprise collaboration. A more social and collaborative digital workplace experience has become increasingly essential across enterprise computing. Your colleagues may really want a social layer across their digital experience, or they may not. Again, remember your business requirements.

Many new tools come with their own repositories and, left alone, will lead to more information silos, reducing their long-term value. Many vendors argue that search technologies will solve that problem, but you also need to focus on things like filtering services for activity streams and appropriate levels of information management.

You will also add value by demonstrating that collaboration and knowledge sharing are not places people go, but things they do.

With the rise of mobile, that kind of contextual relevancy has become more urgent. But it is going to require an understanding of a wider range of technology choices.

Content management practitioners are uniquely positioned to help the organization put new tools in the context of daily work. Understand which tool suits which job, and advocate for a scenario-based approach to all technology selections. The right tool is not sufficient for content and knowledge management success, but it is an increasingly important condition.

Friday, November 8, 2013

New Trends in Records Management

Records management has changed significantly, and new trends are emerging.

Records management professionals should adapt to these changes and be open to collaborating and working cross-functionally with business users to develop a road map for identifying and exploring new sources of corporate records.

Technology has evolved since the first wave of records management tools entered the market. Regulations have changed and now cover a broader spectrum of electronically stored information, for example video, social media, or instant messaging, not just the images or office documents that have traditionally been called "records." Information governance is emerging as a term that describes and supports a holistic, life-cycle view of the creation, use, and protection of digital information.

Let's look more closely at these new trends.

Records management shifts to information governance.

Work-in-progress documents and structured or semi-structured information require governance, including protection from unauthorized access, use, deletion, or disclosure across their life cycle. In other words, the frequently changing data in your systems reflects your business activities. Holds and discovery to meet litigation or audit needs must be available regardless of an item's record status or storage location, i.e. on-premises or in a cloud service.
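The hold requirement above can be illustrated with a minimal sketch. The field names and hold registry here are hypothetical, but the principle holds generally: disposition logic must check active litigation or audit holds before any deletion, whether or not the item was ever declared a formal record.

```python
# Minimal sketch of hold-aware disposition: an item under a litigation
# or audit hold is never eligible for deletion, regardless of whether
# it was ever declared a formal "record". Field names are illustrative.
def eligible_for_disposal(item, active_holds):
    """An item may be disposed of only if its retention has expired
    and no active hold covers it."""
    if item["id"] in active_holds:
        return False
    return item["retention_expired"]

holds = {"doc-17"}
print(eligible_for_disposal({"id": "doc-17", "retention_expired": True}, holds))  # False
print(eligible_for_disposal({"id": "doc-42", "retention_expired": True}, holds))  # True
```

The check is deliberately ordered: the hold test runs first, so a hold overrides any expired retention schedule.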

Many businesses continue to lack confidence in the progress of their electronic records management programs, compliance initiatives, and e-discovery preparedness. The shift to a more comprehensive and proactive management of information across its entire life cycle has begun.

The concepts of file plans, folders, file parts, and cutoffs have imposed constraints on how electronic information is organized and managed. The vocabulary of the paper records center is no longer relevant in a digital-first organization. Today, an information worker can compose an agreement in a cloud application such as Google Docs, share it with a colleague, edit it on a tablet device, and push revisions to a customer collaboration site even while on an airplane. Increasingly, business records are generated outside traditional desktop applications.

New vendors are taking a fresh approach to compliance, categorization, and retention requirements, managing information proactively across its entire business life cycle rather than just at its end, and governing information beyond the formal "record." Technologies with strong roots in search, archiving, and retention management offer the capabilities to manage many forms of electronically stored information, such as social activity or rich media, even when such information is not formally flagged as a record.

Cloud and social platforms render "file and declare" ineffective.

Traditional records management tools are slow to make the leap to the cloud. Records managers may be at risk of holding obsolete assumptions about importing or filing content into an on-premises records repository.

Records and compliance managers remain wary of cloud and social platforms. Enterprise architects and their peers in records management practice groups are not aligned on the benefits of cloud computing. Cloud providers are increasingly supporting content segregation, security, privacy, and data sovereignty requirements to attract regulated industries, and they are offering service-level agreements designed to reduce risks. In spite of that, records managers still cite security, legal, and privacy risks as the top three reasons to stall adoption of cloud services and SaaS.

Current records management systems are already missing many forms of electronically stored information. Older types of electronically stored information, such as images, e-mail or office documents, are often captured into traditional records management applications. Newer content types are less likely to have policy-driven life cycle or retention rules applied. Mobile messages, social media posts or websites are important sources of discoverable information, but application of legal holds to that content can be difficult.

Digital preservation forces itself onto the governance agenda.

Digital records with long-term retention schedules are at risk when hardware devices, software applications, and file formats become obsolete. Many first-generation business and personal productivity tools have been retired, and the inability to retrieve or view older digital records is becoming a reality.

Organizations are slowly waking up to digital preservation concerns. Migration, conversion and adoption of open standards are accepted approaches to solve the problem of accessibility over time. Those approaches, however, are not widely adopted at this time.

Decisions to retire older enterprise applications raise content preservation concerns. As organizations begin infrastructure renewal projects, particularly as new SaaS and cloud-based applications become viable alternatives, IT and records professionals must assess the risk of losing information in those older systems. Decisions to maintain older systems in read-only mode, to migrate data into newer systems or to dispose of older systems all together must be made in accordance with business, legal and compliance needs.

Open standards and open source change the sourcing landscape.

Companies are driving significant change in the software acquisition landscape by calling for the deliberate adoption of open standards and open source. Governments are hedging against the potential loss of electronic information, software obsolescence, and increased costs, as well as demanding more portable data. Between 2011 and 2012, several national governments published directives to help their IT, records, and procurement managers understand, investigate, and select more open technology platforms.

Open standards help to address preservation, accessibility, and interoperability needs. Open source helps to reduce cost and minimize vendor and platform lock-in. Programs developed by governments around the world have raised the profile and acceptance of open source software. Governments in the United Kingdom, the United States, and Europe have taken a proactive approach to using software systems and file formats based on open standards.

Auto-categorization becomes viable and approachable.

Transactional, regulated, and semi-structured content is ready for automated capture, categorization, and application of retention policies. Electronic information that uses a consistent structure and embedded metadata, or that includes predictable patterns of numbers or text, lends itself to content analytics, entity extraction, and categorization tools for ingestion and the application of retention, disposition, and security or privacy access controls. Opportunities to use auto-classification technologies for routine, high-volume, predictable electronic content are increasing as the technology matures, more vendors provide integrated offerings, and use cases are identified.
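As a minimal illustration of the pattern-driven classification described above, predictable structures such as invoice or purchase-order numbers can be matched to assign a category and retention policy. The patterns, categories, and retention labels here are invented for the example; commercial auto-classification engines use far richer analytics.

```python
import re

# Hypothetical pattern-to-category rules; real deployments would use
# vendor auto-classification engines and much larger taxonomies.
RULES = [
    (re.compile(r"\bINV-\d{6}\b"), "invoice", "retain-7-years"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "pii", "restricted-access"),
    (re.compile(r"\bPO#\s?\d+\b"), "purchase-order", "retain-5-years"),
]

def classify(text):
    """Return (category, retention_policy) for the first matching rule,
    or a default bucket for content that needs human review."""
    for pattern, category, policy in RULES:
        if pattern.search(text):
            return category, policy
    return "unclassified", "review-manually"

print(classify("Attached is invoice INV-204518 for Q3 services."))
print(classify("Meeting notes from Tuesday."))
```

In practice a rule engine like this is only the front end; confidence scores, sampling, and human review queues are what make large-scale classification defensible.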

Auto-classification joins compliance needs and business priorities. High-volume, transactional information is a pain point when storage costs escalate and discovery requests are made. Capture, categorization, and retention scheduling reduce costs, streamline customer service, and increase the digitization of processes. Consistent organization creates a foundation for content analytics and predictive technologies. Consistent disposal of obsolete information reduces the need for more storage resources, facilitates faster retrieval of data, and lowers the cost of e-discovery.

Big content is as important as big data and requires well-thought-out governance. Big data gets a lot of attention, but organizations must also cope with information stored in semi-structured or unstructured forms. Tabular data often sits unnoticed and unanalyzed in files created by individuals or small teams. E-mail, spreadsheets, and ad hoc databases are used for critical business decisions and often sit under the radar of compliance or audit managers on file shares, collaboration sites, or personal computers. 70% of companies use spreadsheets for critical business decisions, but fewer than 34% apply governance or controls to them.

Technology enforces and automates defensible approaches to disposition. Organizations that demonstrate consistent and predictable approaches to information handling, including its final deletion, are more successful when e-discovery orders compel extensive search, retrieval, review and production activities. Automation of routine processes, including scheduled disposal, lends weight to retention programs when challenged by legal counsel or auditors. Auto-classification tools ensure that retention and disposition rules are applied within specific parameters and are supported by documented policy rationale and citations.

Thursday, October 31, 2013

Information Governance With SharePoint

The goals of any enterprise content management (ECM) system are to connect an organization's knowledge workers, streamline its business processes, and manage and store its information.

Microsoft SharePoint has become the leading content management system in today's competitive business landscape as organizations look to foster information transparency and collaboration by providing efficient capture, storage, preservation, management, and delivery of content to end users.

A recent study by the Association for Information and Image Management (AIIM) found that 53% of organizations currently utilize SharePoint for ECM. SharePoint's growth can be attributed to its ease of use, incorporation of social collaboration features, as well as its distributed management approach, allowing for self-service. With the growing trends of social collaboration and enhancements found in the latest release of SharePoint 2013, Microsoft continues to facilitate collaboration among knowledge workers.

As SharePoint continues to evolve, it is essential to have a solution in place that would achieve the vision of efficiency and collaboration without compromising on security and compliance. The growing usage of SharePoint for ECM is not without risk. AIIM also estimated that 60% of organizations utilizing SharePoint for ECM have yet to incorporate it into their existing governance and compliance strategies. It is imperative that organizations establish effective information governance strategies to support secure collaboration.

Two useful new features in SharePoint 2013 help with compliance. The eDiscovery Center is a SharePoint site that gives you more control over your data: it allows you to identify, hold, search, and export documents needed for e-discovery. The "In-Place Hold" feature allows you to preserve documents and place a hold on them while users continue working on them. These features are available for both on-premises and in-cloud solutions.

SharePoint 2013 is integrated with Yammer, which provides many social features. This presents new challenges for compliance. Yammer is planning to integrate more security in future releases, but for now organizations need to create policies and procedures for these social features. Roles like "Community Manager", "Yambassadors", and "Group Administrators" might be introduced.

There are third-party tools that can be used with SharePoint for compliance and information governance: Metalogix and AvePoint for governance and compliance; CipherPoint and Stealth Software for encryption and security; ViewDo Labs and Good Data for Yammer analytics and compliance.

In order to most effectively utilize SharePoint for content management, there are several best practices that must be incorporated into information governance strategies as part of an effective risk management lifecycle. The goal of any comprehensive governance strategy is to mitigate risk, whether this entails downtime, compliance violation or data loss. In order to do so, an effective governance plan must be established that includes the following components:

Develop a plan. Before establishing governance procedures, organizations must understand the types of content SharePoint contains. It is important to involve the appropriate business owners and gather any regulatory requirements. These requirements will help drive information governance policies for content security, information architecture, and lifecycle management.

When determining the best approach to implement and enforce content management and compliance initiatives, chief privacy officers, chief information security officers, compliance managers, records managers, SharePoint administrators, and company executives will all have to work together to establish the most appropriate processes for their organization as well as an action plan for how to execute these processes. During the planning phase, your organization should perform an assessment, set your organization's goals, and establish appropriate compliance and governance requirements based on the results of the assessment to meet the business objectives.

Implement your governance architecture. Once your organization has developed a good understanding of the various content that will be managed through SharePoint, it is time to implement the governance architecture. In this phase, it is important to plan for technical enforcement, monitoring and training for employees that address areas of risk or noncompliance. It is important to note that while SharePoint is known for its content management functionality, there are specific challenges that come with utilizing the platform as a content management system for which your governance architecture must account: content growth and security management.

In order to implement effective content management, organizations should address and plan to manage growth of sites, files, storage, and the overall volume of content. Organizations without a governance strategy often struggle with proliferation of content and no solutions to manage or dispose of it. This is a huge problem with file servers. Over time, file servers grow to the point where they become a bit like the file cabinet collecting dust in the corner of your office: it is easy to add a new file, but you will not find it later when you need it. The challenge comes from planning how to organize and dispose of out-of-date content.

SharePoint offers the technology to address these challenges, but only if it is enabled as part of your governance plan. Information management policies can be used to automatically delete documents, or third-party solutions can be used to archive documents, libraries, and sites. By default in SharePoint 2013, Shredded Storage is enabled to reduce the overall storage of organizations that are utilizing versioning. Remote BLOB Storage (RBS) can also be enabled in SharePoint or through third-party tools to reduce SharePoint's storage burden on SQL Server.

Tagging and classification plays a key role in information governance. Proper classification can improve content findability. Organizations can utilize SharePoint's extensive document management and classification features, including Content Types and Managed Metadata to tag and classify content. Third-party tools that extend SharePoint's native capabilities can also filter for specified content when applying management policies for storage, deletion, archiving, or preservation. Ultimately, however, the people in your organization will play the biggest role here. As such, your plan should identify who the key data owners are and the areas for which they are responsible. This role is often filled by a "site librarian" or those responsible for risk management in the enterprise.

In order to minimize risk to the organization, it is imperative to ensure information is accessible to the people who should have it and protected from the people who should not. SharePoint has a very flexible permissions system that can address this need.

Ongoing assessments. To ensure that established governance procedures continue to meet your business requirements, ongoing assessment is required. Conduct ongoing testing of business solutions; monitor system response times, service availability, and user activity; and perform assessments to verify that you have complied with your guidelines and requirements for properly managing content. That content is essentially your intellectual property, the lifeblood that sustains your organization.

React and revise as necessary. To continue to mitigate risk, respond to evolving requirements, and harden security and access controls, you must take the information gathered in your ongoing assessments and use it to make more intelligent management decisions. Continue to assess, react, and revise as necessary. With each change, continue to validate that your system meets the necessary requirements.

The risk has never been higher, especially as more data is created and growing regulatory compliance mandates require organizations to ensure that their content is properly managed.

If you develop a plan, implement a governance architecture that supports that plan, assess the architecture on an ongoing basis, and react and revise as necessary, your organization will have the support and agility necessary to truly use all of the content it possesses to improve business processes, innovation, and competitiveness while lowering total costs.

Monday, October 28, 2013

Meeting the Social Media Challenge

When social media volume is low, it is typically handled manually by one or more people in a company. These people are assigned to check Facebook and/or Twitter a couple of times a day and respond when appropriate.

As the volume of inquiries grows, it becomes expensive to respond manually to the posts and comments, and nearly impossible to do it on a timely basis. After a while, it becomes clear that automation is necessary to respond to the large number of social media comments in appropriate time frames.

During the next few years, organizations of all sizes will need to build a social media technology servicing framework to handle an increasing volume of inquiries, complaints, and comments. As social media is conceptually just another channel, it should be incorporated into the enterprise's overall servicing framework. However, the unique characteristics and demands of social media interactions require specialized solutions and processes, even though the responses should be consistent in all channels.

There are many applications to help organizations handle their social media servicing challenges, and new ones are constantly being introduced. However, currently, there is no single solution that addresses all necessary requirements. Enterprises that want a complete solution need to purchase several applications and integrate them. They should also merge these applications with their existing servicing infrastructure to ensure an excellent customer experience.

The underlying technical components required to build a social media servicing infrastructure are:
  • Tools for monitoring social media sites for brand and company mentions.
  • Data acquisition/capture tools to identify and gather relevant social media interactions for the company.
  • Data extraction tools that separate "noise" from interactions that require immediate or timely responses.
  • An engine for defining business rules that generates alerts, messages, pop-ups, alarms, and events.
  • Integration tools to facilitate application-to-application communication, typically using open protocols such as Web services. Prebuilt integration tools, along with published application programming interfaces, should be provided for contact center applications.
  • Storage to house and access large volumes of historical data, and an automated process to retain and purge both online and archived data. Additional capabilities may include the ability to access archived data via other media, such as a CD-ROM, and the ability to store and retrieve data in a corporate storage facility, such as a network-attached storage or storage area network.
  • Database software for managing large volumes of information.
  • Work flow tools to automate business processes by systematically passing information, documents, tasks, notifications, or alerts to another business process (or person) for additional or supplementary action, follow-up, or expertise.
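The automated retain-and-purge process mentioned in the storage component above can be sketched as a simple age-based partition. The retention periods here are illustrative assumptions, not regulatory guidance; real schedules come from legal and compliance teams.

```python
from datetime import datetime, timedelta

# Illustrative retention periods by interaction type (assumed values).
RETENTION_DAYS = {"social_post": 365, "chat": 180, "email": 2555}

def partition_for_purge(items, now=None):
    """Split stored interactions into (keep, purge) lists by retention age."""
    now = now or datetime.utcnow()
    keep, purge = [], []
    for item in items:
        limit = timedelta(days=RETENTION_DAYS.get(item["type"], 365))
        (purge if now - item["created"] > limit else keep).append(item)
    return keep, purge

items = [
    {"id": 1, "type": "chat", "created": datetime(2013, 1, 1)},
    {"id": 2, "type": "email", "created": datetime(2013, 10, 1)},
]
keep, purge = partition_for_purge(items, now=datetime(2013, 10, 28))
```

A production process would also honor legal holds and write an audit trail before deleting anything, so that disposal remains defensible.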
The core administrative tools needed are:
  • User administration capability with prebuilt tools to facilitate system access, user set-up, user identification and rights (privileges), password administration, and security.
  • Alert management capability that allows thresholds to be set so that alarms, alerts, or notifications can be enabled when predefined levels or time frames are triggered when violations or achievements occur (examples include alerts to signal changes in topics, emerging issues, and sentiment).
  • Metrics management, including the ability to enter, create, and define key performance indicators (KPIs) and associated metrics.
  • System configuration with an integrated environment for managing application set-up, and parameters for contact routing, skill groups, business rules, etc.
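The threshold-driven alert management described in the list above reduces, at its core, to comparing incoming KPI values against configured limits. The KPI names and threshold values here are hypothetical.

```python
# Hypothetical KPI thresholds; an alert fires when a value exceeds its limit.
THRESHOLDS = {
    "negative_sentiment_pct": 20.0,  # percent of posts with negative sentiment
    "avg_response_minutes": 60.0,    # average time to respond to a post
}

def check_alerts(metrics):
    """Return an alert message for each KPI that crosses its threshold."""
    alerts = []
    for kpi, limit in THRESHOLDS.items():
        value = metrics.get(kpi)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {kpi}={value} exceeds threshold {limit}")
    return alerts

print(check_alerts({"negative_sentiment_pct": 35.0, "avg_response_minutes": 12.0}))
```

Real alert managers add time windows, escalation rules, and notification channels on top of this comparison step.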
The core servicing functionality includes:
  • Skills-based routing tools to deliver identified interactions to agents or other employees with the proficiency to address them.
  • The ability to queue and route transactions (calls, emails, chat/IM, and social media posts) to the appropriate agent, employee, or team.
  • Text analytics software that uses a combination of statistical or linguistic modeling methods to extract information from unstructured textual data.
  • Filtering tools that separate "noise" from social media customer interactions that require immediate or timely responses.
  • Topic categorization software that identifies themes and trends within social media interactions.
  • Root cause analysis, a problem-solving tool that enables users to strip away layers of symptoms to identify the underlying reasons for problems or issues.
  • Search and retrieval abilities that allow large volumes of data to be searched, based on user-defined queries, to retrieve specific instances.
  • Sentiment analysis capability that can identify positive or negative sentiment about a company, person, or product, and assign a numerical score based on linguistic and statistical analysis.
  • A social CRM servicing solution that logs and tracks received social media interactions so that agents or employees can view the post/comment, create a customized response, and issue or post it.
  • Response templates that comprise a library of customizable responses to common social media posts.
  • A social media publishing tool that enables users to publish posts to social media sites.
  • Reporting functionality in which reports can be set up based on collected data, metrics, or KPIs in a preferred presentation format (chart or graph); this should also include the ability to create custom reports based on ad hoc queries.
  • Scorecards/dashboards for all constituents in an organization: agents, supervisors, managers, other departments, and executives.
  • An analytics tool that conducts multidimensional analyses of social media data, used to look for trends and data relationships over time, identify emerging issues and root causes, etc.
  • Recording software to capture social media inputs and responses.
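Of the servicing capabilities above, sentiment analysis is perhaps the easiest to illustrate. The toy scorer below uses an invented word-polarity lexicon and a simple sum; commercial products apply far richer linguistic and statistical models, so treat this only as a sketch of the idea:

```python
# Minimal lexicon-based sentiment scoring: each known word carries a polarity
# weight, the text's score is the sum of weights, and the sign of the score
# yields a positive/negative/neutral label. The lexicon is invented.
LEXICON = {
    "great": 2, "love": 2, "good": 1, "helpful": 1,
    "slow": -1, "bad": -1, "broken": -2, "terrible": -2,
}

def sentiment_score(text):
    """Return (score, label) for a post, based on the toy lexicon."""
    words = (w.strip(".,!?") for w in text.lower().split())
    score = sum(LEXICON.get(w, 0) for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return score, label

print(sentiment_score("Love the new app, support was helpful!"))
```

The same score could feed the alerting and scorecard features listed earlier, which is why these capabilities are usually sold as an integrated suite.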
Organizations also need a number of management applications to ensure that their social media teams or departments are properly trained and staffed. These tools are:
  • Quality assurance functionality to measure the quality of social media comments and posts by agents, to ensure that they are adhering to the organization's guidelines.
  • Coaching and e-learning software to deliver appropriate training courses and best practice clips to agents and other employees involved in responding to social media interactions.
  • A workforce management solution to forecast the expected volume of social media interactions that will require agent/employee assistance, and to identify and create optimal schedules (this also tracks adherence to service levels for each inquiry type).
  • Surveying software to determine whether customers/commenters were satisfied with the company's responses.
  • Desktop analytics to provide an automated and systematic approach to monitor, capture, structure, analyze, report, and react to all agent/employee desktop activity and process workflows.
  • An analytics-oriented performance management module that creates scorecards and dashboards to help contact center and other managers measure performance against preset goals.
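The forecasting step inside workforce management can be sketched with a moving average over recent interaction volumes and a headcount calculation. The handle time, weekly minutes, and shrinkage figures below are illustrative assumptions, not industry benchmarks:

```python
# Hedged sketch of workforce-management forecasting: predict next period's
# social media interaction volume as a moving average, then derive the
# number of agents needed. All parameter values are invented examples.
import math

def forecast_volume(weekly_volumes, window=4):
    """Forecast next week's volume as the mean of the last `window` weeks."""
    recent = weekly_volumes[-window:]
    return sum(recent) / len(recent)

def required_agents(volume, handle_minutes=6, weekly_agent_minutes=2400,
                    shrinkage=0.3):
    """Agents needed = total work minutes / productive minutes per agent.

    `shrinkage` is the assumed fraction of paid time lost to breaks,
    training, and other non-servicing activity.
    """
    workload = volume * handle_minutes
    productive = weekly_agent_minutes * (1 - shrinkage)
    return math.ceil(workload / productive)

volumes = [900, 1000, 1100, 1200]       # interactions per week (sample data)
expected = forecast_volume(volumes)
print(expected, required_agents(expected))
```

Real workforce-management suites layer intraday patterns, service-level targets, and schedule optimization on top of this kind of baseline forecast.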
Social media is going to change the servicing landscape for many organizations within the next five to eight years, because the volume of social media comments and posts is expected to grow rapidly, potentially comprising as much as 50 percent of all service interactions. Companies that build a servicing strategy incorporating social media will have a major advantage over their competitors.

Companies do not need all of the solutions identified above; rather, they should select the ones that allow them to incorporate social media into their servicing strategy and infrastructure, so that customers can interact with them in their preferred channel.