Wednesday, March 20, 2019

Content Naming Conventions

A file naming convention (FNC) is a framework for naming files systematically, in a way that describes what they contain and how they relate to other files. Developing an FNC means identifying the key elements of each file, and the important differences and commonalities between files.

How files are organized and named has a big impact on the ability to find those files later and to understand what they contain. File names should be consistent and descriptive, so that it is obvious where to find specific files and what they contain.

To ensure maximum access to files, it is necessary to establish a naming convention for files and use it consistently.

Here is the situation: you need to review the most recent version of a document. You log in to your organization’s content management system (CMS) and look for the file, but you can’t tell which of the documents you should be reviewing because the file names are meaningless.

This is just one small example of an information management weakness that can cause a lot of unnecessary frustration. Imagine how much more productive you and your colleagues could be if you knew what each file contained before you opened it!

File naming conventions are therefore very important: the more organized you are in managing your information, the more effective and efficient you can be in your work.

A file naming convention is a systematic method for naming files. Your file naming convention will always be your most powerful and easy method for organizing and retrieving your documents. You want to get this right the first time, so it is important to invest enough time to think about this carefully.

A consistent and descriptive file naming convention serves many purposes, often related to branding, information management, and usability. The overall goal of optimal file naming is to increase readability in file names. It empowers people to navigate files more easily, makes searching and finding documents easier by having your file names reflect file contents, and guides file authors to develop each document around a single, concrete purpose, which reduces clutter. More concretely, it allows you to:
  • know the content of a document without opening it;
  • retrieve and filter documents very quickly using the search/filter function of your computer;
  • store documents in a single folder without losing their context, if you need to;
  • find and identify documents even if they are no longer in their original folder;
  • easily browse long lists of files to inventory or check for missing documents;
  • manage documents more easily on your website.
As you can see, there are many situations in which it is helpful to have file naming conventions. A convention is necessary to organize your organization’s files so that users can find what they are looking for.

One way to know whether you need to pay some extra attention to the way you are naming your files is to apply the golden test of a good file naming convention: imagine taking all the files from your whole organization and putting them into one single folder.

Can you still quickly filter down to what you want by scrolling through the file list? Or by searching for elements of file names? If the answer is yes, your file naming is good. If not, your file naming still needs some work.
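As a rough sketch, the golden test can even be simulated in a few lines of Python; the file names and the `filter_files` helper below are invented for illustration:

```python
# The "golden test" in miniature: put every file name into one flat list
# and check whether simple substring filters still isolate what you need.
# File names here are hypothetical examples.

def filter_files(names, *terms):
    """Return the names containing every search term (case-insensitive)."""
    terms = [t.lower() for t in terms]
    return [n for n in names if all(t in n.lower() for t in terms)]

all_files = [
    "160301_HRC_Geneva_launch-001.jpg",
    "151208_HURIDOCS_Casebox_Improvements.pdf",
    "160219_SRJI_Moscow_meeting-001.jpg",
    "Document1.docx",  # fails the test: the name tells you nothing
]

print(filter_files(all_files, "moscow", "meeting"))
```

Descriptive names pass the test; a file like "Document1.docx" can never be found this way.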

Tips for designing your file naming convention

1. Consider how you want to retrieve the files

How you want to retrieve the files will help determine the right file naming convention for that file type. Keep in mind that file sorting will read from left to right.

Starting your file name with the most important parameter/component will allow you to organize documents alphabetically (or chronologically) with that parameter without having to do any searching.

For example, if your primary method of accessing a litigation case file is its number, then this should be the first element in your file naming convention: when you sort your documents in the file manager, you will see them by case number first.

To ensure that files sort in proper chronological order, the most significant date and time components should appear first, followed by the least significant components. If all the other words in the file name are the same, this convention will allow you to sort documents by year, then month, then day.

Some conventions have the date in the front of every filename because that is the most logical way for their team to retrieve files. If the document will be maintained over time, use the convention v1, v2, v3, etc. to depict its place in the sequence of versions. You may want to separate the “v” from the content type with an underscore (“_”). As versions are made and updated, change the version #, but keep the file name the same.

If there will be more than 9 files with the same name (versions, photos), zero-pad the numbers: 01, 02, 03, and so on, so that they sort in the correct order. Likewise, if there will be more than 99 files, use three digits: -001, -060, -099, -100.
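The effect of date-first ordering and zero-padding can be checked with a quick Python sketch (file names are hypothetical):

```python
# Date-first names plus zero-padded numbers make a plain alphabetical
# sort double as a chronological sort. File names are hypothetical.

names = [
    "160301_report-010.pdf",
    "151208_report-002.pdf",
    "160301_report-002.pdf",
]

# Lexicographic order is now also chronological, because the most
# significant component (the date) comes first.
print(sorted(names))

# Zero-padding keeps numeric order and sort order in agreement:
padded = [f"photo-{i:03d}.jpg" for i in (1, 60, 99, 100)]
print(padded)
```

Without the padding, "photo-100.jpg" would sort before "photo-60.jpg".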

2. Use relevant components in your file name to provide description and context. The file name should contain the essential elements of each file, dependent upon what is suitable for your retrieval needs. File names should outlast the records creator who originally named the file, so think about what information would be helpful to someone in 15 years.

Keep in mind you will most likely want to use agreed-upon abbreviations for these components in order to keep the file names short.

For example, a file naming convention may include the following components, in the following order [YYMMDD]_[Project]_[Country]_[Event]-[number].xxx

Examples of filenames based on this convention:

160301_HRC_Geneva_launch-001.jpg
151208_HURIDOCS_Casebox_Improvements.pdf
160126_HURIDOCS_EHRAC_meeting_notes.rtf
160219_SRJI_Moscow_meeting-001.jpg
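If it helps your team, names following such a convention can be assembled by a small script; the `build_name` helper below is a hypothetical sketch, not a standard tool:

```python
# Hypothetical helper that assembles a name following the convention
# [YYMMDD]_[Project]_[Country]_[Event]-[number].xxx described above.

def build_name(date, project, country, event, number=None, ext="pdf"):
    name = f"{date}_{project}_{country}_{event}"
    if number is not None:
        name += f"-{number:03d}"  # zero-pad so sorting stays correct
    return f"{name}.{ext}"

print(build_name("160301", "HRC", "Geneva", "launch", number=1, ext="jpg"))
```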

3. Keep the filename a reasonable length

Long file names do not work well, so it’s best to keep them short. To achieve this, you could consider:
  • shortening the year to 2 digits;
  • abbreviating file name components (e.g. use “inv” instead of “invoice”, or “fr” instead of “francais”);
  • using as few words as possible to convey the identity of the document.
4. Avoid special characters and spaces

Special characters such as ~ ! @ # $ % ^ * ( ) ` ; ? , [ ] { } ‘ ” | should be avoided. Do not use spaces, since some software will not recognize file names with spaces. Use one of these alternatives instead:
  • underscores (e.g. file_name.xxx);
  • dashes (e.g. file-name.xxx);
  • no separation (e.g. filename.xxx);
  • camel case, where the first letter of each section of text is capitalized (e.g. FileName.xxx).
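A rule like this can be enforced mechanically. The sketch below, with an invented `safe_name` helper, keeps letters, digits, dots, dashes and underscores and replaces every run of anything else with an underscore:

```python
import re

# Hypothetical sanitizer: keep letters, digits, dots, dashes and
# underscores; replace every run of anything else with an underscore.

def safe_name(title):
    cleaned = re.sub(r"[^A-Za-z0-9._-]+", "_", title)
    return cleaned.strip("_")

print(safe_name("file name.xxx"))  # -> file_name.xxx
```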

5. Do not start the file name with special characters under any circumstances.

6. Document and share your file naming convention, and get your team on-board.

Make it easy to understand, use and find the file naming conventions by documenting them and putting them in a place that is easy to find.

Hold a short and fun internal training session to explain why the new file naming conventions are so important to use, and how they work. Create a video that goes through the key points of these conventions.

Example of a digital photo file naming convention

Professional photographers also use file naming conventions to organize their photos. A photographer may take thousands of photos in a single shoot, and they do not depend on the file names produced by the camera or rely on folder structures alone. Rather, they typically use a file naming convention, such as: [Date] – [place or event] – [number] – [comment].

Examples:

2011.11.11-kampala-riot-000001.tiff
2011.11.11-kampala-riot-000002.tiff
2011.11.11-kampala-riot-000003.tiff
2011.11.11-kampala-riot-000004.tiff
2011.11.11-kampala-riot-000004-cropped-slider660x510.jpg

As you can see, the photos above relate to riots that took place in Kampala on 11 November 2011. They were shot in TIFF format. The last photo is a derivative of the previous one: it is an image cropped for the slider. Even if there are tens of thousands of photos in the same folder, it’s easy to filter for “kampala” and “riot”. Photography software like Adobe Lightroom or Adobe Photoshop allows you to batch rename files as above.
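Outside of tools like Lightroom, the same batch rename can be sketched in a few lines of Python; `renamed` is a hypothetical helper that only computes the new names, leaving the actual renaming to the caller:

```python
# Hypothetical batch-rename sketch in the spirit of Lightroom's tool:
# map camera names like DSC_0001.NEF onto [Date]-[place]-[event]-[number].

def renamed(originals, date, place, event, ext="tiff"):
    """Return (old, new) pairs; a caller could then os.rename() each pair."""
    pairs = []
    for i, old in enumerate(sorted(originals), start=1):
        new = f"{date}-{place}-{event}-{i:06d}.{ext}"
        pairs.append((old, new))
    return pairs

for old, new in renamed(["DSC_0002.NEF", "DSC_0001.NEF"],
                        "2011.11.11", "kampala", "riot"):
    print(old, "->", new)
```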

Example of using a file naming convention when scanning documents

If you’re using a scanner to digitize documents, it will typically produce PDF documents with filenames like 20120202095112663.pdf. This is not helpful for browsing thousands of documents! Instead, using a file naming convention will result in document names like the following: ICJ-submission-CAT47-Greece-20111011.pdf.

You can guess what this document is about without opening it. In this case, it is a submission by the International Commission of Jurists to the Committee Against Torture at its 47th session, on 11 October 2011, concerning Greece.
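Because the convention is consistent, those components can also be recovered mechanically; the parser below is a hypothetical sketch whose field names are assumptions:

```python
# Hypothetical parser for the scanned-document convention shown above;
# the field names are assumptions, not part of any standard.

def parse_scan_name(filename):
    stem = filename.rsplit(".", 1)[0]      # drop the .pdf extension
    org, doctype, session, country, date = stem.split("-")
    return {"org": org, "type": doctype, "session": session,
            "country": country, "date": date}

print(parse_scan_name("ICJ-submission-CAT47-Greece-20111011.pdf"))
```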

Enjoying the fruits of your labor: how to find your file

Using consistent file naming conventions will let you quickly find the content you are looking for.

Galaxy Consulting has 20 years experience in content naming conventions. Please contact us for a free consultation.

Friday, February 22, 2019

Taxonomy Governance

When organizations need a taxonomy, they focus on taxonomy development and do not take into consideration the need for taxonomy governance. Taxonomy governance is part of information governance and should be taken seriously.

Taxonomies exist to support business processes and the associated organizational goals. A well-managed taxonomy provides the structure needed to manage content across multiple internal systems and gives users options and flexibility for how content is accessed and displayed. Taxonomy governance plans ensure that the taxonomies are maintained in a way that satisfies current and future needs and provides the maximum return on investment.

Taxonomy governance consists of the policies, procedures and documentation required for management and use of taxonomies within an organization. Successful taxonomy governance establishes long-term ownership and responsibility for taxonomies, responds to feedback from taxonomy users, and assures the sustainable evolution of taxonomies in response to changes in user and business needs.

Taxonomies are never “finished.” Rather, they are living systems that grow and evolve with the business. Taxonomy governance ensures that growth happens in a managed, predictable way.

Taxonomy governance answers the following questions:
  • Who are the taxonomy stakeholders?
  • What are their respective responsibilities?
  • Who is responsible for making changes?
  • What is the process for making changes?
  • How are prospective changes evaluated and prioritized?
  • When are changes made?
  • When are processes reviewed and updated?
The goals of taxonomy governance are similar across organizations but it is important to remember that there is no universal taxonomy governance solution. Successful taxonomy governance works within the context of the organization.

Many of the principles and goals of taxonomy governance are shared with information governance.

A good first step when developing taxonomy governance policies is to examine related information governance policies that already exist within an organization. Re-purposing familiar policies and systems makes both adoption and compliance easier for taxonomy users.

The best governance policies take advantage of existing structure, workflows and management processes while accounting for human and technical resources and constraints. Governance policies provide a strategic framework to guide day-to-day taxonomy management.

The main components of this framework are the taxonomy management organization and the operations they perform. Governance has a role at both strategic and operational levels by defining roles and responsibilities of taxonomy organization members, articulating communication, decision-making and escalation policies and providing protocols for taxonomy maintenance operations. Above all, governance provides accountability for decision-making and operations on both a large and small scale.

Taxonomy Management

Ongoing maintenance and development of a taxonomy is best achieved by a formal organization with well-defined and clearly documented roles, responsibilities, and processes. The Taxonomy Management team should be responsible for both strategic direction and routine administration of taxonomy operations. This team should include high-level decision-makers as well as trained taxonomists and IT if needed. End users of the taxonomy should also be represented in the Taxonomy Management team.

The role of a taxonomy governance team is to ensure that taxonomy management occurs in a systematic, measurable, and reproducible way. It provides a mechanism for managing the needs and concerns of all taxonomy stakeholders and helps maximize the value of taxonomy resources by establishing organization-wide policies for taxonomy development, maintenance and use.

The Taxonomy Management Team manages taxonomy administration and development. As with governance policies in general, the specific makeup of and divisions between teams, as well as the terminology used to describe them, will vary depending on the particulars of organizational structure, history, and goals.

Taxonomy governance focuses on strategic goals and company-wide policies for taxonomy management and use as well as levels of responsibility for different taxonomy stakeholders. These goals and policies are developed by the Taxonomy Governance Team.

Identifying and documenting organization-wide taxonomy use cases is a very important task of taxonomy governance. Taxonomies can potentially be used in multiple business areas. Content strategy, web design and user experience, marketing, customer support, site search and business intelligence are a few examples. Developing tangible, specific use cases helps communicate the taxonomy’s value throughout the organization and is necessary when prioritizing taxonomy-related investments.

Governance policies should also define what taxonomy success, performance, and quality mean. Metrics should validate the quality of a taxonomy implementation through quantifiable, direct measurement of taxonomy performance. Regular assessment ensures that the taxonomy meets business and user needs over the long term.

The ability to share data across systems, improved quality of search results, improved user experience of websites and regulatory compliance resulting from effective record keeping and document management are all examples of benefits that can result from effective taxonomy implementation and management. A goal of governance should be to identify and document benefits of this type that are relevant to the specific organization.

Taxonomy Operations and Maintenance

Ongoing maintenance is a very important aspect of a taxonomy project. Taxonomies must be continually updated to reflect changes in content, competition, and business goals. In the absence of maintenance, taxonomies atrophy and the value they provide is greatly diminished.

Organizations must anticipate the resources needed to maintain the taxonomy and develop effective management processes to realize the maximum value from their taxonomy investment. At this level governance is primarily focused on operational details. It provides the framework for taxonomy operations in the form of guidelines, processes, documentation and a defined organizational structure.

The specific tasks performed as part of taxonomy maintenance consist of a wide range of large and small-scale changes to the taxonomy. Taxonomy staff are also typically responsible for providing training, preparing documentation materials, interacting with IT groups to ensure smooth operation of taxonomy systems and providing expert advice and feedback to business leaders to inform strategic decision-making.

The Taxonomy Change Process

One of the most important purposes of taxonomy governance is to define the organizational taxonomy change process. Governance policies define and document specific taxonomy changes and provide guidance to taxonomy administrators on making those changes.

It is especially important to provide guidance on decision-making authority and escalation processes. Defining and documenting different change types allows rational decisions to be made as to which changes can be routinely handled at the discretion of taxonomy administrators and which changes require higher-level consensus and approval. The first step in defining a taxonomy change process is to categorize taxonomy changes by impact and scale.

An important consideration in categorizing the impact of changes to the taxonomy is that taxonomy data is often used by multiple internal tools and systems. Content management, marketing, web analytics and SEO, product inventory and web publishing systems are just a few potential consumers of an enterprise taxonomy.

Experience shows that the level of engagement with the taxonomy team varies widely between users. To avoid unpleasant surprises, taxonomy administrators should be proactive in tracking users and systems where taxonomies are used. Understanding and documenting both the technical details of how taxonomy data flows to these systems and the specific business use case of various users is an important part of the taxonomy change process and should be addressed in both change processes and communication plans.

Small-scale changes will affect only a single term or small number of terms and will have a minimal impact on users and systems where they are used. Typical small-scale changes are spelling corrections or the addition of individual terms to existing vocabularies.

Taxonomy management staff is usually empowered to make changes of this type as part of routine taxonomy administration. In contrast, large-scale changes will impact large numbers of taxonomy consumers, multiple consuming systems, and/or require a significant commitment of taxonomy management resources for an extended period of time. They require high-level approval with input from the entire information governance team.
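One way to picture this split is a small routing rule; the impact thresholds and role names below are invented for illustration, and a real policy would be set by the governance team:

```python
# Illustrative routing rule for taxonomy change requests. The impact
# thresholds and role names are invented; a real governance policy
# would define its own categories and approval levels.

def route_change(terms_affected, systems_affected):
    """Return who may approve a change, based on rough impact thresholds."""
    if terms_affected <= 5 and systems_affected <= 1:
        return "taxonomy administrator"  # routine, small-scale change
    return "governance team"             # large-scale, needs approval

print(route_change(1, 0))    # a spelling correction
print(route_change(200, 4))  # a major scope expansion
```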

Change Request Process

Typical sources of taxonomy change requests are user feedback, routine maintenance by taxonomy administrators, and new business needs.

User feedback is usually the largest and most important source of small-scale taxonomy change requests. A channel is needed for users to provide feedback and for taxonomy administrators to communicate with users. Interacting with taxonomy users and serving as a general point of contact for taxonomy issues is one of the most important aspects of routine taxonomy maintenance for taxonomy administrators.

Email aliases, bug/issue tracking software, dedicated portals, message boards, and other tools used in a help desk or customer support setting are all potentially useful mechanisms for taxonomy administrators to interact with users. Governance policies should address these needs with a well-defined communications plan.

It is also common for predictable events to have an impact on the taxonomy. Marketing campaigns, product updates, new products, company reorganizations and mergers are a few examples of events that could lead to taxonomy changes. Changes of this type can be significant in terms of scale but they can usually be handled as a routine part of taxonomy maintenance. These events should be identified and relevant change and communication policies developed.

In contrast to small-scale changes, large-scale changes tend to be infrequent and are typically driven by strategic business needs. Major expansions in scope requiring the creation of large numbers of new terms and implementation of significant new systems or technologies are examples of large-scale taxonomy changes that may be needed.

Difficulty and scale of taxonomy changes is dependent on the specific details of its implementation. Management of the taxonomy with a dedicated taxonomy tool versus within a content management system, the capabilities of the tool being used, the number and complexity of taxonomy use cases and the number and characteristics of consuming systems are a few variables that will influence the change process.

Collecting statistics on change requests and taxonomy use should be part of a taxonomy administrator’s routine responsibilities. This data should be reported to the governance team and used to inform strategic decision-making. In the same way, decisions made at the strategic level will impact the prioritization and performance of day-to-day tasks.

Maximizing ROI on Taxonomy Investments

Quality control mechanisms are an important function of governance, especially for businesses that operate in highly regulated environments, but they are not the only, or even the most important, purpose of governance.

The high-level goal of taxonomy governance is to maximize the return on taxonomy investments. The taxonomy governance team establishes strategic goals for the taxonomy and develops organization-wide policies for taxonomy management and use designed to meet those goals.

Goals, policies and procedures should not only be designed to mitigate risks but also to improve organizational performance and capabilities. An enterprise taxonomy is used by many different individuals, groups, and systems and can impact multiple business processes. All of these stakeholders should have insight into taxonomy management processes and a mechanism to provide feedback. Because of the breadth of business processes using the taxonomy it is also important that the governance team include high-level representation to provide strategic guidance and advocacy for taxonomy operations. In return, the governance team must communicate the positive benefits to stakeholders so that policies are more than just vague background noise.

One of the most important tasks of a governance team is to communicate these policies and procedures in a positive way. Governance is often perceived as an enforcement mechanism, and it’s natural for stakeholders to react defensively if they believe that policies are in place because they’re not trusted to produce high-quality work. Processes, standard operating procedures, responsibility matrices, and so on are then viewed as active obstructions to productive work.

Galaxy Consulting has 20 years experience in taxonomy development and taxonomy governance. Please contact us for a free consultation.

Tuesday, January 29, 2019

Content Self-Service

Good content self-service options can provide your organization with significant benefits. Online users can get answers and receive the services they need quickly and efficiently, while your organization can be responsive and efficient in assisting them when they need it.

Online self-service costs a fraction of what assisted support channels do, making it by far the least expensive channel. If it is done well, it can help to ease customer effort, reduce operating costs, and even differentiate your business through superior service delivery.

Many factors drive effective customer self-service, including technology, the user interface, and personalization. However, one of the most powerful things your organization can do to drive effective self-service is developing truly user-friendly content that is both quick and easy to find.

The trick to providing excellent customer service in a self-service content management world is describing the product in the words of the customer.

Getting the taxonomy right means understanding the customer—and recognizing that customers don’t necessarily agree on the terms. Describing content isn’t as easy as it looks. Acronyms can be a problem, since they can mean different things.

Meet the Expectations of Online Users

Self-service systems are only as good as the quality and usability of the information they deliver. The long-standing knowledge management statement “content is king” is particularly true in today’s self-service world, especially when you consider online users’ general self-service expectations:

They may not necessarily know exactly what they need to ask or do, just what they are trying to accomplish. Likewise, they may not always know your organization’s terminology or taxonomy.

They don’t want to spend time looking through lots of information or understanding the details of the self-service environment. They expect very little interaction: the two-word “Google query” is the standard amount of information typically provided initially. Users will generally perform additional clicks to deepen the context of their inquiry (such as scoping searches by specific categories or refining queries) only if and when there is a clear payoff trail to the answer.

Given the quick, concise nature of the self-service environment, it’s critical that customer-facing content be written and structured to meet these expectations. This doesn’t mean you only need to provide a few short FAQs. Once the audience is understood, the principles of effective authoring can be employed to structure many information sources in a consumable way.

Start With the End in Mind

When developing self-service content, focus on the information that customers need, as opposed to the information that you have. For service and support content, here are some techniques that can help you gain insight into information that can be useful online:
  • Ask your support and service staff: People who communicate with customers every day know the types of issues customers ask about, the terminology they use, and how much information they can easily absorb. Since support staff also knows what the top questions are, they are an excellent source of customer-facing insights.
  • Examine your self-service content: Look carefully at the information that is most used online and what might be moved online based on what internal staff recommend. Flag the key information that would most quickly and clearly respond to common queries. Restructure supporting and related information into the background, and link it to the core knowledge objects. Create an easy-to-navigate path to success for common issues.
  • Test search queries and carefully review the results: Take the journey with your online users. Enter the top queries and questions, and navigate them in the self-service system. See what results come back, and whether the titles, content scope, and information format provide the best response. Try variations of queries and browse topics to confirm consistent, predictable results. Query testing is a tried-and-true method of assessing relevancy and defining where to make specific improvements (to technology, the user interface, and/or content tagging and structure).
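Query testing of this kind is easy to automate. The sketch below runs a list of top queries against a toy search function; the index, scoring, and expected results are all stand-ins for a real search system:

```python
# Automated query testing: run the top user queries against a toy search
# function and check that the expected document surfaces. The index,
# scoring, and expected results are stand-ins for a real search system.

index = {
    "reset-password-howto": "how to reset your password step by step",
    "billing-faq": "frequently asked questions about billing and invoices",
}

def search(query):
    words = query.lower().split()
    scores = {doc: sum(w in text for w in words) for doc, text in index.items()}
    return max(scores, key=scores.get)

# Top queries paired with the document users should reach.
cases = {"reset password": "reset-password-howto",
         "billing questions": "billing-faq"}
for query, expected in cases.items():
    assert search(query) == expected, query
print("all top queries pass")
```

Running such a script regularly flags relevancy regressions before users notice them.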
Design Effective Experiences Around Useful Scenarios

While a self-service experience must be clear, simple, and intuitive, it does not have to be shallow or overly simplistic. Many resources and knowledge objects can be melded into the self-service experience. The key is to help users identify the main information pathways they should start on and relate other resources from there. This can be accomplished through a variety of methods:

  • Implement a task-focused taxonomy: This can help users narrow their domain of interest intuitively by matching classification terminology and hierarchy to the most common support tasks.
  • Make clear visual distinctions between primary and secondary information: Using featured markers, icons, starting/landing pages, and clear titling standards can help users see what information is likely to be most relevant and what might be useful as they investigate certain questions further.
  • Organize content types for specific tasks: Most types of information can benefit from standard structuring that makes it clear what type of content users are looking at and how they should expect to use it (e.g., FAQs, How-To’s, Procedures, Diagnostics, Specifications, Promotions).
  • Provide natural transitions to other locations, information, or assisted channels: Leverage technology, where possible, to carry the context of a self-service interaction (the query, categorization scope, and relevant details about the user) forward into the next channel, such as chat, email, or a call into the contact center. This can accelerate the user’s path to the answer by helping route the request effectively.
Ultimately, users are apt to like and use self-service when it’s fast, instinctive, and provides the information or services they need. Given the potential benefits of self-service, it’s well worth the investment to assess, structure, tag, and deliver knowledge in the most intuitive way possible. It really still is all about the content!

Galaxy Consulting has 20 years experience in content management and content self-service. Please call us today for a free consultation!

Friday, December 21, 2018

Records Management Challenges

Records Management (RM) is supported by mature records management systems. However, the data explosion is raising new concerns about how RM should be handled. 

A few ongoing issues include big data, master data management (MDM), and how to deal with unstructured data and records in unusual formats such as those contained in graph databases.

Records are kept for compliance purposes and for their business value and sometimes because no process has been implemented for systematically removing them. 

There are continuing struggles with the massive volume of big data. IT and legal have different priorities about what to keep, and getting rid of data makes IT nervous, but there are times when records should be dispositioned.

Data stored in data lakes is mainly uncontrolled and typically has not had data retention processes applied to it. Data quality for big data repositories is usually not applied until someone actually needs to use the data. 

Quality assurance might include making sure that duplicate records are dealt with appropriately, that inaccurate information is excluded or annotated and that data from multiple sources is being mapped accurately to the destination database or record. In traditional data warehouses, data is typically extracted, transformed and loaded. With a data lake, data is extracted (or acquired), loaded and then not transformed until required for a specific need.
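The extract-load-transform pattern described above can be illustrated with a toy data lake; the records and quality rules below are invented for illustration:

```python
# Toy "transform on read" pattern: raw records are loaded into the lake
# as-is, and quality rules (dedup, exclusions) are applied only when a
# consumer reads the data. The records below are invented.

raw_lake = [
    {"id": 1, "amount": "100"},
    {"id": 1, "amount": "100"},  # duplicate record
    {"id": 2, "amount": "250"},
    {"id": 3, "amount": None},   # incomplete record, excluded on read
]

def read_transformed(lake):
    seen, out = set(), []
    for rec in lake:
        if rec["id"] in seen or rec["amount"] is None:
            continue  # deduplicate and drop bad records at read time
        seen.add(rec["id"])
        out.append({"id": rec["id"], "amount": int(rec["amount"])})
    return out

print(read_transformed(raw_lake))
```

A traditional warehouse would apply these rules before loading; here they run only when the data is actually needed.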

MDM is a method for improving data quality by reconciling inconsistencies across multiple data sources to create a single, consistent and comprehensive view of critical business data. The master file is recognized as the best that is available and ideally is used enterprise wide for analytics and decision making. But from an RM perspective, questions arise, such as what would happen if the original source data reached the end of its retention schedule.

A record is information that is used to make a business decision, and it can be either an original set of data or a derivative record based on master data. The record is a snapshot that becomes an unalterable document and is stored in a system. Even if the original information is destroyed or transformed, the record lives on as a captured image or artifact. Therefore the “golden record” that constitutes the best and most accurate information can become a persistent piece of data within an RM system.

Unstructured data challenge

A large percentage of records management efforts are oriented toward being ready for e-discovery. This is much more of a problem in the case of unstructured data than for MDM. MDM has gone well beyond the narrow structure of relational databases and is entering the realm of big data, but its roots are still in the world of structured databases with well-defined metadata classifications, which makes RM for such records a more straightforward process.

The challenge with unstructured data is to build out the semantics so that the content management or RM and the data management components can work together. In the case of a contract, for example, the document might have many pieces of master data. It contains transactional data with certain values, such as product or customer information, and a specialist data steward or data librarian might be needed to tag and classify what data values are represented within that contract. With both the content and the data classified using a consistent semantic, it would be much simpler bringing intelligent parsing into the picture to bridge the gap between unstructured and structured data. Auto-classification of records can assist, although human intervention remains an essential element.
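Auto-classification can be as simple as matching documents against a controlled vocabulary; the sketch below uses an invented two-term vocabulary and naive substring matching, far simpler than a production classifier:

```python
# Naive auto-classification: tag a document with taxonomy terms by
# matching keywords. The vocabulary is invented, and real systems use
# far richer semantics, but the shape of the problem is the same.

vocabulary = {
    "contract": ["agreement", "party", "term"],
    "invoice": ["invoice", "amount due", "payment"],
}

def auto_classify(text):
    text = text.lower()
    return sorted(term for term, keywords in vocabulary.items()
                  if any(k in text for k in keywords))

print(auto_classify("This Agreement is made between Party A and Party B"))
```

As the article notes, human review of the resulting tags remains essential.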

Redundant, obsolete and trivial information constitutes a large portion of stored information in many organizations. The information generated by organizations needs to be under control whether it consists of official records or non-record documents with business value. Otherwise, it will accumulate and become completely unmanageable. On the other hand, if organizations aggressively delete documents, they run the risk of employees creating underground archives of information they don’t want to relinquish, which can pose significant risks. Companies need to approach this with a well-defined strategy.

The system should follow a “five-second rule,” allowing employees to easily save documents using built-in classification instead of a lot of manual tagging. The key is to make the system intuitive enough for any employee to use with just a few seconds of time and a few clicks of the mouse. In addition, the value of good records management needs to be communicated and "sold" so employees understand that it can actually help them with their work rather than being a burden. A well-designed system hides the complexity from users and puts it in the back end. It is more difficult to set up this type of system initially, but as more information is created, the importance of managing it also increases, in order to reduce costs and risk.

Graph databases - a different kind of record

Graph databases store information in a way that focuses on the relationships among data elements. Those representations could include networks and hierarchies as well as other relationships among nodes. 

Graph databases are designed to persist data in a format that highlights relationships among data elements. A graph might include customers, orders, products and promotions. The network itself as represented in the graph database might be a useful record. A network could show relationships that indicate fraudulent activities, and those networks could be saved as records.
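A minimal sketch of the idea, using only an adjacency list and a breadth-first traversal. The node names are hypothetical, and a real graph database would of course provide this kind of traversal natively.

```python
from collections import deque

# Hypothetical relationship graph: edges link customers, orders and devices.
edges = [
    ("cust_A", "device_1"), ("cust_B", "device_1"),  # two customers share a device
    ("cust_B", "order_77"), ("cust_C", "order_90"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def network_of(node):
    """Breadth-first traversal: everything connected to a flagged node."""
    seen, queue = {node}, deque([node])
    while queue:
        for neigh in graph[queue.popleft()]:
            if neigh not in seen:
                seen.add(neigh)
                queue.append(neigh)
    return seen

# The connected component around cust_A could be saved as a record of the network.
print(sorted(network_of("cust_A")))
```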

Graph databases are used in several other ways to aid records management. Many organizations today are creating their own internal knowledge graphs that represent records as a connected data model to aid search and discovery. This knowledge graph speeds up risk analysis and compliance determination. Graph databases are also used within the legal industry to speed up legal research associated with a case. A graph of case files, opinions and other documents makes it easy for researchers to identify information that may be material to a case.

The RM Struggle

Studies of records management consistently show that only a minority of organizations have a retention schedule in place that would be considered legally credible and that some have no schedule at all. 

Even if a retention schedule is in place, compliance with it is often poor. Some go so far as to say that RM is facing a crisis. There is a battle shaping up between those who essentially want to keep everything forever because they might be able to extract business value from it and those who want to use records and information management to effectively get rid of as much information as soon as possible. It is very important to reconcile these differences.

From a business perspective, the potential upside of retaining corporate records so they can be used to gain insights into customer behavior, for example, may outweigh the apparent risks that result from non-compliance. Storage costs have drastically decreased and are often bundled into other paid services such as messaging, collaboration and large-scale analytics services in the cloud. The cost of combing through and removing unnecessary information can be high. I have seen a number of scenarios in which companies have undertaken projects to get rid of data, and they have found it more expensive than just keeping it.

RM is often treated as a legal compliance function whose highest measurable value comes from getting rid of outdated and useless records. However, its highest value actually lies in its framework for understanding and classifying information so that its business value can be exploited. RM professionals who realize this will not only survive, but also thrive in a world of increasing information complexity and volume.

If organizations view RM as a resource rather than a burden, it can contribute to enterprise success. In many respects, the management of enterprise information is already becoming more integrated and less siloed. For example, most enterprise content management (ECM) systems now have RM functionality. The same classification technology used for e-discovery is also used for classification of enterprise content. Seeing RM as part of that environment and recognizing its ability to enrich the understanding of business content as well as ensuring compliance can support that convergence.

Governance can be a unifying technique that provides a framework to encompass any type of information as it is created and managed. Governance is a set of multidisciplinary structures, policies and procedures to manage enterprise information in a way that supports an organization’s near-term and long-term operational and legal requirements. It is important to consider the impact of all forms of information, from big data to graph data, but within a comprehensive strategy of governance, the changing environment for RM becomes more tractable.

Galaxy Consulting has over 20 years experience in records management. Please contact us today for a free consultation.

Tuesday, October 30, 2018

Create Value in Your Content

Organizations are buried in content. Some content is important, some is out of date, and some content is vital for an organization to survive and thrive. Content management can provide great help in ensuring that your organization gets the most value out of its content.

Managing content is similar to managing all the things that accumulate in your house. The longer you live in one place, the more things you accumulate. Most families don’t have retention policies in place for their personal things. They don’t write policies and procedures regarding furniture, electronics, or other things that they meant to fix several years ago and that are now gathering dust in the closet. At some point, the closet needs to be cleaned out or there will be no more space in the closet, and then in the house.

Why do people decide to get rid of the stuff in their houses? It might be because they simply decided that they own too much, or they are tired of paying extra money to keep things they are not using at a storage facility, or they ran out of space in their house, or they have been urged by a family member to stop what looks like hoarding behavior, or they’ve decided to downsize and move to a smaller house. Whatever the reasons, the decision to divest themselves of personal goods leads people to donate their goods to a charity or hold a massive yard sale, or maybe both.

Organizations do not hold yard sales. The content stored in organizations is frequently, but not always, in digital form. Enterprises are better at replacing outdated computers and worn out desk chairs than deciding which pieces of content are no longer relevant to running the business. Many organizations may not even really know how much content they own or where it resides.

Cleaning Out the Content House

Organizations need enterprise content management to keep their content fresh and to get the most value out of it. They need to clean out their content closets and their information garages from time to time.

Enterprise content management is not a new concept. Companies have been accumulating information for years, and managing their content has been a part of business functions in many organizations, but not in all. For many, content management has been designed by IT departments and driven by regulatory requirements. It has concentrated on compliance, on meeting the rules imposed from outside.

Regulatory compliance remains a huge factor in doing business, and enterprise content management plays a huge role in ensuring that organizations meet compliance requirements.

When it comes to handling inactive content, companies need to consider archiving, which means retiring the content rather than keeping it active. A retention schedule would greatly help in this task.

A Content Management System (CMS) would help to automate the process of content management. The human element is also very important in content management. It is important to humanize the experience of working with content.

The humans interacting with content could be employees, customers, suppliers, partners, and regulators. For content management to succeed, people must enjoy a digital, experience-based interaction. Behind the scenes, a content management system should organize content so that content findability is enhanced without too much work on the part of the person seeking information.

Breaking Down Information Silos

One barrier to content usability is information silos. Today’s content users do not want to constantly switch from one silo to another or from one user interface to another. Multiple systems are simply not intuitive and do not foster the collaboration needed to effectively run a business. So, a content management system should concentrate on changing the silo mentality, breaking down silos of content, and digitally connecting them.

Silos started with people. They store data as their department, their piece of the enterprise, thinks it should be done. They don’t take a holistic view of content. As a result, content ends up in many different systems.

It is important to manage content in place, where it is at the moment, and add value to it in a modular way. To add value modularly, you need to think about the differences between different types of content. Modular content management lets you use the content you need in real time.

Digital Transformation

A major change in the content management landscape has been digital transformation. It is no longer just about dealing with regulations. Newer technologies, such as cloud computing and mobile devices, put unprecedented pressure on those responsible for content management. The IT challenges are real, and enterprise content management can provide significant assistance in the process.

Digital transformation affects all organizations. Think of the things people used to do in person that they now do online. The retail industry has been hugely affected by technology. Online ordering, price comparisons, product reviews, and mobile payments are now the ordinary way people buy books, electronics, household goods, and almost everything else. Even groceries can be ordered online and delivered to your door.

Digital information, with no paper equivalent, is increasingly the norm for enterprise content. It can be stored in multiple locations by numerous individuals working for different departments. Digital information creates interesting challenges for content management.

The world is becoming more and more digital. Medical records are transitioning to electronic versions. Many patients can now contact their doctors electronically, ask questions by email or in a secure forum-type environment, and view their digital records. Travelers routinely get their airline boarding passes delivered to their phones and choose their hotel room digitally before their arrival in the hotel lobby. Tickets for movie theaters and concerts have gone electronic. Restaurant reservations have also become a digital activity. Any digital activity implies content management.

It is not just consumer commerce that is experiencing digital transformation. Regulatory agencies expect reports in digital form. Suppliers and manufacturers communicate digitally, and complex supply chains are managed, controlled, and updated with digital systems.

Sensors track delivery trucks so that the trucking company knows where they are and when shipments are expected to arrive at their destination. Retailers and wholesalers alike use digital data for product improvement; fast and tailored distribution to stores, warehouses, and manufacturing facilities; and trend tracking.

Digital transformation makes businesses more agile. Using digital content in an agile way allows companies to respond quickly, identify new opportunities, discard what is not working, and find new avenues of profitability. Access to information with the immediacy of digital data gives those who understand it an enormous competitive advantage.

Non-Digital Content

Even though the digital transformation is real, paper documents have not disappeared. Many organizations continue to maintain paper repositories.

Organizations make their best effort to automate and remove paper documents from core business functions and processes.

One example is clinical trials in the pharmaceutical industry. Clinical trials are essential to the new drug approval process and even 10 years ago were usually paper-based, with patients filling out diaries by hand. Today, those paper accounts have largely been replaced by electronic patient reporting to capture information, which leads to more timely and accurate responses by the patient and higher quality data for the pharmaceutical company to analyze.

However, there are still a lot of paper-based documents.

Even though paper documents are digitized at some point during the workflow sequence, they begin as paper and are often stored as paper.

Organizations are moving from paper documents to digital, but this shift is not complete. Thus, a content management system must acknowledge the existence of paper documents.

Security Concerns

Security is foremost in organizations. Part of content management is ensuring that sensitive data is secure in an organization. This means identifying sensitive content and setting its access permissions accordingly.

Safeguarding content is important on both the consumer and the enterprise level. Today’s content is more distributed than ever before. It is not locked up, secured, and then made compliant. Instead, content exists, and sometimes is created, outside the enterprise. However, organizations must still secure it.

Security is also a function of compliance. Compliance must be in place for all systems. Keeping up with what constitutes compliance and with updates in regulatory requirements should be mandatory.

What about business rules? Every organization has some business rules, and enterprise content systems must conform to these rules. However, even though rules sometimes seem hardwired, they can change and systems need to change when rules change.

Impact on Enterprise Content Management

Digital transformation profoundly affects enterprise content management. Cleaning out the information closets starts with identifying what the organization needs today and what is no longer needed but might be needed in the future.

Organizations must execute a set of strategies to ensure the content clutter is under control. Enterprise content management is not simply a technical issue. It is rooted in the human element.

Becoming a digital enterprise and building a digital platform comes with the territory of digital transformation. Still, essential elements of content management must be addressed. Determining value of stored content is extremely important.

Galaxy Consulting has over 20 years experience in content management. Please call us today for a free consultation.

Monday, July 30, 2018

Ten tips to unlock the knowledge-ready advantage

Effective Knowledge Management (KM) is very important for an organization, especially for a service organization that has customer support staff answering phone calls and chats. KM drives performance and innovation. It can help companies solve critical problems.

Here are 10 tips to optimize knowledge management in your organization.

1. Agile KM helps you stay focused and deliver quick results. Agile methods can contribute to KM in a number of ways. Pilot projects are a very good way to test KM initiatives, their direction, and their assumptions. KM challenges today include keeping up with operational tempo, adjusting to or creating new behavior, and evolving new metrics. Agile KM helps an organization develop new possibilities, new mindsets, and new capabilities.

KM is a long-term journey, but you also need to show quick results. After action review (AAR) methods are an example of a quick win.

2. Tie knowledge to learning. It is not enough to promote a knowledge sharing culture. You also need to promote a learning culture. KM metrics will also have to evolve and cover a range of activities and impacts, such as user adoption, knowledge sharing, user benefits, and customer satisfaction. Different kinds of learning tools and channels can be explored. Gamification, reward systems, and even rap songs about KM features can be very helpful.

3. Map the different types of leadership and narratives. This will create a clear picture of what you currently have in your KM program and what you are missing.

4. Build bridges between KM, big data, and data analytics. KM and data analytics are connected. In consumer, corporate, and industrial workplace contexts, analytics can yield useful insights if the right questions are asked, and that is where KM can help.

5. KM education and industry need to be tied together. KM education helps KM practitioners stay on top of global trends, findings, and industry best practices.

6. Let people express themselves in their own creative ways. While much in knowledge capture and communication tends to focus on the typed or written word, people also express themselves in multiple other ways. KM visioning exercises have shown new insights when people express themselves through doodles, drawings, figures, PostIts, flip charts, cards, audio, video, and even skits.

7. Ensure knowledge succession. Knowledge must be passed on and remain sustainable. Organizations need to focus not just on creating knowledge but also on its implications and immediate actions. Innovation sits at the intersection of local knowledge, organizational knowledge, academic knowledge, and stakeholder knowledge. Aligned conversations help companies keep the focus on strategic knowledge in the long run.

8. Explore weak ties and strong ties. Organizations certainly must share knowledge and build on collaboration but they also need to master a number of other factors. For example, there are advantages as well as challenges to virtual teams: geographic dispersion (but lack of shared context), online reach (but less richness), structural dynamism (but less organization) and national diversity (but also culture clash). Weak ties give access to novel knowledge and information, but it is the strong tie that will lead to transfer of the innovative idea.

9. Inter-organizational KM must lead to co-creation. Mature KM practitioners are extending their initiatives across organizational boundaries to share knowledge between organizations. But that should extend beyond sharing and cooperation to collaboration and co-creation. Co-creation is usually with a smaller group than in crowdsourcing and includes active involvement of customers.

10. Focus on formal as well as informal knowledge sharing activities. Focus not just on knowledge assets in the “forefront” (e.g., documents) or in the “background” (e-mails, PostIts) but also those “out of sight” (stories) and in online discussion. Acknowledge and identify backroom knowledge sharing in informal clusters. There also needs to be a healthy attitude toward learning from failure.

Future KM trends include a continuing emphasis on collaboration, alignment with business strategy, a blend with analytics, and the rise of the multigenerational workforce.

Galaxy Consulting has over 20 years experience in Knowledge Management. Contact us today for a free consultation.

Friday, June 29, 2018

Information Security

Data is not just critical to business; it is core. It is the essence of a company’s function. Big data is a major part of that flow, and the more customer data that is out there, the more it needs protection.

As big data gathers momentum, incorporating security into planning and processes in the early stages of a project is becoming more important. The big data revolution is just getting started and will present major security challenges if its data management is not carefully planned.

Formerly the exclusive domain of IT, information security has now become the domain of everybody including content and knowledge managers.

Major retailers and government agencies have suffered data breaches, denials of service and destructive intrusions. Millions of individuals have been affected, and organizations are now forced to devote more resources to prevention and remediation. Everyone in a company, from consumers to CEOs, has become acutely aware of the hazards of failing to protect information.

Every business user and anyone accessing data needs to be aware of security. The advent of the mobile worker and the proliferation of cloud technology have added a new dimension.

People want to run their businesses on a tablet, and they can do that, but information managers need to understand how to do it safely. Much of the data in an enterprise exists only at endpoints, which increasingly are mobile devices.

According to a study by IDC, 75% of the U.S. workforce is mobile, with most of those employees having more than one mobile device. But those devices are at risk: about 5% to 10% of laptops are lost each year, according to a study from Ponemon Institute, and about one-third of them contain unencrypted sensitive or confidential data. In another study, one in six respondents reported having a mobile device lost, stolen or destroyed. In addition, a lot of intellectual property is stored on mobile devices, and in the event of litigation, the company has to be able to locate it.

Despite the convenience of mobile devices, their use creates well-recognized conflicts with security, especially in the face of increased frequency of BYOD (Bring Your Own Device).

Even when users hold onto their devices, security is far from guaranteed. Data is becoming more dispersed and fragmented. Even when companies do not know where the data is flowing, they still have an obligation to protect it. Information sharing is the norm rather than the exception today, both among employees within an organization and with outside organizations.

Along with mobile devices, the supply chain is a point of vulnerability. Once supply chain information leaves your organization, you don’t know what is being shared and what is being protected. Tracking it is a massive task and has often been managed by departments well outside of IT, such as procurement. It’s not just information about material goods that enters the supply chain; intellectual property associated with the products also goes to third-party suppliers. Information, such as patent data or formulas for pharmaceuticals, is shared with lawyers and accountants.

Analyzing the risks to information in the supply chain can help focus resources on mission-critical data. Companies should work with their vendors to ascertain how they are protecting information, and to consider putting security requirements into the contracts they write with suppliers.

Business and IT should start with a conversation to explain what protection the company has in place and what measures are being taken. Then, the business side can work with IT to develop business cases based on the impact of their operations and illustrate the ROI for protection of their functions. That can help IT by showing the costs of downtime and clarifying what needs to be protected.

Technology can help overcome security problems. For example, an application can provide continuous backup without users knowing that it is running; it can also enforce encryption without the user’s awareness and remotely wipe laptops to clear the data. There are products that focus on encryption and tokenization to secure the data itself rather than the network environment. Tokenization provides visibility into the flow of data without putting the data at risk.
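To make the tokenization idea concrete, here is a minimal vault sketch. The class and method names are invented for illustration; real tokenization products add key management, format-preserving tokens, and hardened storage for the vault itself.

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: swap sensitive values for random tokens.

    The mapping lives only in the vault, so downstream systems can track
    the flow of data using tokens without ever seeing the real values.
    """
    def __init__(self):
        self._to_token = {}
        self._to_value = {}

    def tokenize(self, value):
        if value not in self._to_token:
            token = "tok_" + secrets.token_hex(8)  # random, reveals nothing
            self._to_token[value] = token
            self._to_value[token] = value
        return self._to_token[value]

    def detokenize(self, token):
        return self._to_value[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")  # hypothetical card number
print(t)  # safe to pass around; only the vault can map it back
```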

A new product called Protegrity Avatar for Hortonworks is designed to secure individual data elements while managing and monitoring the data flow in Hortonworks, an enterprise Hadoop data platform.

In most cases, organizations need to deploy more than one security solution, because the threats are many and varied. Most companies use a best-of-breed strategy, picking out the strongest solutions for their needs.

Data security is about data protection, but it is also about continuity and availability. Protecting information with technology is important, but it is not a substitute for information governance within a company.

Achieving the right balance between business needs and information security requires a fundamental shift in attitude. Rather than thinking of data as something a company owns, business owners need to come to terms with the fact that they are custodians of data that needs to flow and be managed.

A legislative proposal announced by the White House in mid-January is designed to increase data security by promoting information sharing, strengthening law enforcement for cyber crimes and requiring that data breaches be reported promptly.

Companies have been concerned about information sharing because of the risk of liability for violating individuals’ privacy. The bill addresses that issue by requiring compliance with privacy guidelines, including removal of unnecessary personal information. The legislation would simplify and standardize the requirements for reporting data breaches. Currently, the laws exist at the state level, but not all states have them, and those that exist are not consistent.

Whether defending their website from intrusions, keeping applications running or protecting data elements, organizations are faced with an increasing number of threats and a complex security environment. Awareness at every level of the extended enterprise will be essential to minimizing the adverse impact of security incidents.

Galaxy Consulting has 18 years experience in information security and governance. Please call us for a free consultation.

Wednesday, May 16, 2018

Yammer and SharePoint

Enterprise social network vendor Yammer was a large and fast growing player when Microsoft acquired it in late 2012. Yammer has users in more than 150 countries, and the interface is localized into more than 20 languages.

At its core, Yammer is a micro-blogging service for employees to provide short status updates. Whereas Twitter asks, “What’s happening?” Yammer asks, “What are you working on?”

Over the years, Yammer’s functional services have expanded a bit to include the ability to express praise for co-workers, create polls, share documents and provision smaller discussion groups. In practice, however, some of those supplementary services aren’t as rich or well-integrated into SharePoint as you might find in competing products.

And you can find a lot of competing products: from collaboration suites that offer tightly integrated social networking services to supplemental “social layer” offerings that compete directly with Yammer.

For this reason, it would be good to ask this question: Is Yammer truly the best social layer for your enterprise?

When Microsoft acquired Yammer shortly before releasing SharePoint 2013, the deal sent shock waves through the marketplace. Soon Microsoft started recommending that you hide SharePoint’s native social services and use Yammer instead.

Microsoft now promotes Yammer as a social layer over all your Microsoft systems, especially Office 365. Yammer usage can explode within an enterprise that heretofore offered no micro-blogging services, let alone any enterprise social network. People happily check in and often find new or long-lost colleagues in the first few days and weeks.

Yammer boasts a huge customer community. Customers get access to the quite sizable Yammer Community Network, where licensees share their successes, problems, questions and tips with the community as a whole. A small but growing apps marketplace rounds out the picture of a vibrant ecosystem around Yammer.

Smaller departments use Yammer to stay in touch, but enterprise-wide conversations typically decrease. Usage also drops off when employees struggle to place the service within the regular flow of their daily work. Yammer becomes yet another place you have to go, rather than a service you exploit as part of your regular workflow.

In a mobile environment, Yammer and SharePoint usage entails at least two separate native clients.

Yammer has a key application: social questions and answers. When a user starts to type a question, Yammer uses real-time search to auto-suggest already asked questions. That is useful and helps eliminate duplicate content.
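The idea behind that kind of duplicate detection can be sketched with simple fuzzy matching. This is not how Yammer's search actually works, just a rough stand-in using Python's standard library, and the question list is invented.

```python
import difflib

# Hypothetical set of already-asked questions in the network.
asked = [
    "How do I reset my VPN password?",
    "Where is the expense report template?",
    "How do I book a conference room?",
]

def suggest(partial, questions, cutoff=0.4):
    """Suggest similar existing questions while the user types.

    Case-insensitive fuzzy matching stands in for a real search engine.
    """
    lowered = {q.lower(): q for q in questions}
    matches = difflib.get_close_matches(partial.lower(), list(lowered), n=3, cutoff=cutoff)
    return [lowered[m] for m in matches]

print(suggest("how to reset vpn password", asked))
```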

However, there are no ratings for answers and the original questioner cannot declare an authoritative answer. Search is not really ideal, so as answers build, they become harder to leverage, especially given the scarcity of curation services. Yammer works less for knowledge management and more for really simple, quick responses to simple questions.

Another key Yammer social application is communities of practice. Groups are either public or private. You might also have separate groups in Exchange and SharePoint (via Delve), as well as Communities in SharePoint.

There is single sign-on to Yammer with Office 365.

Larger enterprises find Yammer better suited as a supplement to formal collaboration and social networking efforts rather than as the center. Its simplistic handling of files and limited search facilities limit Yammer’s ability to serve as much more than a simple micro-blogging service.

If you are looking for pure micro-blogging services to communicate across your enterprise and are not looking for ready-to-use applications tailored for specific goals and processes, Yammer offers an obvious alternative to consider, especially for those whose SharePoint plans rest primarily on the Office 365 edition.

Galaxy Consulting has experience with all versions of SharePoint and with Yammer. Please contact us today for a free consultation.

Monday, March 26, 2018

E-Discovery and its Stages

Every organization should take necessary steps to be prepared for E-Discovery. What is E-Discovery?

Electronic discovery or E-Discovery refers to discovery in legal proceedings, such as litigation or government investigations, where the information sought is in electronic format. This information is often referred to as electronically stored information or ESI.

Electronic information is considered different from paper information because of its intangible form, volume, transience, and persistence. Electronic information is usually accompanied by metadata that is not found in paper documents and it can play an important part as evidence. For example, the date and time a document was written could be useful in a copyright case. The preservation of metadata from electronic documents creates special challenges to prevent its destruction.

E-Discovery Stages

Identification

The identification phase is when potentially applicable documents are identified for further analysis and review. Failure to issue a written legal hold notice whenever litigation is reasonably anticipated may be deemed grossly negligent. This is why it is very important to implement legal holds on specific electronic information.

Custodians who are in possession of potentially relevant information or documents should be identified. To ensure a complete identification of data sources, data mapping techniques can be used. Since the scope of data can be overwhelming in this phase, attempts are made to reduce the overall scope during this phase, such as limiting the identification of documents to a certain date range or search term(s) to avoid an overly burdensome volume of information to be on legal hold.
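The scope-reduction step can be sketched in a few lines. This is a toy example with invented documents, assuming scope is defined by a date range plus a list of search terms, as described above; real identification tools work against indexed data sources, not in-memory lists.

```python
from datetime import date

# Hypothetical document index entries for a legal-hold sweep.
documents = [
    {"path": "contracts/msa.docx", "modified": date(2017, 5, 2), "text": "master services agreement"},
    {"path": "notes/lunch.txt",    "modified": date(2017, 6, 1), "text": "team lunch plans"},
    {"path": "contracts/sow.docx", "modified": date(2015, 1, 9), "text": "statement of work"},
]

def in_scope(doc, start, end, terms):
    """A document is in scope if it falls in the date range and matches any term."""
    return start <= doc["modified"] <= end and any(t in doc["text"] for t in terms)

hold = [d["path"] for d in documents
        if in_scope(d, date(2016, 1, 1), date(2018, 1, 1), ["agreement", "work"])]
print(hold)  # only documents inside the date range that match a term
```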

Preservation

A duty to preserve begins upon the reasonable anticipation of litigation. During preservation, data identified as potentially relevant is placed in a legal hold. This ensures that the data cannot be destroyed. Care should be taken to ensure this process is defensible, while the end goal is to reduce the possibility of data destruction. Failure to preserve data can lead to sanctions. Even if the court rules the failure to preserve to be mere negligence, it can require the accused party to pay fines if the lost data puts the other party at an undue disadvantage in establishing its defense.

Collection

Once documents have been preserved, collection can begin. Collection is the transfer of data from a company to its legal counsel, who will determine the relevance and disposition of the data. Some companies that deal with frequent litigation have software in place to quickly place legal holds on certain custodians when an event (such as a legal notice) is triggered and to begin the collection process immediately. The size and scale of this collection is determined by the identification phase.

Processing

During the processing phase, native files are prepared to be loaded into a document review platform. Often, this phase also involves the extraction of text and metadata from the native files. Various data reduction techniques are employed during this phase, such as de-duplication. Sometimes native files will be converted to a paper-like format (such as PDF or TIFF) at this stage, to allow for easier redaction and labeling.
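De-duplication is commonly done by hashing file contents, so that byte-identical copies held by different custodians are reviewed only once. The sketch below illustrates the idea under assumed data; the file names and contents are hypothetical, and real processing tools also handle near-duplicates and email threading, which simple hashing does not cover.

```python
import hashlib

# Hypothetical collected files: path -> raw bytes
collected = {
    "custodian_a/memo.txt":      b"Quarterly results attached.",
    "custodian_b/memo_copy.txt": b"Quarterly results attached.",  # exact duplicate
    "custodian_b/notes.txt":     b"Meeting notes from March.",
}

def deduplicate(files):
    """Keep one file per unique content hash; later byte-identical copies are dropped."""
    seen = {}
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        seen.setdefault(digest, name)  # first occurrence wins
    return sorted(seen.values())

print(deduplicate(collected))  # ['custodian_a/memo.txt', 'custodian_b/notes.txt']
```

In practice the dropped duplicates are not discarded outright; review platforms record which custodians held each copy, since that fact can itself be relevant.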

Modern processing tools can also employ advanced analytic tools to help document review attorneys more accurately identify potentially relevant documents.

Review

During the review phase, documents are reviewed for responsiveness to discovery requests and for privilege. Different document review platforms can assist in many tasks related to this process, including the rapid identification of potentially relevant documents, and the sorting of documents according to various criteria (such as keyword, date range, etc.). Most review tools also make it easy for large groups of document review attorneys to work on cases, featuring collaborative tools and batches to speed up the review process and eliminate work duplication.

Production

Documents are turned over to opposing counsel, based on agreed-upon specifications. Often this production is accompanied by a load file, which is used to load documents into a document review platform. Documents can be produced either as native files, or in a paper-like format (such as PDF or TIFF), alongside metadata.

Types of ESI

Any data that is stored in an electronic form may be subject to production under common E-Discovery rules. This type of data can include email and office documents, photos, video, databases, and other file types such as raw data.

Litigators may review information from E-Discovery in one of several formats: printed paper, native files, or a paper-like format such as PDF files or TIFF images. Modern document review platforms accommodate the use of native files and allow them to be converted to PDF and TIFF files. Some archiving systems apply a unique code to each archived message or chat to establish authenticity. These systems prevent original messages from being altered, deleted, or accessed by unauthorized persons.
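The "unique code" such archiving systems apply is typically a cryptographic hash of the message. A minimal sketch, using an assumed example message rather than any particular vendor's scheme:

```python
import hashlib

def fingerprint(message: bytes) -> str:
    """Derive a unique code for an archived message; any alteration changes the code."""
    return hashlib.sha256(message).hexdigest()

original = b"Please review the attached contract by Friday."
code_at_archive_time = fingerprint(original)

# Later, authenticity can be checked by recomputing the code and comparing.
assert fingerprint(original) == code_at_archive_time                      # unaltered: match
assert fingerprint(b"Please review by Monday.") != code_at_archive_time   # altered: mismatch
print("authenticity check passed")
```

Because even a one-character change produces a completely different hash, a stored code that still matches the message is strong evidence the message has not been tampered with since archiving.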

Because E-Discovery requires the review of documents in their original file formats, applications capable of opening multiple file formats would be very useful.

To prevent data from being inadvertently destroyed, companies should deploy systems that properly preserve data across the enterprise.

Proper retention and management of electronically stored information (ESI) is crucial in every organization in order to be able to comply with the E-Discovery process. Improper management of ESI can result in a finding of evidence destruction and the imposition of sanctions and fines.

We have helped many organizations with their E-Discovery preparedness over the last 17 years. We can do the same for you. Please call us for a free consultation.

Wednesday, February 28, 2018

12 Steps in Knowledge Management

User-centered design is important for successful knowledge management. This approach is also called design thinking. Design thinking can help with process architecture, tools, and a knowledge sharing culture. These are important points for knowledge management improvement:

1. Design thinking emphasizes emotion and empathy for users, experimentation and testing before scaling, and confidence even in the face of uncertainty. Buy-in for KM initiatives increases when adequate empathy has been shown for employees' concerns and when participatory design elements have been used to shape the knowledge management architecture and processes.

2. Design thinking includes a progressive approach to dealing with failure. Mistakes are treated as learning experiences on the way toward a final solution. This can help organizations celebrate not just successes and best practices, but also failures as a source of learning. Many organizations have a repository of best practices.

3. In their haste toward project completion, many companies focus only on the results and final products. Design thinking allows for the creation of extra levels of documentation throughout the project, which may reveal new insights of value to subsequent project teams.

4. Through immersion and interaction, design thinking places a greater emphasis on conversations and thus uncovers deeper information about employee, customer, and business partner expectations and aspirations. The use of customer personas also helps bring more holistic insight into the business modeling process.

5. By focusing first on minimum viable products and only then on full feature sets, design thinking can help avoid feature overload and large failed projects. Knowledge management can help in this regard by capturing best practices of frugal product and service development.

6. Design thinking and agile approaches can be deployed at the requirements-gathering stage, not just the design and deployment stages. Organizations can have conversations with users early on and even help them question their understanding of the problems and solutions. This brings better alignment and leads to new ways of knowledge creation.

7. With its user-centered design philosophy, design thinking brings about better interaction between a company and its employees and customers, particularly in an increasingly digital world where all kinds of assumptions are being made about customers' aspirations and problems. Organizations should work on improved formats of communication and knowledge sharing.

8. By repeatedly questioning basic assumptions behind problems, design thinking helps to structure problems in a more effective manner so that more appropriate solutions emerge. Knowledge management should include not only solving problems in a better and more efficient way, but also choosing which problems to solve.

9. Design thinking blends top-down and bottom-up approaches to problem solving. This can help overcome problems both in KM initiatives that are top-down or led by higher levels of management without adequately factoring in user input, and in initiatives driven entirely by user input with no management support.

10. Find the balance between design thinking and actual design. There are times when employees need to strictly adhere to established strategy, and there are times when fundamental operating assumptions should be questioned in light of changing circumstances and context. Best practices certainly play a big role here, and design thinking can help develop them.

11. Design thinking is not just for designers or product developers. It has been used for better design of information portals, vision alignment in technology companies, more meaningful user experiences, effective customer service, and deeper user engagement in planning and collaboration on projects.

12. Design thinking is the key to innovation in many organizations. Involving users in the design project also helps user adoption of the knowledge management initiative.

The intent to introduce design thinking ideas into knowledge management should be followed by deep research into users' and customers' information-creation and information-seeking behavior. Interaction with them will yield very helpful ideas, which should be integrated and tested repeatedly until an effective design of knowledge management can be finalized and deployed.

Galaxy Consulting has 18 years of experience in applying design thinking to knowledge management. Please contact us for a free consultation.