Monday, March 27, 2023

Enterprise Content Management as SaaS

Software as a service (SaaS) is a software application delivery model where a software vendor develops a web-native software application and hosts and operates (either independently or through a third-party) the application for use by its customers over the Internet.

Two factors are driving the movement to SaaS. One is the significant technical improvement over the last decade. The costs of computer hardware components such as CPUs, storage and network bandwidth have dropped significantly. In the last two years, memory prices have fallen by almost 75% and CPU prices by 50%, while capabilities like processing speed and capacity have increased significantly.

The second factor is that certain software applications are becoming standardized. Due to these improvements, it is now possible to host a software application for thousands of companies with shared hardware and still provide good performance. Likewise, it is practical to accommodate the business process needs of thousands of companies using a single copy of software without having to re-write the software code for each customer.

As more software is being delivered as a service, business users now have greater control over the destiny of their business process improvement efforts. And while many SaaS projects require little from the IT department, IT is increasingly feeling the need to get involved to ensure that integration, security and compliance requirements are met. IT is now taking a more consultative role and acting as a liaison between business and vendor.

Why Is SaaS Ideal for ECM?

An enterprise content management (ECM) solution has many components that need to be assembled even for the simplest of projects. Below is a list of common ECM-related technologies that frequently have to be integrated:

  • Search
  • Email management
  • eForms
  • Workflow
  • Records management
  • OCR
  • Fax management
  • Access control
  • Reporting
  • Electronic signature
  • Viewing and mark-up
  • Version control

In addition to the above, ECM projects require hardware and software such as a database server, a web server and an application server for integration, as well as custom integration code, a plan to maintain all of this and a plan for regular backups. Beyond ease of implementation, ECM benefits from the SaaS delivery model because much of the needed flexibility can be delivered via configuration instead of customization.

On-premises software tends to be heavily customized, yet most of its features typically go unused. The customization is done by expensive staff who must write and maintain software code. Whenever any component of the solution is upgraded, that code has to be rewritten, making the solution very expensive to maintain. By contrast, SaaS configuration works like Lego blocks: most critical business needs can be accommodated, although the system is not "infinitely" customizable.

In the context of ECM, this means capabilities like configurable search: line-of-business users can decide how much weight to give document properties such as the name, the keywords and the actual content. A configurable user interface means that different users can choose which buttons they see. Configurable workflow allows business operations managers to adapt the workflow to their business processes. Configuration is much like building your own Google or Yahoo home page through point-and-click selections instead of writing complex code, and it is far easier and more flexible than customizing code.
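
The configurable search weighting described above can be sketched in a few lines. This is an illustrative toy, not any product's actual implementation; the field names and default weights are assumptions.

```python
# Toy sketch of configurable search relevance: an administrator tunes
# per-field weights (name, keywords, content) instead of writing code.
# Field names and the default weights below are illustrative assumptions.
DEFAULT_WEIGHTS = {"name": 3.0, "keywords": 2.0, "content": 1.0}

def score_document(query_terms, doc, weights=DEFAULT_WEIGHTS):
    """Return a weighted count of query-term hits across the document's fields."""
    score = 0.0
    for field, weight in weights.items():
        text = doc.get(field, "").lower()
        score += weight * sum(text.count(term.lower()) for term in query_terms)
    return score
```

Changing the weight dictionary is the "configuration"; nothing in the scoring code has to change.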

SaaS-based ECM also provides pre-built configurable applications for common business processes that are document-heavy. Examples of such processes include:

  • Accounts payable automation
  • Contract management
  • Proposal management
  • Logistics/POD
  • Mortgage processing
  • Field support
  • Training
  • Legal
  • Architecture
  • Construction
  • Insurance processing
  • Engagement management

Benefits of SaaS

Software as a service has inherent advantages that make it attractive for the vast majority of content management applications. These are the benefits of the SaaS model:

1. Democratization of turnkey software: companies of all sizes can now afford the basic applications that were historically only within the reach of very large organizations.

2. Extremely fast deployment: with SaaS you can be up and running in days, not weeks or months like on-premises solutions. And users are quick to adopt SaaS applications, so your time-to-value will be much faster with a SaaS solution. Unlike on-premises software, there is nothing to install and SaaS requires no hardware, no software, no additional network infrastructure and no IT expenses to maintain that infrastructure. All you need is a browser and an Internet connection.

3. Complete solutions: Typically, all needed components of a SaaS solution are pre-integrated. There is no need to pay for additional modules or expensive and time-consuming integration services.

4. Low cost: SaaS is delivered as a pay-as-you-go subscription. Combined with the savings in hardware, administration, and professional services, the total cost of a SaaS implementation is typically less than half of on-premises software.

5. Low risk: SaaS is easy to get out of, by simply choosing not to renew the services agreement at the end of the initial term. Usually, this is a one-year period. Often, SaaS providers (like SpringCM) will give you a free trial period, allowing you to configure your solution and try it before making the purchase commitment.

6. Easy to administer: SaaS solutions are usually administered by someone in the business unit, not by IT staff. And since there is nothing to install, no hardware, and no network management, the administration of a SaaS solution usually consists of simple configuration and user access management.

7. Easy to use: SaaS applications are always built with the mindset of getting users productive as soon as possible. For example, the SpringCM solution has an intuitive Web interface, easy configuration steps and a quick tutorial designed to get users familiar with the system in minutes.

8. Easy to integrate: thanks to robust Web services, SaaS solutions fit nicely into other business applications. Often, the document management and workflow features can be invoked from within the other application, so the users do not have to learn a new system to take advantage of the added capabilities.

9. Easy to buy: SaaS applications are easier to buy. If you were considering a $500K on-premises system, you not only have to sell the project internally and get the budget approved but also write up a detailed RFP with help from IT and think of all the ways you might use the system. Instead, when acquiring a SaaS-based application, you can try it and see if it solves your immediate problems. You can go from thinking about a solution to go-live in as little as a few weeks depending on the complexity of your requirements.

10. Faster innovation: Because SaaS solutions are multi-tenant and easier for the vendor to support and upgrade, and because a SaaS vendor can very easily find out whether new features will conflict with old ones, SaaS companies typically release new versions every two to three months.

Limitations of SaaS

Although SaaS works for the vast majority of solution areas, there are a few limitations that you should take into consideration.

1. While SaaS works for common business processes, if the business processes are extremely proprietary and very peculiar to the company, then SaaS may not be the right choice.

2. SaaS can handle large volumes of transactions and data, but if your application requires constant transfer of huge volumes of data (terabytes) over the Internet with an expectation of real-time processing, then SaaS may not be the best option.

3. SaaS may not be the right option for your business if your security requirements are such that only your internal IT department can meet them.

4. Although the SaaS model has been in operation for over a decade with millions of users, there are those who prefer the predominant computing model, which is still on-premises software.

Questions to Ask a Vendor

Not all SaaS providers are equal in their ability to satisfy your needs. Not only are there major differences in solution functionality, but there are also differences in each vendor’s ability to deliver and support the type of solution you need. 

Here are a few questions that any qualified SaaS vendor should be able to answer.

1. Are you a true SaaS provider, or simply a "hosted" or "ASP" solution? All genuine SaaS solutions are multi-tenant and this is what brings you many of the benefits of a SaaS solution. A truly multi-tenant SaaS provider can demonstrate that all customers are running the same version of the software using shared hardware.

2. What has been your average outage time over the past 12 months? The answer should be well under 1% of total time. For example, SpringCM provides 99.7% availability for its solution.
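
Availability figures like these translate directly into downtime. A quick back-of-the-envelope calculation (the 99.7% figure comes from the text above):

```python
# Convert an availability percentage into expected downtime per year,
# assuming a full 365-day year of continuous operation.
HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_hours(availability_pct):
    """Expected hours of downtime per year at the given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)
```

At 99.7% availability, that works out to roughly 26 hours of downtime per year; at 99.0%, closer to 88 hours.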

3. How often is new software released? Because of the flexibility and effectiveness of the SaaS architecture, you should expect new product releases much more frequently than with the typical 12-18 month cycle of on-premises software vendors.

4. Is there lag time between a new release and your applying it? If the answer is yes, this is a sign that you may be dealing with a hosted or ASP provider, not a true SaaS. With SaaS, when a new release is available, everyone has immediate access to the new functionality.

5. How much domain experience do you have as a SaaS provider? This question is very important because, whenever possible, you want to deal with a company that really knows how to deliver SaaS solutions, not one who jumped on the bandwagon to take advantage of the rapid growth in the industry.

6. Did you start as a SaaS company? Many of the so-called SaaS vendors are really on-premises software companies who have re-purposed their software code to fit into an on-demand (hosted) model. Such systems do have advantages, but they also include many of the disadvantages of on-premises solutions.

7. Can I try the software for free? Every SaaS provider worth its salt will give you access to its solution for a trial period. This gives you a feel for how the solution will work for you day-to-day. One important consideration is to make sure that any of the content capture, configuration and workflows will be fully available to you as you transition from a trial to a full account.

8. How much customization is required to make the solution usable? The best answer is "none," since SaaS solutions use the concept of "configuration" instead of customization. As we covered earlier, configuration offers many of the same advantages as customization without the significant time and expense.

9. How does your application scale? Certain applications run fine when there are 10 users or 100 users but cannot accommodate larger user populations. Ask your SaaS provider if their application can scale across thousands of companies, not just thousands of users.

10. How will you keep my content and data safe? There is a natural reluctance to trust the maintenance of valuable content (both documents and data) outside the corporate firewall. Good SaaS companies host in facilities with sophisticated power backup, earthquake/flood-resistant and bulletproof construction, biometric security and SAS 70 certification. Although your IT department may not be able to afford such a facility, the SaaS vendor can, because the cost is spread over hundreds or thousands of customers.

Friday, February 24, 2023

Collaborative Knowledge Management

When social networking and collaboration tools and media first emerged as a cultural phenomenon, many companies had a predictable reaction: ignore a new form of communication that, at first glance, could not be influenced, much less controlled. 

However, as social media channels have matured, the most progressive brand stewards recognized that embracing social networks and collaboration tools can enhance customers’ relationships with a brand, and be an invaluable resource for serving those customers better.

Social media’s role in the knowledge economy is evolving rapidly, both within organizations and outside them. Let's look at the best practices that can help companies embrace social media, harvest knowledge from the conversations in their user communities, and apply that knowledge to deliver better customer service:

1. Recognize and reward contributions from the user community.

2. Promote community conversations into knowledge assets.

3. Integrate discussion forums into a seamless support experience.

4. Allow customers to self-direct how they participate in the community.

5. Moderate by exception.

Best Practice 1: Recognize and reward contributions from the user community.

When customers engage with the discussion forums on your support portal, they join a community, just like any other social network. Online communities thrive on recognition and reward. With discussion forums, recognition takes on added importance because the contributions from the community have potential to add value well beyond the forums themselves. Finding high-value contributors, recognizing their efforts and encouraging continued participation is essential to leveraging user-generated content for customer service. 

Recognizing individuals’ contributions to the forums often identifies the relevant information critical to resolving customer issues. Participation in these community conversations often exposes developing trends and needs that feed into product and service enhancements.

The challenge, of course, is volume. Many discussion threads will yield few insights that can be repurposed. 

The answer lies in reputation models and ratings systems that allow community participants, including company moderators, to rate or otherwise identify high-value content and translate those ratings into a points program that builds a reputation for each contributor. As points accrue and reputation grows, "experts" are recognized and granted additional functionality on the forums. Over time, valued contributors are promoted to higher status levels.
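
The points-and-levels mechanism described above can be sketched simply. The event point values and status thresholds here are invented for illustration, not any particular community's rules:

```python
# Minimal sketch of a contributor reputation model: events earn points,
# and accumulated points map to status levels. All values are illustrative.
from collections import defaultdict

POINTS = {"helpful_vote": 2, "moderator_pick": 10, "accepted_solution": 15}
LEVELS = [(0, "Member"), (50, "Contributor"), (200, "Expert")]

class ReputationTracker:
    def __init__(self):
        self.points = defaultdict(int)

    def record(self, user, event):
        self.points[user] += POINTS[event]

    def level(self, user):
        score = self.points[user]
        label = LEVELS[0][1]
        for threshold, name in LEVELS:
            if score >= threshold:
                label = name
        return label
```

In a real deployment, reaching a higher level would also unlock additional forum functionality, as the text describes.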

You can complement recognition with tangible reward programs to further convey status and deepen the customer’s brand experience.

Best Practice #2: Promote community conversations into knowledge assets.

Reputation models can serve another function. As participants build reputation, you may grant them the right to recommend solutions from forum threads, in essence extending the reach of the company moderators, and permitting the most valued community participants to help determine which content can be harvested into more structured knowledgebase content. 

This content can then be exposed to the company’s call center agents and published on the company’s support websites. The "expert"-recommended solution would trigger a workflow to ensure the appropriate parties validate the solution information and rework it into the appropriate formats. It is not uncommon for companies to then withhold broader publication to the Web until the new knowledge content has achieved a reuse count in the call center or high access count in the forums.

Community conversations are not always external-facing. Many companies use discussion forums and other collaboration tools within the enterprise to foster communication and knowledge sharing across groups that might otherwise be disconnected. 

For example, problem escalation processes may be managed entirely through collaborative forums. Agents may pose an unsolved problem on an internal forum that reaches across support tiers and geographies. Relevant experts, who may include individuals outside of the support organization, are automatically alerted (via topic subscriptions) and directed to the conversation, where they collaborate to resolve the issue. Managing escalations through forums potentially involves more individuals than a phone escalation, and the discussion thread provides content that can be harvested into a solution article. As above, participation can be encouraged with incentives tied to a dynamic reputation model that awards points based on issue complexity, timeliness of response, reuse counts and any number of other variables.

Best Practice #3: Integrate discussion forums into a seamless support experience.

Collaboration goes beyond the facilitation of conversations. To be truly transformational for the company, the knowledge emanating from those conversations must be captured and presented external to the forums themselves.

Customers who visit a company’s website to resolve a problem will typically take one of three actions: submit a service request, often through email; search the knowledgebase for information to resolve the problem on their own; or search the discussion forums to validate that they are not alone in having the problem and to find solutions. They may do all three. Companies should endeavor to provide a seamless support experience regardless of which path a customer chooses. One way to do this is to incorporate relevant discussion forum content throughout the experience.

Customers that search the knowledgebase for answers should be presented with relevant discussion topics, specifically those threads that have been marked as solutions by the original poster or moderator.

Likewise, customers who go directly to the discussion forums should see relevant knowledgebase articles: forum searches should return both discussion threads and solution articles from the knowledgebase as part of the results.

Posting a new question to the forum will automatically trigger a semantic search on existing knowledgebase content which may then deflect the topic from even being posted. This could be a particularly effective mechanism for addressing the duplication problem common in discussion forums. And if the knowledge content that deflects the intended post originated from a discussion thread in the first place, you are presenting harvested knowledge in formats that are both accessible to, and consumable by, the people you are trying to serve.
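
The deflection step described above might be sketched as follows. Real systems would use semantic search; simple token overlap stands in here purely for illustration:

```python
# Toy post-deflection check: before publishing a new forum question, score it
# against existing knowledgebase titles and surface close matches. Token
# overlap (Jaccard similarity) is a stand-in for real semantic search.
def tokenize(text):
    return set(text.lower().split())

def suggest_matches(question, kb_titles, threshold=0.3):
    """Return knowledgebase titles similar enough to deflect the new post."""
    q = tokenize(question)
    scored = []
    for title in kb_titles:
        t = tokenize(title)
        union = q | t
        overlap = len(q & t) / len(union) if union else 0.0
        if overlap >= threshold:
            scored.append((overlap, title))
    return [title for _, title in sorted(scored, reverse=True)]
```

If any matches are returned, the forum can show them to the user before the question is ever posted.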

Even if the customer goes directly to the portion of the site where he can either submit a service request through an online form or initiate a chat session with an agent, the initial problem description can trigger a search across all knowledgebase and forum content, returning potential solutions before the email is sent or an agent joins the chat session, potentially eliminating a costly interaction with an agent.

Taken one step further, discussion forum threads represent an ideal opportunity to present targeted online marketing or other relevant information, such as available product upgrades. Every customer support agent or customer service representative would cross-sell or upsell a customer engaged on the telephone; online, the same rules can be applied to questions posed to the knowledgebase and posted to the discussion forums.

This level of seamlessness is consistent with best practices for any kind of collaborative knowledge management. The ultimate goal is to deliver a consistent customer experience that spans all interaction channels, from phone support to Web self-service to discussion forums and beyond.

Best Practice #4: Allow customers to self-direct how they participate in the community.

The rapid evolution of social media has created new expectations for personalization and flexibility in the way people interact with online content. Users expect "anywhere access" (including mobile access from devices of all kinds) to contextually relevant information through methods of their choosing such as email subscriptions, RSS feeds, shared bookmarks, saved history and more.

Applying granular levels of personalization in collaborative knowledge environments encourages customer participation simply by making desired information more accessible. In customer service scenarios where users are more directed and specific with their objectives, every second saved boosts customer satisfaction with the support experience.

Suggest topics to your users based on the products they use and the interests they have identified in the past. Save a "My Topics" list for user-initiated discussions and highlight which threads have been updated since the user’s last visit, eliminating the need to manually check the site for new posts. Allow users to define email alerts on content subscriptions so subscribers are notified when new responses are posted. Extend subscriptions across the discussion forums and the knowledgebase, and allow the flexibility to subscribe by topic or content category, by author and by discussion.

Provide custom RSS feeds for each subscription, and for searches containing specific phrases or keywords. Track user participation in the forums and maintain an access history so customers can quickly revisit forums and topics that interested them in the past, and highlight which information has been read, not read, or posted new since the last visit.

Focus not on how to push content to your customer community, but more on how to enable that community to pull the information they need in the way that makes the most sense to each individual participant.

Best Practice #5: Moderate by exception.

Even as social media has become more widespread and integrated into popular and corporate culture, brand stewards’ fear of potential damage persists. Influence and control remains a concern for most large organizations. But moderating and reviewing every post before it’s published on a discussion forum is not only resource-intensive; it robs users of the very value of the collaborative knowledge environment.

Finding the right balance will vary by company, but in general, to ensure a vibrant and collaborative community, organizations should moderate by exception: allow users to post and publish freely, with moderators receiving notices from users reporting abuse or from filters that identify inappropriate or undesirable behavior, such as mentions of a competitor or the use of objectionable or inflammatory language.

Reputation models, in particular, can help companies achieve that balance between freedom and control, by assigning more rights and functionality to users that have earned the trust of both the community and the company. Advanced search technology can add more power to the filtering mechanism by allowing companies to search on specific concepts so even if there is not a direct keyword or phrase match, semantic analysis will identify discussions that may be objectionable.

Moderating a community by exception, coupled with the ability to ban users, unpublish or edit posts or replies, or close forum topics, creates a positive environment that supports both customers’ need for fast, easy knowledge sharing, and companies’ need for an online community that reflects appropriate values and behavior.

Taken together and implemented correctly, these best practices will help you develop an online community that extends a company’s knowledge culture beyond its own walls, and into the domain of customers, improving customer service and reinforcing brand affinity.

Monday, January 30, 2023

SharePoint Implementations

There are a few main considerations for governance and metrics in SharePoint implementations:

  • metrics to gauge maturity, success, adoption, compliance and progress in your program;
  • mechanisms for managing content across the full lifecycle including compliance with standards for tagging;
  • governance processes and policies to control site and content ownership.

Metrics

Metrics will give you measures of success, adoption, compliance and progress. What is measured can be managed. When no objective ways have been put in place to measure how well a program is functioning, it is not possible to correct or improve it. It is essential to have a way of monitoring how things are going so changes can be made to serve the needs of the program.

Maturity

The first metric to consider is overall maturity and capability. Maturity in the SharePoint space can be considered across multiple dimensions, from the level of intentionality and structure of a process to the formal presence and level of sophistication of governing bodies. 

Consider a maturity model in which each dimension is mapped with a set of capabilities and characteristics that indicate a general level of maturity. Based on the overall characteristics of those processes (reflected in the rating for each dimension), the maturity of the organization’s SharePoint implementation can be measured at the start of a program and throughout its life. As processes are installed, the maturity is increased. That snapshot in time is a good indicator of the state of the program and can be used as a general measure of success.

Because SharePoint success is indicated by the ability to locate information (“findability”), and findability is the result of a combination of factors, it is possible to describe those factors in terms of existing practices and processes, as well as to benchmark the level of functionality or activity (for example, content quality measures, the presence of a process or the measure of that process's effectiveness). One governance maturity measure is whether there are any governing bodies or policies in place. Another might be participation levels in governance meetings.
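
One way to make such a maturity snapshot concrete is to rate each dimension on a simple scale and summarize. The dimension names and the 1-5 scale below are assumptions for illustration:

```python
# Illustrative maturity snapshot: each governance dimension gets a 1-5
# rating; the summary gives an overall score plus the weakest dimensions.
def maturity_snapshot(ratings):
    """ratings: dict mapping dimension name -> score (1-5)."""
    average = sum(ratings.values()) / len(ratings)
    floor = min(ratings.values())
    weakest = [dim for dim, score in ratings.items() if score == floor]
    return round(average, 2), weakest
```

Re-running the snapshot at intervals shows whether the program is progressing and where to focus next.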

Use cases and usability

A second important measure of value includes overall usability based on use cases for specific classes of users. Use cases should be part of every content and information program, and there should be a library to access for testing each use case. Use cases are tasks that are part of day-to-day work processes and support specific business outcomes. At the start of the program, assessing the ability of users to complete their job tasks, which requires the ability to locate content, provides a practical baseline score to compare with later interventions.

User satisfaction is a subjective measure of acceptance. Although subjective, if it is measured in the same way before an intervention or redesign and again afterward, the results will show a comparative improvement or decline in perceived usability. Perception can be affected by more than design: training and socialization can have a large impact on user satisfaction.

Adoption

One simple metric for adoption is the volume of e-mail containing attachments compared with the volume containing links. As users post their files on SharePoint and send links within messages rather than e-mailing attachments, they are clearly demonstrating use of the system. Taking that metric as a baseline and then tracking it periodically, on a department-by-department basis as well as company-wide, provides valuable information about SharePoint adoption.
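
The attachment-versus-link metric can be computed from message logs in a few lines. The record shape and field names here are assumptions for illustration:

```python
# Sketch of the adoption metric above: per department, what fraction of
# file shares were sent as SharePoint links rather than attachments?
# The message-record fields ("department", "kind") are assumed.
from collections import defaultdict

def link_share_by_department(messages):
    links = defaultdict(int)
    attachments = defaultdict(int)
    for msg in messages:
        if msg["kind"] == "link":
            links[msg["department"]] += 1
        elif msg["kind"] == "attachment":
            attachments[msg["department"]] += 1
    return {
        dept: links[dept] / (links[dept] + attachments[dept])
        for dept in set(links) | set(attachments)
    }
```

A rising link share, department by department, is the adoption signal the text describes.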

Other adoption metrics include the number of collaboration spaces or sites that are set up and actively managed, the numbers of documents uploaded or downloaded, the degree of completeness of metadata, the accuracy of tagging, and the number of documents being reviewed based on defined lifecycles.

It is important to have self-paced tutorials regarding your particular environment and to monitor the number of people who have completed this kind of training. Participation in “lunch-and-learns,” webinars or conference calls on the use of the environment are other engagement metrics that can be tracked.

Socialization includes a narrative of success through sharing stories about the value of knowledge contained in knowledgebases, problems being solved and collaboration that leads to new sales or cost savings. Publicizing new functionality along with examples showing how that functionality can be used in day-to-day work processes will help people see the positive aspects of the program and help to overcome inevitable challenges with any new deployment. Those successes need to be communicated through different mechanisms and by emphasizing themes appropriate to the audience and process. An application for executives may not resonate with line-of-business users.

Alignment with business outcomes

A more challenging but also more powerful approach to metrics is to link the SharePoint functionality to a business process that can be impacted and that can be measured. One example is a proposal process that enables salespeople to sell more when they are able to turn proposals around more quickly, allowing more selling time or reduced cost of highly compensated subject matter experts. Employee self-service knowledgebases can be linked to help desk call volume. Those metrics are more challenging because they require the development of a model that predicts the impact of one action on another or at least an understanding that causality is involved, but they also can be a strong indication of success.

Tagging processes

The amount of content that is correctly tagged provides a useful measure of adoption and compliance. How do you know if content is tagged correctly? Taking a representative sample of content and checking whether tagging is aligned with the intent of the content publishing design will detect inconsistencies or errors in tagging. 

The percentage of content that is tagged at all is another indicator. One organization left in place a default value that did not apply to any content: the first term in the dropdown was "lark". If users left that value in, they were not paying attention, and the quality of tagging suffered. Measuring the percentage of content tagged with "lark" provided an inverse indicator: when the "lark" index declined, tagging quality increased. The quality of content can also be measured with crowd-sourced feedback; up-voting or down-voting content can trigger workflows for review or for boosting in ranking.
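
A "lark"-style inverse indicator is easy to compute. A minimal sketch, with assumed field names:

```python
# Share of tagged documents still carrying the unused default tag ("lark"
# in the example above). A falling index suggests more deliberate tagging.
def default_tag_index(docs, default="lark"):
    tagged = [doc for doc in docs if doc.get("tag")]
    if not tagged:
        return 0.0
    return sum(1 for doc in tagged if doc["tag"] == default) / len(tagged)
```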

Change triggers

Metrics tell the organization something: whether something is working or not working. But what action is triggered? A metrics program has to lead to action: a course correction to improve performance. The change cycle can be characterized by conducting interaction analysis to measure the pathway through content and how it is used (such as impressions or reading time). 

If users exit after opening a document, that exit could be because they found their answer or because the content was not relevant. It is only by looking at the next interaction (another search, for example, or a long period spent reading the document) that it can be determined whether the content was high value or failed to provide an answer. Based on this analysis, it is possible to identify a remediation step (create missing content, fix a usability issue, etc.).
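
The next-interaction heuristic can be sketched as a small classifier. The event shape and the 30-second threshold are illustrative assumptions:

```python
# Guess whether opening a document answered the user's question by looking
# at what happened next. Event dicts and the threshold are assumptions.
def classify_document_exit(open_event, next_event, min_read_seconds=30):
    """Return 'answered', 'not_relevant' or 'unknown' for a document open."""
    if next_event is None:
        return "answered"  # session ended; the user likely found an answer
    gap = next_event["time"] - open_event["time"]
    if next_event["type"] == "search" and gap < min_read_seconds:
        return "not_relevant"  # searched again almost immediately
    if gap >= min_read_seconds:
        return "answered"  # a long read before moving on
    return "unknown"
```

Aggregating these labels across sessions points to the remediation steps the text mentions.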

Search interactions also provide clues for action. When top searches return no content, the wrong content or too much content, the root cause can be addressed with an appropriate action (improve tagging, create content, tune the ranking algorithm or search experience with best bets, auto-complete, thesaurus entries, etc.).

By reviewing and troubleshooting content interaction metrics, patterns may emerge that point to problems with the publishing process or compliance with tagging guidelines.

Content processes and governance policies

SharePoint governance consists of decision-making bodies and mechanisms for developing and complying with rules and policies around SharePoint installations. This is the glue that holds SharePoint deployments together. Requests to create new team sites and collaboration spaces need to go through a process of review to ensure that redundant sites are not created. Abandoned sites need to be retired or archived. Content needs to be owned and reviewed for relevance. If content is not owned and abandoned sites are not actively removed, the installation becomes more and more cluttered.

Without clear guidelines for how and where to post content and ways to apply metadata tags, users will tend to post content haphazardly, and eventually libraries will be cluttered with junk. Over time, people will dump content in SharePoint because they are told they need to post it for sharing but no one will know how to find valuable content. Site administrators must understand the rules of deployment and control how users are utilizing SharePoint to prevent sprawl and keep the system from becoming cluttered with poorly organized content.

Among the chief goals of governance is to prevent SharePoint from becoming a dumping ground by segmenting collaboration spaces from content to be reused and enforcing standards for curation and tagging.

Consider that every element of SharePoint has a lifecycle and that this lifecycle has to be managed. Those elements range from design components that are created based on the needs of users and rigorous use cases (including taxonomies, metadata structures, content models, library design, site structures and navigational models), to the sites themselves that are created according to a policy and process and disposed of at the end of their life, to the content within sites that needs to be vetted, edited and approved for broad consumption. All of those are managed through policies, intentional decision-making and compliance mechanisms developed by a governance group.

SharePoint governance needs to be a part of the overall information governance program of the enterprise. It is part of content and data governance with particular nuances based on how the technology functions. In fact, many tools are designed into the core functionality of SharePoint to help with governance operationalization. The overarching principle is to consider the audience and the breadth of audience the content is designed to reach.

One analogy is that of an office structure. The lobby, which has a wide audience, limits what can be displayed. The lobby environment is visible to all, so it needs to be managed rigorously. But walking into a cubicle in the office building will reveal the personality of its inhabitant: personal photos, papers on the desk, individual and idiosyncratic organizing principles. A messy desk perhaps. A shared work area might be someplace between the orderliness of the lobby and the messiness of the individual workspace.

Those gradations are the local, personal and departmental level spans of control analogously managed in SharePoint. Information that has an enterprise span needs to be carefully managed and controlled. In a collaboration space, things can be a little more chaotic. In fact, the one thing to keep in mind is that content has a different value depending on the context and span and will increase in value as it is edited, vetted, tagged and organized for specific audiences and processes.

Segment the high-value content by promoting it from a collaboration space to a shared location and apply the tags that will tell the organization that it is important. Separate the wheat from the chaff. Manage high-value content and differentiate it from interim deliverables and draft work in process. Throw away the junk or take it out of the search results so they are not cluttered with low-value information.

Many people complain that they can’t find their content in SharePoint and they want search to work like Google. The answer is to put the same work into managing and processing content as search engine optimization departments do for web content, and the search engine will return the results that you are looking for.

SharePoint requires an intentional approach to design, deployment, socialization, maintenance and ongoing decision-making. The rules are simple: there is no magic. They need to be applied consistently and intentionally to get the most from the technology.

SharePoint Beyond the Firewall: Put Your Content to Work

SharePoint is undoubtedly one of the most important and widespread enterprise productivity tools, used by an estimated 67% of medium-to-large organizations, according to research firm AIIM. Many companies are heavily invested in SharePoint, and for good reason: it’s a highly adaptable solution that can be effective for content management and file sharing across a range of use cases. But SharePoint does have its limitations.

Where SharePoint struggles is when content needs to be securely shared outside the firewall, and consumed by remote workers, partners, or suppliers. Extending SharePoint for external needs introduces IT challenges, including content protection and security, user governance and support, and initial and on-going infrastructure and license costs.

This creates a challenge for organizations with sizable SharePoint investments and large populations of users. Rather than replacing SharePoint, it’s more practical to build on existing investments to provide secure, external collaboration and document sharing, without adding unnecessary complexity and cost to IT infrastructure, or putting sensitive or regulated content at risk.

According to AIIM, security and control are the top concerns of SharePoint administrators since it is routinely used to manage highly sensitive and regulated content: 51% of users share financial documents, 48% legal and contractual documents, and 36% board of directors and executive communications.

Leverage Your SharePoint Investment for External Document Sharing

As companies start sharing sensitive documents with collaboration partners, they need to maintain tight access control. SharePoint control over document access is not as well defined as many large enterprises might want. Attributes to consider for secure, seamless content sharing that complements your SharePoint investment include:
  • Secure, policy-based document-sharing control.
  • Agile response, and easy set-up and adoption.
  • Low up-front investment.
  • Ability to leverage existing systems without adding new complexity.
  • Provisioning and support for a community of external users.

Cloud-based solutions are now meeting all of these demands, unburdening the in-house IT infrastructure while still allowing internal users to continue using the familiar SharePoint-based platform and applications, with little or no change or added overhead.

Maintaining Control Over the Content Lifecycle

Externalizing SharePoint is one thing. Having control over the content once it’s left the firewall is another. For comprehensive control, you’ll want to consider tools with the following:
  • Access rights for external partners. Given the large number of potential collaboration partners and the number of documents to share, you’ll need granular and dynamic document administration.
  • Encryption. As soon as SharePoint documents pass beyond a firewall, they need to be encrypted and remain encrypted both as they move over the internet and while they are at rest within the external document sharing application. Seamless encryption means hackers can’t access the data within a document at any stage.
  • Virus protection. Avoid picking up file-based viruses that could penetrate your network while content is in motion, and shared and accessed from various geolocations and devices.
  • Information Rights Management (IRM). IRM services let IT departments provide secure document access to any device—PC, smartphone, tablet—while dynamically managing content rights even after a document has been distributed. Such systems have the ability to let users view without downloading documents, and prevent printing or screen capture. Ideally, IRM should be plug-in free so that it is frictionless to users. Finally, digital watermarking identifies a document as confidential and also embeds in the document the name of the person doing the download. This helps ensure that the user will be extra careful not to lose or leak the document.
  • Monitoring and auditing. Know which people are looking at what documents, for how long, and create audit reports from this information. This verifies compliance with data privacy and other relevant regulations, such as the Sarbanes-Oxley Act.
These security, compliance, and information governance capabilities should be accessible without requiring additional SharePoint software customization, or introducing a new user interface.
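
As a sketch of the monitoring and auditing idea above (the event format and field names are assumptions for illustration, not a real product's log schema), per-document viewing totals might be aggregated like this:

```python
from collections import defaultdict

def audit_report(view_events):
    """Summarize who viewed which documents and for how long.

    view_events: iterable of (user, document, seconds) records, as an
    external-sharing tool might log them. Returns per-document,
    per-user totals usable in a compliance audit.
    """
    report = defaultdict(lambda: defaultdict(int))
    for user, doc, seconds in view_events:
        report[doc][user] += seconds
    return {doc: dict(users) for doc, users in report.items()}

events = [
    ("alice@partner.com", "q3-financials.pdf", 120),
    ("bob@supplier.com", "q3-financials.pdf", 45),
    ("alice@partner.com", "q3-financials.pdf", 60),
]
print(audit_report(events))
```

A report of this shape (who, what, how long) is the raw material for the compliance verification described above.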

Galaxy Consulting has over 15 years experience in SharePoint implementations and management. Please contact us for a free consultation.

Tuesday, December 27, 2022

Seven Realities of Online Self-Service

It is very important to revitalize the self-service experience offered on customer-facing websites in order to keep pace with evolving consumer expectations. There are seven key realities of modern online service that expose the gap between customer expectations and website self-service performance, and there are steps you can take to close that gap starting now.

1. Customers have grown tired of old online help tools. Customer satisfaction with today's most common web self-service features is abysmal and getting worse.

As more companies rectify this by deploying next-generation self-service solutions and virtual agents, fewer customers will tolerate antiquated self-service help tools online.

2. Customers now expect a superior experience online, not just a good one. Exceptionally positive online experiences are now setting the bar for what customers expect when they visit virtually any web site in search of answers and information.

3. Consumers are impatient and protective of their time. Consumers cite "valuing my time" as the most important thing a company can do to deliver a good online customer experience. Yet many web sites are complex, hard to navigate and filled with content that provides multiple possible answers rather than a single, swift path to resolution.

4. Customer service has gone mobile. Mobile phones are now ubiquitous. Convenience and ease-of-use are the hallmarks of these mobile form factors, and web sites that offer experiences contrary to these attributes will only raise the ire of today's increasingly impatient and unforgiving mobile consumer.

5. Social media is increasingly embraced as a customer service tool. Delivering a consistent service experience across multiple channels is critical, as consumers are not shy about using social media sites to publicly complain and vent frustration about any interactions with companies that fail to satisfy them.

6. It's not just your younger customers who prefer to get their answers online. In fact, consumers of all ages are equally likely to prefer online channels for customer support.

7. Dissatisfaction online = hijacked revenues. One of the most appealing benefits of delivering a positive experience in the web channel is the opportunity for organizations to provide information that supports and encourages purchase decisions. Online, the segue from a customer service conversation to a purchase consideration conversation can be a very natural and systematic progression. This progression is thwarted, however, the moment a self-service experience fails to satisfy.

The impact of the self-service experience on revenues should not be underestimated. Customers are very likely to abandon their online purchase if they cannot find a quick answer to their questions.

These seven trends underline the urgent need to revitalize the online service experience offered by most companies. Online self-service is in need of resuscitation and useful web self-service and virtual agent technologies that can deliver an enhanced customer experience are currently underutilized.

Where To Go from Here?

What should your organization do as the first step toward improving the online customer experience? Begin with an honest and objective assessment of the self-service experience your website offers today. Looking at your customer-facing website, ask yourself these three questions.

1. Is there a single, highly visible starting point for self-service activity? Today's consumers are task-oriented when they go online. Your customers want their self-service journey to begin immediately and move swiftly to completion. Looking at your home page or most highly trafficked customer service page, ask yourself if the average customer would be able to identify the clear starting point for any customer service-related task in a matter of seconds. Any required navigation or clicking through to new pages is viewed as a time-waster and is out of alignment with their expectations.

2. Is issue resolution generally a multi-step, or a single-step activity? When looking for information online, customers want a single accurate answer that's accessible in one step. Any content page that offers more than one alternative answer, or path to an answer, requires your customer to take additional steps for sorting, scanning content and/or comparing answers. On your web site, when results are served, is the customer presented with a single answer, or multiple results to sift through?

3. How will you measure how your site is performing in this area? A quantitative assessment of your self-service performance is the first thing you will need to establish for any improvement to the self-service experience.
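
One simple quantitative measure is the share of self-service sessions that end without escalation to a ticket or an agent. A minimal sketch, with invented outcome labels:

```python
def self_service_success_rate(sessions):
    """Estimate self-service success from session outcomes.

    sessions: iterable of outcome strings; "resolved" means the
    visitor found an answer, "escalated" means they went on to open
    a ticket or contact an agent. The labels are illustrative.
    """
    outcomes = list(sessions)
    if not outcomes:
        return 0.0
    resolved = sum(1 for s in outcomes if s == "resolved")
    return resolved / len(outcomes)

print(self_service_success_rate(["resolved", "escalated", "resolved", "resolved"]))  # 0.75
```

Tracked over time, a rate like this turns "improve the self-service experience" into a measurable goal.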

Optimizing self-service experience in organizations' web sites is extremely important and will help to increase revenues. Contact us today for a free consultation.

Tuesday, November 29, 2022

E-Discovery and Information Governance

More and more companies are operating throughout the world, so the impact of differing requirements for e-discovery is increasing, especially those relating to privacy. The rules tend to be much more rigorous outside the United States, particularly in the European Union.

Europe has adopted the General Data Protection Regulation (GDPR), which was promulgated in April 2016 with a two-year implementation timeframe. It regulates the manner in which data can be collected and moved across international borders. The regulation makes an e-discovery company or law firm responsible for any compliance failure. If there is a breach, the data handling entity can be held liable for up to 4% of its gross revenues worldwide, whether the breach was intentional or not.

A number of other trends are occurring in international litigation that are having an effect on e-discovery. Litigation is beginning to be seen as a business strategy in Asia as evidenced by the aggressive litigation some Korean electronics companies are taking with regard to protecting their IP. Those companies are seeing the potential benefits of using litigation as a method to protect or monetize their IP, which results in greater requirements for e-discovery.

Other factors are also driving the demand for e-discovery. The United States was the first country to carry out antitrust investigations that reached beyond its borders, and there is a domino effect with other countries now doing the same thing. These government investigations are often followed by class action lawsuits, creating additional challenges for the multinational companies.

The international nature of that litigation also creates more issues with respect to moving data across borders. Therefore, it is all the more important for companies to be aware of local laws and customs regarding privacy.

One question about data resulting from the proliferation of data is whether it will become a more frequent target of e-discovery. 

Potential issues abound including whether personally identifiable information (PII) is involved. Most information is stored in structured databases and it could be used in litigation to make a claim that an individual was doing something at a certain time. The information may or may not be encrypted; it could also involve health data from wearable devices, for example, that could be considered PII. Organizations may need to take a step back and think about who the custodian is, whether the data could be part of e-discovery and whether it is being appropriately protected.

Moving to the cloud

Every organization has information stored across a multitude of systems, computers, shared drives, repositories, and now a lot of this information is moving to the cloud. This is going to require a new approach and new technologies in order to address the challenges arising from the growing volume and format of information being generated.

Managing cloud-based content may be new to an organization, and as a result there might be uncertainty about the risks involved and the various approaches to mitigating them.

Most cloud repositories lack information governance. This means that an appropriate architecture and supporting processes have to be put in place to ensure that content is properly governed and managed. By joining a cloud-enabled information governance platform with those cloud content repositories, an organization can make its cloud-based repositories compliant with e-discovery requirements.

SaaS-based delivery models for e-discovery are becoming more prevalent. The move to Office 365 is another part of this equation. With more data in the cloud, it makes sense to have cloud-based e-discovery solutions. The established benefits of SaaS delivery such as scalability, faster release of new features and simpler interfaces apply to e-discovery as well.

SaaS delivery also offers simpler inclusive cost models and, in general, lower costs than on-premise and legacy hosted products.

Information governance should be deployable within a traditional IT infrastructure, a cloud-based environment, or a hybrid of the two. Information governance is rapidly moving toward an enterprise service model, enabling organizations to deploy shared services across a complex IT infrastructure, eliminating dependence on users, and providing uniform governance across all applications and systems.

In order to remain competitive and control costs, organizations must consider information governance as a service. Technologies with a flexible central policy engine capable of managing the challenges of complex, federated governance environments will be the ones that enable organizations to make the most strategic use of information. These technologies have an enforcement model that is not tied to a specific store or repository but leverages standards to enable automatic enforcement across all systems, repositories, applications, and platforms.

Sunday, October 30, 2022

Viewing Documents in the Cloud

The adoption of cloud technology has rapidly increased in many companies and it will continue to grow. The range of benefits offered by using cloud services and the maturity of cloud vendors is driving adoption at the global level.

More and more companies are using cloud technology and managed services to accelerate business initiatives, allowing them to be more agile and flexible, and reduce costs. Companies are using cloud based storage technology for corporate records and this is raising new challenges.

Implementing a solution that views documents stored in a cloud-based system, such as a content management system, engineering drawing repository or a technical publication library, can present some challenges. 

Each of these challenges requires consideration to promote a good experience for the end user. There are four common challenges that you could face when implementing a cloud-based document viewing system: working with multiple file formats; variations in document size; browser-compatibility with HTML5; and viewing documents on mobile devices.

1. Multiple file formats

First, the documents that you want to view may be in many different formats. They may be PDF, TIFF, Word, Excel, PowerPoint, CAD or many others. The device that is being used to display the content often may not have the correct software needed to display the document or image. 

This issue is further compounded by the variety of devices that the content will be viewed on. A common solution is to convert the files on the server to a generic format that can be viewed by many devices, but this presents other issues. For example, most browsers and devices today can display JPEG or PNG formats, but both of these are raster image formats. If a text-based document such as a Word file is converted to an image, the display quality deteriorates when a page is zoomed and you lose interactivity with the content.

2. Document size

The second challenge is the size of the document, either the number of pages or the physical size of the document. Downloading the entire document can take a long time depending on available bandwidth. 

This is especially an issue on mobile devices with slow or crowded data connections. A system that provides a quick initial view of the first pages of the document allows a user to begin reading the content while the rest of the document downloads. This increases worker productivity and can even reduce traffic if the user quickly determines that they do not wish to continue with the document.
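
The idea of showing the first pages while the rest of the document arrives can be sketched with a generator (the page contents and the batch size are placeholders, not a real viewer API):

```python
def stream_pages(pages, first_batch=2):
    """Yield the first pages immediately, then the rest.

    A viewer can render the first batch while the remainder is still
    downloading; page content here is just placeholder strings.
    """
    yield from pages[:first_batch]   # shown to the user right away
    yield from pages[first_batch:]   # fetched in the background

doc = [f"page {n}" for n in range(1, 6)]
viewer = stream_pages(doc)
print(next(viewer))  # 'page 1' is on screen before the full download completes
```

If the user abandons the document after the first batch, the remaining pages are never pulled, which is exactly the traffic saving described above.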

3. Browser compatibility

The third challenge is that there are various browsers used to access the Internet and they do not all work the same. The four major browsers are Chrome, Internet Explorer, Firefox and Safari. Each browser has differences in how they operate and how the code works under the covers. 

Document viewing technology is dependent on some level of support within the browser. For example some browsers support Flash and some do not. HTML5 is only supported on recently updated versions of some browsers, so older browsers can create challenges. 

Even where HTML5 is supported, different browsers have different levels of support. Sometimes the differences are subtle and only cosmetic, while others, like complex formatting, can cause significant display issues.

4. Mobile viewing

The fourth challenge relates to viewing documents on mobile devices. With today's on-demand business world, it is imperative to be able to support viewing documents on mobile devices. But not all the devices behave the same way, and different operating systems are used on the various devices. 

Without a consistent mobile viewing platform, separate viewing apps may need to be installed on each device and results will vary. Using a single technology that supports many document types is very important in a mobile environment.

Is HTML5 the Answer?

HTML5-based viewers can help resolve some of the challenges associated with browsers and mobile devices. However, there is a misconception that the adoption of HTML5 is the answer to all problems. It is not. 

The four major browsers have been implementing HTML5 over time and how much of the standard that is supported varies greatly with the version of the browser. Older versions of the browsers that are used in many governments, educational institutions and well-established businesses do not support HTML5.

More and more organizations are moving to solutions where documents are stored in cloud-based systems. These challenges are examples of what you might face when deploying to your customers. Understanding that these common challenges are a possibility and preparing for them before you encounter them is important. 

Providing a single platform with multiple viewing technologies, including HTML5, Flash and image-based presentation, can help ensure that all users can view documents, regardless of their specific device, browser or operating system. With that knowledge you can successfully promote a good experience for your users and overcome the major pitfalls faced by so many organizations today.

Thursday, September 29, 2022

Intelligent Search Goes Beyond the Web

Search is a crucial component of the modern workplace. The ability to find information quickly and efficiently contributes not only to business success but also to employee satisfaction.

It is frustrating to spend time looking for information when you could be completing a task.

Search has become ingrained as part of everyday life.

Pre-Internet Findability

Today, there’s no need to pull a volume of an encyclopedia off a shelf or even leave the room to find answers to questions. One can simply use a phone to search for them. Google and Wikipedia have redefined what it means to search. But have they made search any more intelligent? They certainly satisfy the itch to correct people on event dates, geography, and historical characters.

When it comes to the workplace, however, search encompasses a great deal more than fact checking, and intelligent search goes well beyond the web.

Search has gone mainstream. People use the word “search” when they want to locate a retail store or book a hotel. That simplistic notion of search does not carry over particularly well to finding information essential to doing your job.

Teasing Out the Meaning of the Search

Part of moving from a simple Google search to a more sophisticated model involves language. 

Standardizing content in one format (for example, high-definition PDFs) creates better visibility and fewer irrelevant search results. You may be able to avoid overly complex algorithmically based search engines by improving content processing, eliminating duplication, and using a single taxonomy.

Use better metadata and better data.

Almost anyone looking at search within the enterprise stresses findability. If you’re looking for the company’s holiday schedule, you don’t want the one from 3 years ago, you want the most recent one. 

Similarly, if you are building a web site for external use, you want potential customers to find what you are selling. You want to back up your sales efforts with excellent customer service. This is another opportunity for intelligent search, since customers increasingly prefer to help themselves without using an intermediary. They like self-service, but only if it answers their questions.

Semantics plays a role in customer service. Its analysis of the contextual meaning of words enhances the quality of answers. For example, customers might enter “How much will it cost me…” while your content phrases it as “What is the price…” To be findable, your customer’s search query must translate to your words. A synonym dictionary helps resolve this issue.
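
A minimal sketch of synonym-based query expansion (the dictionary entries are invented for the example; a production system would maintain a much larger, curated synonym list):

```python
# Tiny illustrative synonym dictionary: customer wording -> site wording.
SYNONYMS = {
    "cost": {"price", "fee"},
    "price": {"cost", "fee"},
}

def expand_query(query, synonyms=SYNONYMS):
    """Expand each query term with its synonyms so that a customer's
    wording ("cost") also matches the site's wording ("price")."""
    terms = set()
    for word in query.lower().split():
        terms.add(word)
        terms |= synonyms.get(word, set())
    return terms

print(sorted(expand_query("shipping cost")))  # ['cost', 'fee', 'price', 'shipping']
```

The expanded term set is what the search engine actually matches against, so the customer's vocabulary no longer has to coincide with yours.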

The definition of intelligent search goes beyond findability. A search engine should know what you need and what your colleagues found valuable, and supply it to you when you need it.

For the Coveo search engine, the power and sophistication of machine learning technology is the driving force behind intelligent search. Intelligence springs from usage and analytics data, along with a multitude of other factors, the components of which are hosted and managed by companies such as Coveo.

Regardless of how you define intelligent search, it’s clear that enterprise search requirements go well beyond what Google or Wikipedia can provide. Different approaches to intelligent search provide much to think about when implementing, redesigning, and rethinking enterprise search. Intelligent search goes well beyond what searching the web looks like.

Improving Search and Decision-Making with Semantics

We’ve all heard how Google’s proverbially simple search form has led professionals to expect similar simplicity from search solutions provided by corporate IT. Except this model doesn’t really work, and it’s costing millions of dollars every year in time wasted when professionals don’t find, and have to re-create, information.

The reason it doesn’t work is that while every organization has a specific worldview, search engines are essentially blind. Worldview is the inventory of business objects that an organization cares about (products, geographies, customers, processes, etc.) and their relationships, that are typically captured in a taxonomy or ontology. While professionals implicitly want to search for information according to their worldview, search engines don’t offer them a practical way to do so.

Semantics Provides Meaning

The missing piece in this puzzle is a “meaning engine” that would understand unstructured content through the lens of your organization’s worldview. It exists: it’s called a semantic enrichment platform.

A semantic enrichment platform ingests your organization’s taxonomy or ontology and applies it to your content at scale. Leveraging natural language processing, it understands your content the same way humans do. It recognizes topics that are relevant to your business, as well as entities of interest, their attributes and relationships, and converts them into structured data that can be used standalone or as metadata describing your content deeply and consistently. In energy, for example, entities of interest might include commodities, trading companies, and the countries where they do business.
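
A drastically simplified sketch of the idea (real platforms use NLP and full ontologies; plain substring matching and the sample taxonomy below stand in for them here):

```python
# A toy taxonomy: business entities grouped by type. All terms are
# invented for illustration.
TAXONOMY = {
    "commodity": ["natural gas", "electricity", "crude oil"],
    "region": ["north sea", "gulf coast"],
}

def enrich(text, taxonomy=TAXONOMY):
    """Return structured metadata: taxonomy terms found in the text."""
    lowered = text.lower()
    return {
        entity_type: [t for t in terms if t in lowered]
        for entity_type, terms in taxonomy.items()
    }

doc = "Natural gas shipments from the North Sea rose sharply."
print(enrich(doc))  # {'commodity': ['natural gas'], 'region': ['north sea']}
```

The output is exactly the kind of structured, consistent metadata that the rest of this article builds on: facets, topic pages, and recommendations.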

Better Metadata Accelerates Search

When used as metadata, this data acts as an eye-opener for search engines that can finally see your content through your own worldview. This redefines the search experience by offering end-users new tools to locate what they are looking for.

Faceted Navigation enables end-users to search by business entity or topic (for example by company name, commodity type or region), helping to find the most relevant content in just a few clicks.
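
Faceted navigation can be sketched as filtering on metadata fields (the document structure and field names below are assumptions for illustration):

```python
def facet_filter(documents, **facets):
    """Return documents whose metadata matches every requested facet.

    documents: list of dicts with a 'meta' mapping; the facet names
    (commodity, region) are illustrative.
    """
    return [
        d for d in documents
        if all(d["meta"].get(k) == v for k, v in facets.items())
    ]

docs = [
    {"title": "Gas outlook", "meta": {"commodity": "gas", "region": "EU"}},
    {"title": "Oil report", "meta": {"commodity": "oil", "region": "EU"}},
]
print([d["title"] for d in facet_filter(docs, commodity="gas")])  # ['Gas outlook']
```

Each click on a facet simply adds another key/value constraint, narrowing the result set without a new free-text search.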

Links to relevant information provide convenient access to structured information about entities of interest so users don’t have to collate it themselves. For example, each company name could be linked to data about its activities. Topic pages concentrate all information about a specific topic in one convenient access point so users don’t need to sift through all other materials to access it. A topic page on electricity would, for example, filter out information related to other energy sources.

Content recommendation uses metadata to surface other documents with similar topics, promoting discovery of relevant information. A document on a merger in the gas sector might point to reports of other, similar operations.

Such mechanisms significantly accelerate and simplify search tasks, offering not only time and cost savings, but also more informed decision-making.

Better Data Improves Decision-Making

But semantically extracted information can also be used for its informative, rather than descriptive, value: not as metadata, but as standalone data. This opens the door to applications that address the blindness described above at a deeper level, providing higher-level and faster insight into the subject matter at hand.

One of semantics’ capabilities is to recognize not only entities, but also their relationships (often expressed as triples). One such relationship might for example indicate that company A is a “supplier of” company B. Information value from these relationships may come into play under a variety of scenarios.

Knowledge Bases (or Graphs) integrate such structured information at scale so they can then be queried. One might contain, for a given commodity, links to all suppliers.

Complex Reasoning can be performed on these knowledge bases, enabling business applications to provide higher degrees of automation in decision-making tasks, for example, automatically balancing supply by identifying alternative suppliers when one announces production issues.
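
A toy example of both ideas, a triple store and a reasoning query over it (the companies and commodities are invented; a real knowledge graph would use a dedicated store and query language):

```python
# A miniature knowledge base of (subject, predicate, object) triples.
TRIPLES = [
    ("AcmeGas", "supplier_of", "natural gas"),
    ("NordFuel", "supplier_of", "natural gas"),
    ("VoltCo", "supplier_of", "electricity"),
]

def alternative_suppliers(commodity, excluding, triples=TRIPLES):
    """Find other suppliers of a commodity when one reports an outage."""
    return [
        s for s, p, o in triples
        if p == "supplier_of" and o == commodity and s != excluding
    ]

print(alternative_suppliers("natural gas", excluding="AcmeGas"))  # ['NordFuel']
```

Even this tiny query shows the pattern: once relationships are explicit data rather than buried in prose, decision-support questions become mechanical lookups.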

Analytics and Visualizations provide dashboards that sit on top of the data and reveal its meaning on a more holistic level. For example, a network graph could plot all company relationships in natural gas, indicating which companies might be exposed to increasing prices in a given region.

Lastly, semantics can also be used to deliver Question Answering Systems that offer users a way to get answers to questions formulated in natural language (“Which electricity providers have the most diversified supply chain?”) instead of engaging in search.

Semantics Provides Faster Insights and Better Decisions

As can be seen from the examples above, semantics is the “meaning engine” that ensures that users can overcome search’s blindness and access information through the specific worldview relevant to their work. But this engine brings meaning to more than your search engine: it is your information management as a whole that benefits, bringing the promise of smarter applications that efficiently handle more of the groundwork, accelerate time-to-insight and support better decisions.

Self-learning

Intelligent search is no longer a nice-to-have feature in organizational information systems; it is a critical part of how businesses are transforming the way they work. Intelligent search goes beyond findability and information access. Like a trusted advisor, intelligent search knows what documents you need for your tasks and which articles your colleagues found most valuable and would be useful to you too, and simply gives everyone the information they need, when they need it. And the power and sophistication of machine-learning technology is the driving force behind intelligent search.

What Is Machine Learning?

Machine learning learns from and makes predictions on data. Applied to search, every time users perform an action on your web site or support portal, they provide data about what is useful. Did they submit a support ticket? That means the articles they just read did not help. Do most people spend only one minute with a document that would normally take 10 minutes to read? That's a sign that the content isn't useful, or perhaps it's too difficult to understand. With machine learning, all of that information and more can be used to make data-driven predictions and decisions without manual intervention.
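The dwell-time and ticket signals described above can be sketched as a simple heuristic score. The thresholds and weights here are illustrative assumptions, not any vendor's actual model; a real system would learn them from data rather than hard-code them:

```python
def usefulness_signal(dwell_seconds, expected_read_seconds,
                      ticket_after=False):
    """Heuristic usefulness score in [0, 1] built from two behavioral
    signals: time spent on a document versus its expected read time,
    and whether the user went on to file a support ticket anyway."""
    read_ratio = min(dwell_seconds / expected_read_seconds, 1.0)
    score = read_ratio
    if ticket_after:
        # A ticket after reading suggests the article did not help.
        score *= 0.3
    return round(score, 2)

# One minute spent on a ten-minute article: weak signal of usefulness.
print(usefulness_signal(60, 600))                      # 0.1
# Fully read, but a ticket followed anyway: discounted.
print(usefulness_signal(600, 600, ticket_after=True))  # 0.3
```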

How Will Machine Learning Make Search Intelligent?

When someone submits a search query or clicks on the third search result, they are implicitly telling you what is most relevant. As your online community members download content, visit various web pages, watch videos, start an online chat with your support agents or submit support tickets, their behavior provides information on the relevance of the content they come across. This behavioral data, along with search behavior that signals intent, is captured by search usage analytics.

Intelligent self-learning search engines powered by machine learning can leverage such usage analytics data to continuously self-learn. This improves search relevance, and hence the self-service experience on your community, in many ways. For example, automatic fine-tuning and ranking of search results based on machine-generated predictions about what is most useful improves the experience of all community members.

Without machine learning and analytics data, administrators need to fine-tune search rankings manually: create boosting rules, add synonyms, promote documents, etc. Because relevance is an ever-evolving process and the document that was the most relevant last week may no longer be relevant today, it is almost impossible for administrators, especially those at large organizations or those with multiple product lines, to keep pace with the rate of change.
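The self-tuning idea can be illustrated with a toy ranker that blends a base relevance score with the click-through rate observed in usage analytics. This is a deliberately simplified sketch (real engines use far more signals and proper learning-to-rank models), but it shows how behavioral data can replace manual boosting rules:

```python
from collections import defaultdict

class SelfTuningRanker:
    """Re-rank results by blending a base relevance score with an
    observed click-through rate learned from usage analytics."""

    def __init__(self, weight=0.5):
        self.weight = weight                  # how much the CTR counts
        self.impressions = defaultdict(int)
        self.clicks = defaultdict(int)

    def record(self, doc_id, clicked):
        """Log one search impression and whether it was clicked."""
        self.impressions[doc_id] += 1
        if clicked:
            self.clicks[doc_id] += 1

    def ctr(self, doc_id):
        shown = self.impressions[doc_id]
        return self.clicks[doc_id] / shown if shown else 0.0

    def rank(self, results):
        """results: list of (doc_id, base_score in [0, 1])."""
        return sorted(
            results,
            key=lambda r: (1 - self.weight) * r[1]
                          + self.weight * self.ctr(r[0]),
            reverse=True,
        )
```

As users click document "b" and ignore document "a", "b" rises above "a" automatically; no administrator has to write a boosting rule.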

With machine learning, highly manual and complex enterprise search can be transformed into intelligent, self-learning and self-tuning search.

Why Now?

Machine learning has been around for a long time. It used to be very complex to deploy and manage. Collecting usage data, managing databases, provisioning servers, developing and maintaining machine learning algorithms and using machine-learning predictions in the search system were typically very complex. This would require data scientists, database experts and developers. Only the biggest organizations could afford that. But the fast adoption of cloud solutions has made the use of machine learning much easier, cheaper and more attainable. In particular, the recent trend towards cloud-based enterprise search is a game changer.

What Is the Impact of Cloud-Based, Self-Learning Search?

With cloud-based, self-learning search, all the required components are hosted and managed by the vendor, such as Coveo. Because of its scalability, it has the potential to change the customer service industry the same way machine learning has impacted e-commerce and social networks. 

In the past, the high cost of using and managing machine-learning systems meant that machine learning was rarely used for traditional enterprise search or self-service support sites. The cloud makes that affordable to all customers and to all departments, especially when deploying self-learning search on self-service support sites and on communities, because of its ability to scale and handle large volumes of data.

Tuesday, August 30, 2022

Importance of Information Governance

The fact is that most people will either embrace or decline information governance depending on their individual situation at a certain point in time. Information governance is closely allied with privacy and security. Knowledge is an internal currency that needs to be managed wisely, which is where a governance procedure is helpful.

It is entirely possible that someone might curse a rule as arbitrary while simultaneously recognizing the necessity of it from a security standpoint. Someone else could easily applaud relevant search results without actually realizing the role information governance played in facilitating that relevance. And there’s always “that guy” who complains regardless of whether the complaint is justified.

Information governance is an important and necessary component of modern organizations' information infrastructure. It is our job, as information specialists and knowledge managers, to combat any negativity about information governance within our organizations and to manage expectations. Information governance is an integral part of both information technology and knowledge management. Together, they bring information governance onto center stage.

With almost everyone in an organization contributing content, the role of information governance is ever more critical. Information governance is hardly an impediment to productivity; it’s actually a productivity enhancer. Risk management in the form of information governance, data security processes, and legal compliance stands center stage for organizations of all sizes and types.

Information governance is not just a good idea, created by computer geeks or imposed by legal departments. It is tied to international privacy legislation that affects all organizations, whether they are involved in international trade or not.

Companies should be looking at information governance not in reaction to legislation but as an opportunity to reflect on what is good information life cycle management. 

Take archiving, for example. If data is archived in five different places, your potential exposure is multiplied by five. It’s also harder to determine which version is the most current and the most authoritative. Whether protecting your data comes first or having a streamlined archival system comes first is a chicken-and-egg question. The fact is it doesn’t matter—they can happen simultaneously and be of equal benefit to your organization.

It is a KM responsibility to accentuate the positive about information governance. It is good data management, not simply a bunch of random rules. Since it makes good business sense and should be presented as such, we need to foster a culture of compliance and to have both top down and bottom up support. We should make it easy for people to do the right thing, remove obstacles, build a stakeholder community, and incentivize them to comply. Removing obstacles, however, should not mean removing all obstacles. Policies should still restrict access to those qualified to view the data.

Retention policies should recognize that information has a beginning, middle, and end. It is created or collected, used internally, shared inside the company and externally, and then it should have a defined disposition. Disposition might mean it is archived, but it might also mean it is destroyed.

Organizations should comply with legal requirements and not dispose of information too quickly. On the other hand, hoarding information does not help with risk avoidance, either. Even if you think that information might have long-term value, perhaps for identifying trends, you still don't want it sitting in your content management system. Archiving it and getting it out of a production environment could be the answer, but only if you are not saving it simply for the sake of saving it.

Life cycle management of information starts with thinking about how information is created or collected. Did it come from internal sources? Was it gleaned from an external repository? Was it provided by customers? This will differ from company to company and even from one industry sector to another. Next is access policies: who is authorized to access and use the data. 

The point is to strike a balance: restrict access enough to preserve privacy and security, but not so punitively that compliance is inhibited. Sharing information is an important component of modern information management and the cornerstone of KM, but excessive sharing creates more problems than it solves, and sharing across national borders raises potential legal issues. Retention policies and disposition practices are integral to good information governance, as is the understanding of what can and should be shared.

Data without information governance practices in place can create operational, privacy, and security gaps that put company assets at risk. Once you know what your data is, where it is, who can access it, and who has accessed it, you can then make decisions about where it should reside. Data in a highly secure system may need fewer controls than data located in a cloud environment or a broadly available corporate intranet or website.

Depending on your information governance rules, data can be a valuable asset like gold or it can become toxic like asbestos. A true best practice approach requires a sustainable ecosystem where you derive value from the data you hold while protecting company assets.

In organizations around the world, almost every employee is now a content contributor. Social, mobile, and cloud technologies have made it easier than ever to share information both in and out of the organization. This influx of new content, however, brings about new risks. Legal systems and government regulators worldwide are clamping down and demanding greater compliance, particularly on IT systems, requiring that organizations quickly implement risk management protocols. Data is growing too fast to keep up, which creates both great opportunity and risk for all organizations.

Organizations must be vigilant in creating enforceable policies, training programs, and automated controls to prevent and monitor appropriate access, use, and protection of sensitive data, whether they are regulated or not. Doing so will not only mitigate the risk of regulatory and statutory penalties and consequences, but will also help prevent an unnecessary erosion of employee or consumer confidence in the organization as the result of a breach or the loss of sensitive data.

Understanding Data Lifecycle Management

You can’t secure data you don’t know you have. Thus, a process of identification, value extraction, classification, and archiving needs to occur.

Whether data is generated by your organization or collected from a third party (such as a customer, vendor, or partner), the only way you can effectively protect it is by understanding it. For instance, does it contain customer information, employee information, intellectual property, sensitive communications, personally identifiable information, health information, or financial data?

Implementing a Best Practice Approach

1. Contemplate how data is created or collected by your company. You should think about excessive collection as well as how you will provide notice to individuals about that collection and appropriate levels of choice. You should also understand whether you need to keep appropriate records of that collection and creation.

2. Think about how you are going to use and maintain this data. Here you should consider inappropriate access, ensure that the data subjects’ choices are properly honored, address concerns around a potential new use or even misuse, consider how to address concerns around breach, and also ensure that you are properly retaining the data for records management purposes.

3. Consider who is going to share this data, and with whom they are going to share it. You should consider data sovereignty requirements and cross-border restrictions along with inappropriate, unauthorized, or excessive sharing.

4. All data must have an appropriate disposition. You should only keep data for as long as you are required to do so for records management, statutory, regulatory, or compliance requirements. You should ensure you are not inadvertently disposing of data while understanding that as long as you store sensitive information you run the risk of breach.

5. Understand the difference between what can and should be shared. A good program must continually assess and review who needs access to what types of information. Privacy and security teams should work with their IT counterparts to automate controls around enterprise systems, making it easier for employees to do the right thing than to do the wrong thing or ignore the consequences of their actions. Once you have implemented your plan, be sure that you maintain regular and ongoing assessments.
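The disposition step above can be automated against a retention schedule. The record types and retention periods below are hypothetical examples; actual schedules come from your records management, statutory, and regulatory requirements:

```python
from datetime import date, timedelta

# Hypothetical retention schedule, in days.
RETENTION = {
    "invoice":        7 * 365,
    "support_ticket": 3 * 365,
    "picnic_photo":   365,
}

def disposition(record_type, created, today=None):
    """Return 'retain' or 'dispose' for a record under the schedule."""
    today = today or date.today()
    period = timedelta(days=RETENTION[record_type])
    return "dispose" if today - created > period else "retain"

print(disposition("picnic_photo", date(2020, 6, 1), date(2022, 8, 30)))
# dispose
print(disposition("invoice", date(2020, 6, 1), date(2022, 8, 30)))
# retain
```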

Discovery and Classification

Many companies worry about “dark data” or data that exists across their enterprise systems (file shares, SharePoint, social systems, and other enterprise collaboration systems and networks) and is not properly understood. Understanding what and where this data is and properly classifying it will allow organizations to set the appropriate levels of protection in place. 

For example, many companies apply their security controls in broad terms, using the same security procedures for everything. But logically, you do not need to put the same security protocols around protecting pictures from your company picnic as you do around protecting your customers' critical infrastructure design or build information, credit card data, or your employees' benefits information.

Data discovery will allow you to determine the origin and relevance of the data you hold, and determine its retention schedule. You will be better equipped to effectively implement Data Loss Prevention in a tactical way. Data-aware security policies provide an opportunity for organizations to build a more layered approach to security, prioritizing where efforts (and costs) should be spent, and building multiple lines of defense.
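At its simplest, classification for data-aware policies can be sketched as pattern detectors that label documents and map those labels to a protection tier. The patterns below are illustrative only; production DLP tools use validated detectors (checksums, context, machine learning) rather than bare regular expressions:

```python
import re

# Hypothetical detectors for two kinds of sensitive data.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels found in a document."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

def security_tier(labels):
    """Map classification labels to a protection level."""
    return "restricted" if labels else "general"

doc = "Customer card on file: 4111 1111 1111 1111"
labels = classify(doc)
print(labels, security_tier(labels))
```

Company picnic photos classify to no labels and land in the "general" tier; a document containing card numbers is flagged "restricted" and can be given stronger controls.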

This provides you with the ability to manage the life cycle of the data within your company, from creation or collection through retention, archiving and/or defensible destruction. You cannot block everything from leaving your company any more than you should encrypt every document you have. When security blocks productivity, employees find a way to go around it. The job of security is to help the business use data productively and securely.

Data-Centric Audit and Protection

Understanding and controlling data flows is a critical component to an effective roll out of information management strategies. Key components of an effective methodology should include:

  • Data inventories that help customers understand where their sensitive data resides.
  • Classification on structured and unstructured data to ensure sensitive data is clearly identified.
  • Governance policies that protect the use of sensitive information by applying data sovereignty requirements, permissions management, encryption, and other data protection techniques.
  • Incident remediation and response for sensitive data breaches when they occur.

Report and Audit

Identifying potential risks within your information is just the first step. Take action to quickly and efficiently resolve issues with security-trimmed, pre-prioritized reports that provide guidance to your content owners and compliance teams to target the most critical violations. 

Privacy and security risk management intersect with other data lifecycle management programs within your company. Combining these related areas will allow you to better optimize resources while mitigating risk around digital assets to support responsible, ethical, and lawful collection, use, sharing, maintenance, and disposition of information.

Friday, April 29, 2022

Intranet in Knowledge Management Strategy

The modern workplace is increasingly distributed, with employees and expertise spread across multiple offices and regions. This makes it very difficult to know what information exists and where it is kept.

We can assume that a majority of a company's information is stored on hard drives, in content management systems, in file sharing applications, and in the minds and memories of employees. This creates a few problems:

  • People don’t have access to the information they need to do their jobs effectively.
  • The sheer amount of information becomes difficult to manage and measure.
  • Information becomes stale or inaccurate because it’s not open for collaboration.
  • Work is constantly duplicated, hampering productivity and crippling the pace of innovation.

On average, a typical employee wastes 2.3 hours per week searching for information. This can cost companies $7,000 per employee per year. Prioritizing a company-wide audit of all knowledge can help companies cut down on wasted time and allocate these resources elsewhere.
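For context, those two figures are roughly consistent with each other. Assuming about 48 working weeks per year and a loaded cost of $63 per hour (both assumptions for illustration, not figures from the study), the arithmetic works out to roughly $7,000 per employee:

```python
hours_per_week = 2.3
work_weeks_per_year = 48   # assumption: ~48 working weeks per year
hourly_cost = 63           # assumption: loaded hourly rate, USD

annual_hours = hours_per_week * work_weeks_per_year   # ~110 hours
annual_cost = annual_hours * hourly_cost              # ~$6,955
print(f"{annual_hours:.0f} hours, ${annual_cost:,.0f} per employee")
```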

Turn Information into Knowledge

Knowledge is power, but only when it is shared. Until then, it is just information without context or meaning. The transformation of information into knowledge occurs only when it is stored in a place where people can talk about it and build upon it. Here are three ways a modern intranet can help.

Knowledge Bases

A modern intranet supports the creation of many types of knowledge bases (KBs), including standard operating procedures, technical documentation, and best practices. This content, which would typically live in documents stored on drives, can now be published as wiki or blog articles that are easy to organize, search, and update. While a robust KB can lead to quicker decision-making and increased productivity, even the best KB is only effective if people know it is there and how to use it. The key is to make sure the structure is intuitive and that the information is searchable and permission-trimmed, so people see only what they need and are allowed to see.
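Permission-trimmed KB search can be sketched as a filter applied alongside the text match, so results never reveal content a user cannot open. The article data and group names below are hypothetical:

```python
def search_kb(query, articles, user_groups):
    """Return article titles matching the query, trimmed to the
    user's permissions so people see only what they may access."""
    q = query.lower()
    return [
        a["title"]
        for a in articles
        if (q in a["title"].lower() or q in a["body"].lower())
        and (a["allowed_groups"] & user_groups)
    ]

articles = [
    {"title": "Expense Policy", "body": "How to file expenses.",
     "allowed_groups": {"all-staff"}},
    {"title": "M&A Playbook", "body": "Confidential deal process.",
     "allowed_groups": {"finance", "exec"}},
]

print(search_kb("policy", articles, {"all-staff"}))  # ['Expense Policy']
print(search_kb("deal", articles, {"all-staff"}))    # []
```

A staff member searching for the confidential playbook gets no hit at all, which is the point: security trimming happens inside the search, not after it.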

Expertise Location

A people directory makes it easy for experts to share what they know with the rest of the organization. Think of it like a baseball card collection. Employees are players, their profiles are cards, and each card is tagged with stats (or an employee’s knowledge, skills, and abilities). Your collection should be searchable so it is easy to find who you are looking for, and it should allow employees to validate each other’s expertise by endorsing each other with badges or rewards. Having a full set makes it easy to trade information and expertise in your organization, and identify gaps or areas that you may need to recruit for.
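The baseball-card analogy maps naturally onto a small data structure: profiles tagged with skills, ranked by peer endorsements. The names, skills, and endorsement counts below are invented for illustration:

```python
# Hypothetical employee profiles: skills plus peer endorsements.
profiles = {
    "ana":  {"skills": {"sql", "etl"},    "endorsements": 12},
    "ben":  {"skills": {"sql", "python"}, "endorsements": 4},
    "cara": {"skills": {"design"},        "endorsements": 9},
}

def find_experts(skill):
    """Employees holding a skill, most-endorsed first."""
    hits = [(name, p["endorsements"]) for name, p in profiles.items()
            if skill in p["skills"]]
    return [name for name, _ in sorted(hits, key=lambda h: -h[1])]

print(find_experts("sql"))  # ['ana', 'ben']
```

Skills nobody holds return an empty list, which is exactly how such a directory exposes the expertise gaps you may need to recruit for.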

Forums

Online forums give structure to typical water cooler interactions or brainstorming meetings, helping to surface the information that exists in people’s heads. These types of conversations that would typically happen behind closed doors or on email trails can now be transformed into knowledge that everyone can access. Employees can ask questions, submit ideas, or make requests, out in the open, for everyone to see. Even if they don’t initiate a conversation, employees can still participate by liking, rating, or commenting on someone else’s post. Eventually, forums develop into a library of collective knowledge built upon the exchange of information between people and teams in your company.

Example: Onboarding

To demonstrate these concepts, let’s look at a challenge that faces many growing organizations: onboarding. With a modern intranet, you can create a “newbie zone” to house everything employees need during their first few days. The space should feel warm and welcoming, and not confusing or technical. Starting a new job is overwhelming enough. Give them only what they need so they can spend their time learning about the culture, meeting new people, and acquainting themselves with the company’s products and services.

  • Include a knowledge base of all company policies and guidelines that employees should be aware of, as well as any training they need to complete. Direct them to the information that is most relevant to their role and responsibilities and try to avoid overloading them with too much at once.
  • Include a forum that addresses any “newbie” questions or concerns. It is a safe space for employees to get comfortable with the company, but it also allows your HR team to gather insights about what information is important to new employees and adjust their knowledge bases accordingly.
  • Use the forum to introduce employees to experts, mentors, and other influencers that can teach them about the company, and its culture and processes. Invite these experts to answer new forum topics and ensure all existing topics are up to date.

Onboarding is the first opportunity to establish open knowledge sharing as a cultural norm. By using your modern intranet to demonstrate the value and benefit to your employees, it becomes a mentality that everyone adopts from day one.

The Power of Collective Wisdom

Knowledge should be treated as an internal currency with structures in place to ensure that it is managed wisely and that you are not losing any of it along the way. By continuously converting information into knowledge, you can realize a variety of benefits that will move your organization forward, including:

  • Active and constant validation of company information.
  • A common language that everyone understands.
  • A culture of sharing and collaboration where knowledge belongs to everyone.

A modern intranet brings content and conversations together in one place, promoting active and continuous knowledge sharing across all levels of an organization. 

Galaxy Consulting works with many companies to tackle the challenges facing them, knowledge management being just one. Our goal is to help our customers capture the collective wisdom in their organizations so they can drive productivity, promote innovation, and help their business succeed.