Monday, January 30, 2023

SharePoint Implementations

There are a few main considerations for governance and metrics in SharePoint implementations:

  • metrics to gauge maturity, success, adoption, compliance and progress in your program;
  • mechanisms for managing content across the full lifecycle including compliance with standards for tagging;
  • governance processes and policies to control site and content ownership.

Metrics

Metrics will give you measures of success, adoption, compliance and progress. What is measured can be managed. When no objective ways have been put in place to measure how well a program is functioning, it is not possible to correct or improve it. It is essential to have a way of monitoring how things are going so changes can be made to serve the needs of the program.

Maturity

The first metric to consider is overall maturity and capability. Maturity in the SharePoint space can be considered across multiple dimensions, from the level of intentionality and structure of a process to the formal presence and level of sophistication of governing bodies. 

Consider a maturity model in which each dimension is mapped with a set of capabilities and characteristics that indicate a general level of maturity. Based on the overall characteristics of those processes (reflected in the rating for each dimension), the maturity of the organization’s SharePoint implementation can be measured at the start of a program and throughout its life. As processes are installed, the maturity is increased. That snapshot in time is a good indicator of the state of the program and can be used as a general measure of success.
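
As a sketch of how such a model can be operationalized, the snapshot below rates a handful of dimensions on a 1-5 scale and averages them into an overall score. The dimension names and ratings are hypothetical; a real maturity model would define explicit capability criteria for each level.

```python
# Sketch of a maturity snapshot: each governance dimension is rated 1-5
# against the model's capability criteria, and the overall maturity is
# the average. Dimension names and ratings here are illustrative.

def maturity_score(ratings):
    """Average of per-dimension ratings on a 1-5 scale."""
    return sum(ratings.values()) / len(ratings)

baseline = {"governance bodies": 2, "tagging process": 1,
            "content lifecycle": 3, "search practice": 2}
print(maturity_score(baseline))  # 2.0
```

Repeating the same scoring at intervals gives the snapshot-over-time comparison described above.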

Because SharePoint success is indicated by the ability to locate information (“findability”), and findability is the result of a combination of factors, it is possible to describe those factors in terms of existing practices and processes and to benchmark the level of functionality or activity (for example, content quality measures, the presence of a process or the effectiveness of that process). One governance maturity measure is whether any governing bodies or policies are in place. Another might be the participation levels in governance meetings.

Use cases and usability

A second important measure of value includes overall usability based on use cases for specific classes of users. Use cases should be part of every content and information program, and there should be a library to access for testing each use case. Use cases are tasks that are part of day-to-day work processes and support specific business outcomes. At the start of the program, assessing the ability of users to complete their job tasks, which requires the ability to locate content, provides a practical baseline score to compare with later interventions.

User satisfaction is a subjective measure of acceptance. Although subjective, if it is measured in the same way before an intervention or redesign and again afterward, the results will show a comparative improvement or decline in perceived usability. The perception can be affected by more than design: training and socialization can have a large impact on user satisfaction.

Adoption

One simple metric for adoption is the volume of e-mail containing attachments as compared with the volume containing links. As users post their files on SharePoint and send links within messages rather than e-mailing attachments, they are clearly demonstrating use of the system. Tracking that metric as a baseline and then periodically, both department by department and company-wide, provides valuable information regarding SharePoint adoption.
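
A minimal sketch of this adoption ratio, assuming message metadata can be exported from the mail system; the field names and the link-detection rule here are assumptions for illustration, not a real Exchange or Graph API.

```python
# Sketch: compute an adoption ratio from exported message metadata.
# "has_attachment" and "body" are hypothetical field names.

def adoption_ratio(messages):
    """Share of file-sharing messages that use links instead of attachments."""
    with_attachment = sum(1 for m in messages if m.get("has_attachment"))
    with_link = sum(1 for m in messages
                    if "sharepoint" in m.get("body", "").lower())
    total = with_attachment + with_link
    return with_link / total if total else 0.0

baseline = [
    {"has_attachment": True, "body": "See attached."},
    {"has_attachment": False,
     "body": "Doc is on SharePoint: https://contoso.sharepoint.com/sites/sales"},
]
print(adoption_ratio(baseline))  # 0.5
```

Computed at a departmental and company-wide level over time, a rising ratio indicates growing adoption.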

Other adoption metrics include the number of collaboration spaces or sites that are set up and actively managed, the numbers of documents uploaded or downloaded, the degree of completeness of metadata, the accuracy of tagging, and the number of documents being reviewed based on defined lifecycles.

It is important to have self-paced tutorials regarding your particular environment and to monitor the number of people who have completed this kind of training. Participation in “lunch-and-learns,” webinars or conference calls on the use of the environment are other engagement metrics that can be tracked.

Socialization includes a narrative of success through sharing stories about the value of knowledge contained in knowledgebases, problems being solved and collaboration that leads to new sales or cost savings. Publicizing new functionality along with examples showing how that functionality can be used in day-to-day work processes will help people see the positive aspects of the program and help to overcome inevitable challenges with any new deployment. Those successes need to be communicated through different mechanisms and by emphasizing themes appropriate to the audience and process. An application for executives may not resonate with line-of-business users.

Alignment with business outcomes

A more challenging but also more powerful approach to metrics is to link SharePoint functionality to a business process that can be impacted and measured. One example is a proposal process: salespeople can sell more when they turn proposals around more quickly, which allows more selling time or reduces the cost of highly compensated subject matter experts. Employee self-service knowledgebases can be linked to help desk call volume. Those metrics are more challenging because they require a model that predicts the impact of one action on another, or at least an understanding of the causality involved, but they also can be a strong indication of success.

Tagging processes

The amount of content that is correctly tagged provides a useful measure of adoption and compliance. How do you know if content is tagged correctly? Taking a representative sample of content and checking whether tagging is aligned with the intent of the content publishing design will detect inconsistencies or errors in tagging. 

The percentage of content that is tagged at all is an indicator. One organization left a default value that did not apply to any content. The first term in the dropdown was "lark". If users left that value in, they were not paying attention and the quality of tagging was impacted. Measuring the percentage tagged with "lark" allowed for an inverse indicator. When the "lark" index declined, the quality increased. The quality of content can also be measured with crowd-sourced feedback. Up-voting or down-voting content can trigger workflows for review or boosting in ranking.
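
The "lark" inverse indicator can be computed as a simple rate. The sketch below assumes tagged items are exported with a single `topic` field; both the field name and the data are illustrative.

```python
# Sketch: measure the share of tagged items still carrying the default
# dropdown value ("lark" in the example above). A falling rate is an
# inverse indicator of tagging quality. Field names are illustrative.

def default_tag_rate(items, default_value="lark"):
    """Fraction of tagged items whose topic is still the default value."""
    tagged = [i for i in items if i.get("topic")]
    if not tagged:
        return 0.0
    return sum(1 for i in tagged if i["topic"] == default_value) / len(tagged)

sample = [{"topic": "lark"}, {"topic": "contracts"},
          {"topic": "lark"}, {"topic": "policies"}]
print(default_tag_rate(sample))  # 0.5
```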

Change triggers

Metrics tell the organization whether something is working or not. But what action is triggered? A metrics program has to lead to action: a course correction to improve performance. The change cycle can be informed by interaction analysis that measures the pathway through content and how it is used (such as impressions or reading time).

If users exit after opening a document, that exit could be because they found their answer or because the content was not relevant. It is only by looking at the next interaction (another search, for example, or a long period of reading the document) that it can be determined whether the content was high value or whether it failed to provide an answer. Based on this analysis, it is possible to identify a remediation step (create missing content, fix a usability issue, etc.).
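
One way to sketch this next-interaction analysis in code; the event shapes and the 60-second reading threshold are assumptions for illustration, not fixed rules.

```python
# Sketch: classify a document exit by the user's next interaction.
# Event structure and the 60-second threshold are illustrative assumptions.

def classify_exit(next_events):
    """Infer whether an opened document answered the user's question."""
    if not next_events:
        return "unknown"
    nxt = next_events[0]
    if nxt["type"] == "search":
        # The user searched again: the document likely did not answer.
        return "possible content gap"
    if nxt["type"] == "read" and nxt.get("seconds", 0) >= 60:
        # Sustained reading suggests the content was high value.
        return "likely high value"
    return "inconclusive"

print(classify_exit([{"type": "search"}]))                # possible content gap
print(classify_exit([{"type": "read", "seconds": 120}]))  # likely high value
```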

Search interactions also provide clues for action. When top searches return no content, the wrong content or too much content, the root cause can be addressed with an appropriate action (improve tagging, create content, tune the ranking algorithm or search experience with best bets, auto-complete, thesaurus entries, etc.).
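
A triage table for these symptoms might look like the sketch below. The thresholds and action wording are illustrative, not prescriptive.

```python
# Sketch: map top-query symptoms to candidate remediations.
# Thresholds and suggested actions are illustrative assumptions.

def search_remediation(result_count, clicked_any):
    if result_count == 0:
        return "create content or add thesaurus/synonym entries"
    if not clicked_any:
        return "improve tagging or promote a best bet"
    if result_count > 100:
        return "tune ranking or tighten metadata filters"
    return "no action needed"

print(search_remediation(0, False))   # create content or add thesaurus/synonym entries
print(search_remediation(250, True))  # tune ranking or tighten metadata filters
```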

By reviewing and troubleshooting content interaction metrics, patterns may emerge that point to problems with the publishing process or compliance with tagging guidelines.

Content processes and governance policies

SharePoint governance consists of decision-making bodies and decision-making mechanisms for developing and complying with rules and policies around SharePoint installations. This is the glue that holds SharePoint deployments together. Requests for new team sites and collaboration spaces need to go through a process of review to ensure that redundant sites are not created. Abandoned sites need to be retired or archived. Content needs to be owned and reviewed for relevance. If content is not owned and abandoned sites are not actively removed, the installation becomes more and more cluttered.

Without clear guidelines for how and where to post content and ways to apply metadata tags, users will tend to post content haphazardly, and eventually libraries will be cluttered with junk. Over time, people will dump content in SharePoint because they are told they need to post it for sharing but no one will know how to find valuable content. Site administrators must understand the rules of deployment and control how users are utilizing SharePoint to prevent sprawl and keep the system from becoming cluttered with poorly organized content.

Among the chief goals of governance is to prevent SharePoint from becoming a dumping ground by segmenting collaboration spaces from content to be reused and enforcing standards for curation and tagging.

Consider that every element of SharePoint has a lifecycle and that this lifecycle has to be managed. Those elements range from design components that are created based on the needs of users and rigorous use cases (including taxonomies, metadata structures, content models, library design, site structures and navigational models), to the sites themselves that are created according to a policy and process and disposed of at the end of their life, to the content within sites that needs to be vetted, edited and approved for broad consumption. All of those are managed through policies, intentional decision-making and compliance mechanisms developed by a governance group.

SharePoint governance needs to be a part of the overall information governance program of the enterprise. It is part of content and data governance with particular nuances based on how the technology functions. In fact, many tools are designed into the core functionality of SharePoint to help with governance operationalization. The overarching principle is to consider the audience and the breadth of audience the content is designed to reach.

One analogy is that of an office structure. The lobby, which has a wide audience, limits what can be displayed. The lobby environment is visible to all, so it needs to be managed rigorously. But walking into a cubicle in the office building will reveal the personality of its inhabitant: personal photos, papers on the desk, individual and idiosyncratic organizing principles. A messy desk perhaps. A shared work area might be someplace between the orderliness of the lobby and the messiness of the individual workspace.

Those gradations are the local, personal and departmental level spans of control analogously managed in SharePoint. Information that has an enterprise span needs to be carefully managed and controlled. In a collaboration space, things can be a little more chaotic. In fact, the one thing to keep in mind is that content has a different value depending on the context and span and will increase in value as it is edited, vetted, tagged and organized for specific audiences and processes.

Segment the high-value content by promoting it from a collaboration space to a shared location and apply the tags that will tell the organization that it is important. Separate the wheat from the chaff. Manage high-value content and differentiate it from interim deliverables and draft work in process. Throw away the junk or take it out of the search results so they are not cluttered with low-value information.

Many people complain that they can’t find their content in SharePoint and they want search to work like Google. The answer is to put the same work into managing and processing content as search engine optimization departments do for web content, and the search engine will return the results that you are looking for.

SharePoint requires an intentional approach to design, deployment, socialization, maintenance and ongoing decision-making. The rules are simple: there is no magic. They need to be applied consistently and intentionally to get the most from the technology.

SharePoint Beyond the Firewall: Put Your Content to Work

SharePoint is undoubtedly one of the most important and widespread enterprise productivity tools, used by an estimated 67% of medium-to-large organizations, according to research firm AIIM. Many companies are heavily invested in SharePoint, and for good reason: it’s a highly adaptable solution that can be effective for content management and file sharing across a range of use cases. But SharePoint does have its limitations.

Where SharePoint struggles is when content needs to be securely shared outside the firewall and consumed by remote workers, partners, or suppliers. Extending SharePoint for external needs introduces IT challenges, including content protection and security, user governance and support, and initial and ongoing infrastructure and license costs.

This creates a challenge for organizations with sizable SharePoint investments and large populations of users. Rather than replacing SharePoint, it’s more practical to build on existing investments to provide secure, external collaboration and document sharing, without adding unnecessary complexity and cost to IT infrastructure, or putting sensitive or regulated content at risk.

According to AIIM, security and control are the top concerns of SharePoint administrators since it is routinely used to manage highly sensitive and regulated content: 51% of users share financial documents, 48% legal and contractual documents, and 36% board of directors and executive communications.

Leverage Your SharePoint Investment for External Document Sharing

As companies start sharing sensitive documents with collaboration partners, they need to maintain tight access control. SharePoint control over document access is not as well defined as many large enterprises might want. Attributes to consider for secure, seamless content sharing that complements your SharePoint investment include:
  • Secure, policy-based document-sharing control.
  • Agile response, and easy set-up and adoption.
  • Low, up-front investment.
  • Ability to leverage existing systems without adding new complexity.
  • Provisioning and support for a community of external users.

Cloud-based solutions now meet all of these demands, unburdening the in-house IT infrastructure while still allowing internal users to continue using the familiar SharePoint-based platform and applications, with little or no change or added overhead.

Maintaining Control Over the Content Lifecycle

Externalizing SharePoint is one thing. Having control over the content once it’s left the firewall is another. For comprehensive control, you’ll want to consider tools with the following:
  • Access rights for external partners. Given the large number of potential collaboration partners and the number of documents to share, you’ll need granular and dynamic document administration.
  • Encryption. As soon as SharePoint documents pass beyond a firewall, they need to be encrypted and remain encrypted both as they move over the internet and while they are at rest within the external document sharing application. Seamless encryption means hackers can’t access the data within a document at any stage.
  • Virus protection. Avoid picking up file-based viruses that could penetrate your network while content is in motion, and shared and accessed from various geolocations and devices.
  • Information Rights Management (IRM). IRM services let IT departments provide secure document access to any device—PC, smartphone, tablet—while dynamically managing content rights even after a document has been distributed. Such systems have the ability to let users view without downloading documents, and prevent printing or screen capture. Ideally, IRM should be plug-in free so that it is frictionless to users. Finally, digital watermarking identifies a document as confidential and also embeds in the document the name of the person doing the download. This helps ensure that the user will be extra careful not to lose or leak the document.
  • Monitoring and auditing. Know which people are looking at what documents, for how long, and create audit reports from this information. This verifies compliance with data privacy and other relevant regulations, such as the Sarbanes-Oxley Act.

These security, compliance, and information governance capabilities should be accessible without requiring additional SharePoint software customization or introducing a new user interface.

Galaxy Consulting has over 15 years experience in SharePoint implementations and management. Please contact us for a free consultation.

Tuesday, December 27, 2022

Seven Realities of Online Self-Service

It is very important to revitalize the self-service experience offered on customer-facing websites in order to keep pace with evolving consumer expectations. There are seven key realities of modern online service that expose the gap between customer expectations and website self-service performance; understanding them shows how you can take steps to close that gap starting now.

1. Customers have grown tired of old online help tools. Customer satisfaction with today's most common web self-service features is abysmal and getting worse.

As more companies rectify this by deploying next-generation self-service solutions and virtual agents, fewer customers will tolerate antiquated self-service help tools online.

2. Customers now expect a superior experience online, not just a good one. Exceptionally positive online experiences are now setting the bar for what customers expect when they visit virtually any web site in search of answers and information.

3. Consumers are impatient and protective of their time. Consumers cite "valuing my time" as the most important thing a company can do to deliver a good online customer experience. Yet many web sites are complex, hard to navigate and filled with content that provides multiple possible answers rather than a single, swift path to resolution.

4. Customer service has gone mobile. Mobile phones are now ubiquitous. Convenience and ease-of-use are the hallmarks of these mobile form factors, and web sites that offer experiences contrary to these attributes will only raise the ire of today's increasingly impatient and unforgiving mobile consumer.

5. Social media is increasingly embraced as a customer service tool. Delivering a consistent service experience across multiple channels is critical, as consumers are not shy about using social media sites to publicly complain and vent frustration about any interactions with companies that fail to satisfy them.

6. It's not just your younger customers who prefer to get their answers online. In fact, consumers of all ages are equally likely to prefer online channels for customer support.

7. Dissatisfaction online = hijacked revenues. One of the most appealing benefits of delivering a positive experience in the web channel is the opportunity for organizations to provide information that supports and encourages purchase decisions. Online, the segue from a customer service conversation to a purchase consideration conversation can be a very natural and systematic progression. This progression is thwarted, however, the moment a self-service experience fails to satisfy.

The impact of the self-service experience on revenues should not be underestimated. Customers are very likely to abandon their online purchase if they cannot find a quick answer to their questions.

These seven trends underline the urgent need to revitalize the online service experience offered by most companies. Online self-service is in need of resuscitation and useful web self-service and virtual agent technologies that can deliver an enhanced customer experience are currently underutilized.

Where To Go from Here?

What should your organization do as the first step toward improving the online customer experience? Begin with an honest and objective assessment of the self-service experience your website offers today. Looking at your customer-facing website, ask yourself these three questions.

1. Is there a single, highly visible starting point for self-service activity? Today's consumers are task-oriented when they go online. Your customers want their self-service journey to begin immediately and move swiftly to completion. Looking at your home page or most highly trafficked customer service page, ask yourself if the average customer would be able to identify the clear starting point for any customer service-related task in a matter of seconds. Any required navigation or clicking through to new pages is viewed as a time-waster and is out of alignment with their expectations.

2. Is issue resolution generally a multi-step, or a single-step activity? When looking for information online, customers want a single accurate answer that's accessible in one step. Any content page that offers more than one alternative answer, or path to an answer, requires your customer to take additional steps for sorting, scanning content and/or comparing answers. On your web site, when results are served, is the customer presented with a single answer, or multiple results to sift through?

3. How will you measure how your site is performing in this area? A quantitative assessment of your self-service performance is the first thing you will need to establish for any improvement to the self-service experience.

Optimizing self-service experience in organizations' web sites is extremely important and will help to increase revenues. Contact us today for a free consultation.

Tuesday, November 29, 2022

E-Discovery and Information Governance

More and more companies are operating throughout the world, so the impact of differing requirements for e-discovery is increasing, especially those relating to privacy. The rules tend to be much more rigorous outside the United States, particularly in the European Union.

Europe has adopted the General Data Protection Regulation (GDPR), which was promulgated in April 2016 with a two-year implementation timeframe; it became enforceable in May 2018. It regulates the manner in which data can be collected and moved across international borders. The regulation makes an e-discovery company or law firm responsible for any compliance failure. If there is a breach, the data handling entity can be held liable for up to 4% of its worldwide gross revenues, whether the breach was intentional or not.

A number of other trends are occurring in international litigation that are having an effect on e-discovery. Litigation is beginning to be seen as a business strategy in Asia as evidenced by the aggressive litigation some Korean electronics companies are taking with regard to protecting their IP. Those companies are seeing the potential benefits of using litigation as a method to protect or monetize their IP, which results in greater requirements for e-discovery.

Other factors are also driving the demand for e-discovery. The United States was the first country to carry out antitrust investigations that reached beyond its borders, and there is a domino effect with other countries now doing the same thing. These government investigations are often followed by class action lawsuits, creating additional challenges for the multinational companies.

The international nature of that litigation also creates more issues with respect to moving data across borders. Therefore, it is all the more important for companies to be aware of local laws and customs regarding privacy.

One question resulting from the proliferation of data is whether it will become a more frequent target of e-discovery.

Potential issues abound including whether personally identifiable information (PII) is involved. Most information is stored in structured databases and it could be used in litigation to make a claim that an individual was doing something at a certain time. The information may or may not be encrypted; it could also involve health data from wearable devices, for example, that could be considered PII. Organizations may need to take a step back and think about who the custodian is, whether the data could be part of e-discovery and whether it is being appropriately protected.

Moving to the cloud

Every organization has information stored across a multitude of systems, computers, shared drives, repositories, and now a lot of this information is moving to the cloud. This is going to require a new approach and new technologies in order to address the challenges arising from the growing volume and format of information being generated.

Managing cloud-based content may be new to an organization, and as a result there might be uncertainty about the risks involved and the various approaches to mitigate them.

Most cloud repositories lack information governance. This means that an appropriate architecture and supporting processes have to be put in place to ensure that content is properly governed and managed. By joining a cloud-enabled information governance platform with those cloud content repositories, an organization can make its cloud-based repositories compliant with e-discovery requirements.

SaaS-based delivery models for e-discovery are becoming more prevalent. The move to Office 365 is another part of this equation. With more data in the cloud, it makes sense to have cloud-based e-discovery solutions. The established benefits of SaaS delivery such as scalability, faster release of new features and simpler interfaces apply to e-discovery as well.

SaaS delivery also offers simpler inclusive cost models and, in general, lower costs than on-premise and legacy hosted products. 

Information governance should be deployable within a traditional IT infrastructure, a cloud-based environment, or a hybrid of the two. Information governance is rapidly moving toward an enterprise service model that enables organizations to deploy shared services across a complex IT infrastructure, eliminates dependence on individual users, and ensures uniform governance across all applications and systems.

In order to remain competitive and control costs, organizations must consider information governance as a service. Technologies with a flexible central policy engine capable of managing the challenges of complex, federated governance environments are going to be the ones that enable organizations to make the most strategic use of information. These technologies have an enforcement model that is not tied to a specific store or repository but leverages standards to enable automatic enforcement across all systems, repositories, applications, and platforms.

Sunday, October 30, 2022

Viewing Documents in the Cloud

The adoption of cloud technology has rapidly increased in many companies and it will continue to grow. The range of benefits offered by using cloud services and the maturity of cloud vendors is driving adoption at the global level.

More and more companies are using cloud technology and managed services to accelerate business initiatives, allowing them to be more agile and flexible and to reduce costs. Companies are using cloud-based storage technology for corporate records, and this is raising new challenges.

Implementing a solution that views documents stored in a cloud-based system, such as a content management system, engineering drawing repository or a technical publication library, can present some challenges. 

There are four common challenges that you could face when implementing a cloud-based document viewing system: working with multiple file formats; variations in document size; browser compatibility with HTML5; and viewing documents on mobile devices. Each of these challenges requires consideration to promote a good experience for the end user.

1. Multiple file formats

First, the documents that you want to view may be in many different formats. They may be PDF, TIFF, Word, Excel, PowerPoint, CAD or many others. The device that is being used to display the content often may not have the correct software needed to display the document or image. 

This issue is further compounded by the variety of devices that the content will be viewed on. A common solution is to convert the files on the server to a generic format that can be viewed by many devices, but this presents other issues. For example, most browsers and devices today can display JPEG or PNG formats, but both of these are raster image formats. If a text-based document such as a Word file is converted to an image, the display quality deteriorates when a page is zoomed and you lose interactivity with the content.

2. Document size

The second challenge is the size of the document, either the number of pages or the physical size of the document. Downloading the entire document can take a long time depending on available bandwidth. 

This is especially an issue on mobile devices with slow or crowded data connections. A system that provides a quick initial view of the first pages of the document allows a user to begin reading the content while the rest of the document downloads. This increases worker productivity and can even reduce traffic if the user quickly determines that they do not wish to continue with the document.
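
The progressive-view idea can be sketched as batched delivery: the first pages are sent immediately so the viewer can render while the rest of the document downloads. The batch size and page representation below are illustrative.

```python
# Sketch: deliver pages in batches, smallest latency first, so a viewer
# can render the opening pages while the rest still downloads.
# Batch size is an illustrative assumption.

def stream_pages(pages, batch=3):
    """Yield successive batches of pages."""
    for i in range(0, len(pages), batch):
        yield pages[i:i + batch]

batches = list(stream_pages(list(range(7))))
print(batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

A real viewer would render each batch as it arrives and let the user cancel the remaining download.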

3. Browser compatibility

The third challenge is that there are various browsers used to access the Internet and they do not all work the same. The four major browsers are Chrome, Internet Explorer, Firefox and Safari. Each browser has differences in how they operate and how the code works under the covers. 

Document viewing technology is dependent on some level of support within the browser. For example, some browsers support Flash and some do not. HTML5 is only supported on recently updated versions of some browsers, so older browsers can create challenges.

Even where HTML5 is supported, different browsers have different levels of support. Sometimes the differences are subtle and only cosmetic, while others, like complex formatting, can cause significant display issues.

4. Mobile viewing

The fourth challenge relates to viewing documents on mobile devices. With today's on-demand business world, it is imperative to be able to support viewing documents on mobile devices. But not all the devices behave the same way, and different operating systems are used on the various devices. 

Without a consistent mobile viewing platform, separate viewing apps may need to be installed on each device and results will vary. Using a single technology that supports many document types is very important in a mobile environment.

Is HTML5 the Answer?

HTML5-based viewers can help resolve some of the challenges associated with browsers and mobile devices. However, there is a misconception that the adoption of HTML5 is the answer to all problems. It is not. 

The four major browsers have been implementing HTML5 over time, and how much of the standard is supported varies greatly with the version of the browser. Older versions of the browsers that are used in many governments, educational institutions and well-established businesses do not support HTML5.

More and more organizations are moving to solutions where documents are stored in cloud-based systems. These challenges are examples of what you might face when deploying to your customers. Understanding that these common challenges are a possibility and preparing for them before you encounter them is important. 

Providing a single platform with multiple viewing technologies, including HTML5, Flash and image-based presentation, can help ensure that all users can view documents, regardless of their specific device, browser or operating system. With that knowledge you can successfully promote a good experience for your users and overcome the major pitfalls faced by so many organizations today.

Thursday, September 29, 2022

Intelligent Search Goes Beyond the Web

Search is a crucial component of the modern workplace. The ability to find information quickly and efficiently contributes not only to business success but also to employee satisfaction.

It is frustrating to spend time looking for information when you could be completing a task.

Search has become ingrained as part of everyday life.

Pre-Internet Findability

Today, there’s no need to pull a volume of an encyclopedia off a shelf or even leave the room to find answers to questions. One can simply use a phone to search for answers. Google and Wikipedia have redefined what it means to search. But have they made search any more intelligent? They certainly satisfy the itch to correct people on event dates, geography, and historical figures.

When it comes to the workplace, however, search encompasses a great deal more than fact checking, and intelligent search goes well beyond the web.

Search has gone mainstream. People use the word “search” when they want to locate a retail store or book a hotel. That simplistic notion of search does not carry over particularly well to finding information essential to doing your job.

Teasing Out the Meaning of the Search

Part of moving from a simple Google search to a more sophisticated model involves language. 

Standardizing content in one format (for example, high-definition PDFs) creates better visibility and fewer irrelevant search results. You may be able to avoid overly complex algorithmically based search engines by improving content processing, eliminating duplication, and using a single taxonomy.

Use better metadata and better data.

Almost anyone looking at search within the enterprise stresses findability. If you’re looking for the company’s holiday schedule, you don’t want the one from three years ago; you want the most recent one.

Similarly, if you are building a web site for external use, you want potential customers to find what you are selling. You want to back up your sales efforts with excellent customer service. This is another opportunity for intelligent search, since customers increasingly prefer to help themselves without using an intermediary. They like self-service, but only if it answers their questions.

Semantics plays a role in customer service. Its analysis of the contextual meaning of words enhances the quality of answers. For example, customers might enter “How much will it cost me…” while your content uses phrases such as “What is the price…” To be findable, your customer’s search query must be translated into your words. A synonym dictionary can help resolve this mismatch.
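As a minimal illustration of this idea (the synonym table and query terms are invented), a synonym dictionary could expand a customer query so that "cost" also matches content phrased in terms of "price":

```python
# Hypothetical sketch: expand a customer query with a synonym
# dictionary so "cost" also matches content written as "price".
SYNONYMS = {
    "cost": ["price", "fee", "rate"],
    "buy": ["purchase", "order"],
}

def expand_query(query: str) -> set[str]:
    """Return the set of terms to match, including synonyms."""
    terms = set()
    for word in query.lower().split():
        terms.add(word)
        terms.update(SYNONYMS.get(word, []))
    return terms

print(expand_query("how much will it cost"))
```

A production search engine would apply this expansion at index or query time rather than in application code, but the mapping principle is the same.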

The definition of intelligent search goes beyond findability. A search engine should know what you need and what your colleagues found valuable, and supply it to you when you need it.

For the Coveo search engine, the power and sophistication of machine learning technology is the driving force behind intelligent search. Intelligence springs from usage and analytics data, along with a multitude of other factors, all hosted and managed by companies such as Coveo.

Regardless of how you define intelligent search, it’s clear that enterprise search requirements go well beyond what Google or Wikipedia can provide. Different approaches to intelligent search provide much to think about when implementing, redesigning, and rethinking enterprise search. Intelligent search goes well beyond what searching the web looks like.

Improving Search and Decision-Making with Semantics

We’ve all heard about how Google’s proverbially simple search form has led professionals to expect similar simplicity from search solutions provided by corporate IT. Except this model doesn’t really work, and it’s costing millions of dollars every year in time wasted when professionals don’t find, and have to re-create, information.

The reason it doesn’t work is that while every organization has a specific worldview, search engines are essentially blind. A worldview is the inventory of business objects that an organization cares about (products, geographies, customers, processes, etc.) and their relationships, which are typically captured in a taxonomy or ontology. While professionals implicitly want to search for information according to their worldview, search engines don’t offer them a practical way to do so.

Semantics Provides Meaning

The missing piece in this puzzle is a “meaning engine” that would understand unstructured content through the lens of your organization’s worldview. It exists: it’s called a semantic enrichment platform.

A semantic enrichment platform ingests your organization’s taxonomy or ontology and applies it to your content at scale. Leveraging natural language processing, it understands your content the same way humans do. It recognizes topics that are relevant to your business, as well as entities of interest, their attributes, and their relationships, and converts them into structured data that can be used standalone or as metadata describing your content deeply and consistently. In energy, for example, entities of interest might include commodities, trading companies, and the countries where they do business.
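A toy sketch of taxonomy-driven enrichment might look like the following. The taxonomy, entity types, and company name ("Acme Energy") are all invented; a real platform uses natural language processing rather than literal phrase matching:

```python
# Hypothetical sketch: scan text for known business entities from an
# invented taxonomy and emit structured metadata, the way a semantic
# enrichment platform would at much larger scale.
TAXONOMY = {
    "natural gas": ("commodity", "Natural Gas"),
    "crude oil": ("commodity", "Crude Oil"),
    "acme energy": ("company", "Acme Energy"),  # fictitious company
}

def enrich(text: str) -> list[dict]:
    """Return entity annotations found in the text."""
    lowered = text.lower()
    return [
        {"type": etype, "label": label}
        for phrase, (etype, label) in TAXONOMY.items()
        if phrase in lowered
    ]

doc = "Acme Energy announced a new natural gas contract."
print(enrich(doc))
```

The resulting annotations can be stored as metadata on the document or fed into a downstream knowledge base.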

Better Metadata Accelerates Search

When used as metadata, this data acts as an eye-opener for search engines that can finally see your content through your own worldview. This redefines the search experience by offering end-users new tools to locate what they are looking for.

Faceted Navigation enables end-users to search by business entity or topic (for example by company name, commodity type or region), helping to find the most relevant content in just a few clicks.

Links to relevant information provide convenient access to structured information about entities of interest so users don’t have to collate it themselves. For example, each company name could be linked to data about its activities. Topic pages concentrate all information about a specific topic in one convenient access point so users don’t need to sift through all other materials to access it. A topic page on electricity would, for example, filter out information related to other energy sources.

Content recommendation uses metadata to surface other documents with similar topics, promoting discovery of relevant information. A document on a merger in the gas sector might point to reports of other, similar operations.

Such mechanisms significantly accelerate and simplify search tasks, offering not only time and cost savings, but also more informed decision-making.
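As an illustration of the content-recommendation mechanism described above, here is a toy sketch in which documents (with invented IDs and topic tags) are ranked by how many topics they share with the one being read:

```python
# Hypothetical sketch: recommend documents that share topic metadata
# with the current one, ranked by number of topics in common.
DOCS = {
    "doc1": {"merger", "gas"},
    "doc2": {"merger", "electricity"},
    "doc3": {"gas", "pipeline"},
    "doc4": {"solar"},
}

def recommend(doc_id: str, top_n: int = 2) -> list[str]:
    """Return up to top_n documents sharing at least one topic."""
    topics = DOCS[doc_id]
    scored = [
        (len(topics & other_topics), other)
        for other, other_topics in DOCS.items()
        if other != doc_id
    ]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [doc for score, doc in scored if score > 0][:top_n]

print(recommend("doc1"))  # ['doc2', 'doc3']
```

Real systems would weight topics by specificity and recency, but the core mechanism is overlap in semantic metadata.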

Better Data Improves Decision-Making

But semantically extracted information can also be used for its informative, rather than descriptive, value: not as metadata, but as standalone data. This opens the door to applications that address the above blindness at a deeper level, providing higher-level and faster insight into the subject matter at hand.

One of semantics’ capabilities is to recognize not only entities, but also their relationships (often expressed as triples). One such relationship might for example indicate that company A is a “supplier of” company B. Information value from these relationships may come into play under a variety of scenarios.

Knowledge Bases (or Graphs) integrate such structured information at scale so they can then be queried. One might contain, for a given commodity, links to all suppliers.
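A minimal sketch of such a triple-based knowledge base, using invented company names, could look like this:

```python
# Hypothetical sketch: a tiny knowledge base of
# (subject, predicate, object) triples, queried for all suppliers
# of a given commodity. Names are illustrative only.
TRIPLES = [
    ("Acme Energy", "supplier_of", "natural gas"),
    ("Borealis Gas", "supplier_of", "natural gas"),
    ("Acme Energy", "supplier_of", "electricity"),
    ("Borealis Gas", "headquartered_in", "Norway"),
]

def suppliers_of(commodity: str) -> list[str]:
    """Return all subjects linked to the commodity by 'supplier_of'."""
    return sorted(
        s for s, p, o in TRIPLES
        if p == "supplier_of" and o == commodity
    )

print(suppliers_of("natural gas"))  # ['Acme Energy', 'Borealis Gas']
```

Production knowledge graphs use dedicated triple stores and query languages such as SPARQL, but the data model is the same.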

Complex Reasoning can be performed on these knowledge bases, enabling business applications to provide higher degrees of automation in decision-making tasks, for example, automatically balancing supply by identifying alternative suppliers when one announces production issues.

Analytics and Visualizations provide dashboards that sit on top of the data and reveal its meaning on a more holistic level. For example, a network graph could plot all company relationships in natural gas, indicating which companies might be exposed to increasing prices in a given region.

Lastly, semantics can also be used to deliver Question Answering Systems that offer users a way to get answers to questions formulated in natural language (“Which electricity providers have the most diversified supply chain?”) instead of engaging in search.

Semantics Provides Faster Insights and Better Decisions

As can be seen from the examples above, semantics is the “meaning engine” that ensures that users can overcome search’s blindness and access information through the specific worldview relevant to their work. But this engine brings meaning to more than your search engine: it is your information management as a whole that benefits, bringing the promise of smarter applications that efficiently handle more of the groundwork, accelerate time-to-insight and support better decisions.

Self-learning

Intelligent search is no longer a nice-to-have feature in organizational information systems; it is a critical part of how businesses are transforming the way they work. Intelligent search goes beyond findability and information access. Like a trusted advisor, intelligent search knows what documents you need for your tasks and which articles your colleagues found most valuable and would be useful to you too, and simply gives everyone the information they need, when they need it. And the power and sophistication of machine-learning technology is the driving force behind intelligent search.

What Is Machine Learning?

Machine learning learns from and makes predictions on data. Applied to search, every time a user performs an action on your web site or support portal, he or she provides data about what is useful. Did they submit a support ticket? That means the articles they just read did not help. Do most people spend only one minute with a document that would normally take 10 minutes to read? That’s a sign that the content isn’t useful, or perhaps it’s too difficult to understand. With machine learning, all of that information and more can be used to make data-driven predictions and decisions without manual intervention.

How Will Machine Learning Make Search Intelligent?

When someone submits a search query or clicks on the third search result, they are implicitly telling you what is most relevant. As your online community members download content, visit various web pages, watch videos, start an online chat with your support agents or submit support tickets, their behavior provides information on the relevance of the content they come across. This behavioral data, along with search behavior that signals intent, is captured by search usage analytics.

Intelligent self-learning search engines powered by machine learning can leverage such usage analytics data to continuously self-learn. This improves search relevance and hence, the self-service experience on your community in many ways. For example, automatic fine-tuning and ranking of search results based on machine-generated predictions about what is most useful improves the experience of all community members.
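As a simplified sketch of this fine-tuning (the click and impression counts are invented, and real systems use far richer signals than raw click-through rate), results could be re-ranked like this:

```python
# Hypothetical sketch: re-rank search results by click-through rate
# gathered from usage analytics, so documents users actually found
# useful rise to the top.
clicks = {"docA": 5, "docB": 40, "docC": 12}
impressions = {"docA": 100, "docB": 100, "docC": 100}

def rerank(results: list[str]) -> list[str]:
    """Order results by observed click-through rate, highest first."""
    def ctr(doc: str) -> float:
        shown = impressions.get(doc, 0)
        return clicks.get(doc, 0) / shown if shown else 0.0
    return sorted(results, key=ctr, reverse=True)

print(rerank(["docA", "docB", "docC"]))  # ['docB', 'docC', 'docA']
```

The point is that ranking adjusts itself from usage data, with no administrator writing boosting rules by hand.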

Without machine learning and analytics data, administrators need to fine-tune search rankings manually: create boosting rules, add synonyms, promote documents, etc. Because relevance is an ever-evolving process and the document that was the most relevant last week may no longer be relevant today, it is almost impossible for administrators, especially those at large organizations or those with multiple product lines, to keep pace with the rate of change.

With machine learning, highly manual and complex enterprise search can be transformed into intelligent, self-learning and self-tuning search.

Why Now?

Machine learning has been around for a long time. It used to be very complex to deploy and manage. Collecting usage data, managing databases, provisioning servers, developing and maintaining machine learning algorithms and using machine-learning predictions in the search system were typically very complex. This would require data scientists, database experts and developers. Only the biggest organizations could afford that. But the fast adoption of cloud solutions has made the use of machine learning much easier, cheaper and more attainable. In particular, the recent trend towards cloud-based enterprise search is a game changer.

What Is the Impact of Cloud-Based, Self-Learning Search?

With cloud-based, self-learning search, all the required components are hosted and managed by the vendor, such as Coveo. Because of its scalability, it has the potential to change the customer service industry the same way machine learning has impacted e-commerce and social networks. 

In the past, the high cost of using and managing machine-learning systems meant that machine learning was rarely used for traditional enterprise search or self-service support sites. The cloud makes that affordable to all customers and to all departments, especially when deploying self-learning search on self-service support sites and on communities, because of its ability to scale and handle large volumes of data.

Tuesday, August 30, 2022

Importance of Information Governance

The fact is that most people will either embrace or resist information governance depending on their individual situation at a certain point in time. Information governance is closely allied with privacy and security. Knowledge is an internal currency that needs to be managed wisely, which is where a governance procedure is helpful.

It is entirely possible that someone might curse a rule as arbitrary while simultaneously recognizing the necessity of it from a security standpoint. Someone else could easily applaud relevant search results without actually realizing the role information governance played in facilitating that relevance. And there’s always “that guy” who complains regardless of whether the complaint is justified.

Information governance is an important and necessary component of modern organizations’ information infrastructure. It is our job, as information specialists and knowledge managers, to combat any negativity about information governance within our organizations and to manage expectations. Information governance is an integral part of both information technology and knowledge management. Together, they bring information governance forward onto that center stage.

With almost everyone in an organization contributing content, the role of information governance is ever more critical. Information governance is hardly an impediment to productivity; it’s actually a productivity enhancer. Risk management in the form of information governance, data security processes, and legal compliance stands center stage for organizations of all sizes and types.

Information governance is not just a good idea, created by computer geeks or imposed by legal departments. It is tied to international legislation about privacy and that affects all organizations, whether they are involved in international trade or not. 

Companies should be looking at information governance not in reaction to legislation but as an opportunity to reflect on what is good information life cycle management. 

Take archiving, for example. If data is archived in five different places, your potential exposure is multiplied by five. It’s also harder to determine which version is the most current and the most authoritative. Whether protecting your data comes first or having a streamlined archival system comes first is a chicken-and-egg question. The fact is it doesn’t matter—they can happen simultaneously and be of equal benefit to your organization.

It is a KM responsibility to accentuate the positive about information governance. It is good data management, not simply a bunch of random rules. Since it makes good business sense and should be presented as such, we need to foster a culture of compliance and to have both top down and bottom up support. We should make it easy for people to do the right thing, remove obstacles, build a stakeholder community, and incentivize them to comply. Removing obstacles, however, should not mean removing all obstacles. Policies should still restrict access to those qualified to view the data.

Retention policies should recognize that information has a beginning, middle, and end. It is created, collected, used internally, shared inside the company and externally, and then it should have a defined disposition. Disposition might mean it is archived, but it might also mean it is destroyed.

Organizations should comply with legal requirements and not dispose of information too quickly. On the other hand, hoarding information does not help with risk avoidance, either. Even if you think that information might have long-term value, perhaps for identifying trends, you may not want it sitting in your content management system. Archiving it and getting it out of a production environment could be the answer, but if and only if you are not saving it simply for the sake of saving it.

Life cycle management of information starts with thinking about how information is created or collected. Did it come from internal sources? Was it gleaned from an external repository? Was it provided by customers? This will differ from company to company and even from one industry sector to another. Next is access policies: who is authorized to access and use the data. 

The point is to strike a balance between being punitive to the point of inhibiting compliance and restricting access to preserve privacy and security. Sharing information is an important component of modern information management and the cornerstone of KM, but excessive sharing creates more problems than it solves and sharing across national borders raises potential legal issues. Retention policies and disposition practices are integral to good information governance, as is the understanding of what can and should be shared.

Data without information governance practices in place can create operational, privacy, and security gaps that put company assets at risk. Once you know what your data is, where it is, who can access it, and who has accessed it, you can then make decisions about where it should reside. Data in a highly secure system may need fewer controls than data located in a cloud environment or on a broadly available corporate intranet or website.

Depending on your information governance rules, data can be a valuable asset like gold or it can become toxic like asbestos. A true best practice approach requires a sustainable ecosystem where you derive value from the data you hold while protecting company assets.

In organizations around the world, almost every employee is now a content contributor. Social, mobile, and cloud technologies have made it easier than ever to share information both in and out of the organization. This influx of new content, however, brings about new risks. Legal systems and government regulators worldwide are clamping down and demanding greater compliance, particularly on IT systems, requiring that organizations quickly implement risk management protocols. Data is growing too fast to keep up, which creates both great opportunity and risk for all organizations.

Organizations must be vigilant in creating enforceable policies, training programs, and automated controls to prevent and monitor appropriate access, use, and protection of sensitive data, whether they are regulated or not. Doing so will not only mitigate the risk of regulatory and statutory penalties and consequences, but will also help prevent an unnecessary erosion of employee or consumer confidence in the organization as the result of a breach or the loss of sensitive data.

Understanding Data Lifecycle Management

You can’t secure data you don’t know you have. Thus, a process of identification, value extraction, classification, and archiving needs to occur.

Whether data is generated by your organization or collected from a third party (such as a customer, vendor, or partner), the only way you can effectively protect it is by understanding it. For instance, does it contain customer information, employee information, intellectual property, sensitive communications, personally identifiable information, health information, or financial data?

Implementing a Best Practice Approach

1. Contemplate how data is created or collected by your company. You should think about excessive collection as well as how you will provide notice to individuals about that collection and appropriate levels of choice. You should also understand whether you need to keep appropriate records of that collection and creation.

2. Think about how you are going to use and maintain this data. Here you should consider inappropriate access, ensure that the data subjects’ choices are properly honored, address concerns around a potential new use or even misuse, consider how to address concerns around breach, and also ensure that you are properly retaining the data for records management purposes.

3. Consider who is going to share this data, and with whom they are going to share it. You should consider data sovereignty requirements and cross-border restrictions along with inappropriate, unauthorized, or excessive sharing.

4. All data must have an appropriate disposition. You should only keep data for as long as you are required to do so for records management, statutory, regulatory, or compliance requirements. You should ensure you are not inadvertently disposing of data while understanding that as long as you store sensitive information you run the risk of breach.

5. Understand the difference between what can and should be shared. A good program must continually assess and review who needs access to what types of information. Privacy and security teams should work with their IT counterparts to automate controls around enterprise systems to make it easier for employees to do the right thing than the wrong thing, or to simply ignore the consequences of their actions. Once you have implemented your plan, be sure that you maintain regular and ongoing assessments.

Discovery and Classification

Many companies worry about “dark data” or data that exists across their enterprise systems (file shares, SharePoint, social systems, and other enterprise collaboration systems and networks) and is not properly understood. Understanding what and where this data is and properly classifying it will allow organizations to set the appropriate levels of protection in place. 

For example, many companies apply their security controls in broad terms, using the same security procedures for everything. But logically, you do not need to put the same security protocols around protecting pictures from your company picnic as you do around protecting your customer’s critical infrastructure design or build information, credit card information, or your employees’ benefits information.

Data discovery will allow you to determine the origin and relevance of the data you hold and to determine its retention schedule. You will be better equipped to implement Data Loss Prevention in a tactical way. Data-aware security policies provide an opportunity for organizations to build a more layered approach to security, prioritizing where efforts (and costs) should be spent, and building multiple lines of defense.

This provides you with the ability to manage the life cycle of the data within your company, from creation or collection through retention, archiving and/or defensible destruction. You cannot block everything from leaving your company any more than you should encrypt every document you have. When security blocks productivity, employees find a way to go around it. The job of security is to help the business use data productively and securely.

Data-Centric Audit and Protection

Understanding and controlling data flows is a critical component to an effective roll out of information management strategies. Key components of an effective methodology should include:

  • Data inventories that help customers understand where their sensitive data resides.
  • Classification on structured and unstructured data to ensure sensitive data is clearly identified.
  • Governance policies that protect the use of sensitive information by applying data sovereignty requirements, permissions management, encryption, and other data protection techniques.
  • Incident remediation and response for sensitive data breaches when they occur.
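As a toy illustration of the classification step, a naive scanner might flag documents containing patterns that resemble sensitive data. The patterns below are deliberately simplified; real classifiers use much richer rules, checksums, and context:

```python
import re

# Hypothetical sketch: classify documents by scanning for simplified
# patterns of sensitive data (a credit-card-like number and a
# US-SSN-like pattern). Not production-grade detection.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-data labels detected in the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

print(classify("Card 4111-1111-1111-1111 on file for 123-45-6789."))
```

Documents flagged this way can then be routed to stricter storage, encryption, or review workflows.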

Report and Audit

Identifying potential risks within your information is just the first step. Take action to quickly and efficiently resolve issues with security-trimmed, pre-prioritized reports that provide guidance to your content owners and compliance teams to target the most critical violations. 

Privacy and security risk management intersect with other data lifecycle management programs within your company. Combining these related areas will allow you to better optimize resources while mitigating risk around digital assets to support responsible, ethical, and lawful collection, use, sharing, maintenance, and disposition of information.

Friday, April 29, 2022

Intranet in Knowledge Management Strategy

The modern workplace is increasingly spread out in many locations, with employees and expertise spread across multiple offices and areas. This makes it very difficult to know what information exists and where it is kept. 

It is safe to assume that a majority of a company’s information is stored on hard drives, in content management systems, in file-sharing applications, and in the minds and memories of employees. This creates a few problems:

  • People don’t have access to the information they need to do their jobs effectively.
  • The sheer amount of information becomes difficult to manage and measure.
  • Information becomes stale or inaccurate because it’s not open for collaboration.
  • Work is constantly duplicated, hampering productivity and crippling the pace of innovation.

On average, a typical employee wastes 2.3 hours per week searching for information. This can cost companies $7,000 per employee per year. Prioritizing a company-wide audit of all knowledge can help companies cut down on wasted time and allocate these resources elsewhere.
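As a rough sanity check on those figures, assume about 48 working weeks per year and a fully loaded hourly rate of about $63 (both assumptions for illustration, not from the source):

```python
# Sanity check of the cited figures: 2.3 hours per week searching
# for information costs roughly $7,000 per employee per year,
# assuming ~48 working weeks and a ~$63 fully loaded hourly rate.
hours_per_week = 2.3
working_weeks = 48        # assumed
hourly_rate = 63          # assumed fully loaded rate, in dollars

annual_hours = hours_per_week * working_weeks   # ~110 hours
annual_cost = annual_hours * hourly_rate        # ~$6,955
print(round(annual_cost))
```

Under these assumptions the numbers are mutually consistent, which is worth verifying before quoting them in a business case.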

Turn Information into Knowledge

Knowledge is power, but only when it is shared. Until then, it is just information without context or meaning. The transformation of information into knowledge occurs only when it is stored in a place where people can talk about it and build upon it. Here are three ways a modern intranet can help.

Knowledge Bases

A modern intranet supports the creation of many types of knowledge bases (KBs), including standard operating procedures, technical documentation, and best practices. This content, which would typically live in documents stored on drives, can now be published as wiki or blog articles that are easy to organize, search, and update. While a robust KB can lead to quicker decision-making and increased productivity, even the best KB is only effective if people know it is there and how to use it. The key is to make sure the structure is intuitive and that the information is searchable with permissions applied, so people see only what they need and are allowed to see.

Expertise Location

A people directory makes it easy for experts to share what they know with the rest of the organization. Think of it like a baseball card collection. Employees are players, their profiles are cards, and each card is tagged with stats (or an employee’s knowledge, skills, and abilities). Your collection should be searchable so it is easy to find who you are looking for, and it should allow employees to validate each other’s expertise by endorsing each other with badges or rewards. Having a full set makes it easy to trade information and expertise in your organization, and identify gaps or areas that you may need to recruit for.
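A toy sketch of such a searchable directory, with invented names, skills, and endorsement counts, might look like this:

```python
# Hypothetical sketch of a searchable people directory: each profile
# is tagged with skills, and a query returns matching employees,
# most-endorsed first.
PROFILES = [
    {"name": "Ana", "skills": {"sharepoint", "taxonomy"}, "endorsements": 7},
    {"name": "Ben", "skills": {"search", "python"}, "endorsements": 3},
    {"name": "Chloe", "skills": {"taxonomy", "metadata"}, "endorsements": 5},
]

def find_experts(skill: str) -> list[str]:
    """Return names of people tagged with the skill, most endorsed first."""
    matches = [p for p in PROFILES if skill in p["skills"]]
    matches.sort(key=lambda p: p["endorsements"], reverse=True)
    return [p["name"] for p in matches]

print(find_experts("taxonomy"))  # ['Ana', 'Chloe']
```

Querying for a skill with no matches also surfaces the gaps mentioned above: an empty result is a signal you may need to recruit or train for that expertise.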

Forums

Online forums give structure to typical water cooler interactions or brainstorming meetings, helping to surface the information that exists in people’s heads. These types of conversations that would typically happen behind closed doors or on email trails can now be transformed into knowledge that everyone can access. Employees can ask questions, submit ideas, or make requests, out in the open, for everyone to see. Even if they don’t initiate a conversation, employees can still participate by liking, rating, or commenting on someone else’s post. Eventually, forums develop into a library of collective knowledge built upon the exchange of information between people and teams in your company.

Example: Onboarding

To demonstrate these concepts, let’s look at a challenge that faces many growing organizations: onboarding. With a modern intranet, you can create a “newbie zone” to house everything employees need during their first few days. The space should feel warm and welcoming, and not confusing or technical. Starting a new job is overwhelming enough. Give them only what they need so they can spend their time learning about the culture, meeting new people, and acquainting themselves with the company’s products and services.

  • Include a knowledge base of all company policies and guidelines that employees should be aware of, as well as any training they need to complete. Direct them to the information that is most relevant to their role and responsibilities and try to avoid overloading them with too much at once.
  • Include a forum that addresses any “newbie” questions or concerns. It is a safe space for employees to get comfortable with the company, but it also allows your HR team to gather insights about what information is important to new employees and adjust their knowledge bases accordingly.
  • Use the forum to introduce employees to experts, mentors, and other influencers that can teach them about the company, and its culture and processes. Invite these experts to answer new forum topics and ensure all existing topics are up to date.

Onboarding is the first opportunity to establish open knowledge sharing as a cultural norm. By using your modern intranet to demonstrate the value and benefit to your employees, it becomes a mentality that everyone adopts from day one.

The Power of Collective Wisdom

Knowledge should be treated as an internal currency with structures in place to ensure that it is managed wisely and that you are not losing any of it along the way. By continuously converting information into knowledge, you can realize a variety of benefits that will move your organization forward, including:

  • Active and constant validation of company information.
  • A common language that everyone understands.
  • A culture of sharing and collaboration where knowledge belongs to everyone.

A modern intranet brings content and conversations together in one place, promoting active and continuous knowledge sharing across all levels of an organization. 

Galaxy Consulting works with many companies to tackle the challenges facing them, knowledge management being just one. Our goal is to help our customers capture the collective wisdom in their organizations so they can drive productivity, promote innovation, and help their business succeed.

Wednesday, March 30, 2022

Improving User Adoption

Many organizations that deployed a content management system have gone through phases of deployment, development and upgrades without leveraging common practices around information architecture and usability. 

In some cases, a well-intentioned IT department holds user requirements sessions, only to implement the technical features without truly understanding core principles of usability. In other situations, a particular process will be enabled and user tested with good design principles but employing the “build it and they will come” deployment plan. 

In other words, let users just start using the system. In rare cases, organizations do get those elements right but then after the deployment is completed, there is no organizational design to maintain the system, continue to train users, and update design and functionality as user needs change.

The reasons for a lack of user acceptance break down into numerous categories ranging from lack of user involvement in the development process to inadequate content.

For these reasons, many users of content management systems are frustrated and long for a well-designed, maintained, highly functional system with well-organized information and search that gives them what they need when they need it. They blame the technology rather than the way that technology has been configured and managed.

The challenge is that everyone wants everything to be user friendly and intuitive. Users want tools that help them do their jobs without requiring that they jump through hoops to upload and access information. If the system is awkward and poorly designed, users do not want to spend the time to learn how to get the most from the system. However, even when the tools are sophisticated and well designed, fluency is still necessary to leverage them effectively.

When adoption is poor, it is difficult for an organization to reach the critical mass of users needed to achieve good collaboration, where knowledge produces real value and triggers successful cycles of participation and contribution. Moving to a new platform, rather than solving core issues, is the approach many organizations prefer, though it leads to a recurrence of the same core challenges. It is best to get to the root of the problems and address them.

Even with a perfectly configured system and design that is user tested, validated, refined, tested some more and validated again, there is no guarantee that the system will be adopted and embraced. Taking an intentional approach to the system requirements and design will go a long way toward increasing the likelihood of user adoption. User adoption requires a thoughtful, intentional approach to a number of areas.

Here are some ways to maximize the chances for success of user adoption.

Involve users in the development process. In many cases, users don’t have a voice in design decisions and are not sufficiently kept in the loop through ongoing communications from leadership. Socialization should be part of a project from the beginning and continue throughout its life.

Perform user acceptance testing. It is very important to give users a chance to test the system before asking them to use it.

Create realistic expectations for how intuitive the system can be. No matter how user friendly the system is, it may never be completely intuitive to all. The nature of work processes and the information to support those processes can be complex. 

The nature of the task might require understanding terminology that is not part of everyone’s vocabulary. If the job itself requires training and skill development, the information may also require a degree of socialization. Some systems can be very complex.

Allow users time to develop a mental model. When learning to use an application of any sort, users need time to grasp the big picture and become fluent in the details. This means that it would be better to show users the details over time as opposed to in a one-shot training. Doing that at the scale of any enterprise requires planning and development of just-in-time learning that people can move through to get the big picture and can access in the context of their work processes. 

Provide users with the consistency they need. A consistent taxonomy and information architecture will help improve usability in the first place but also increase the learnability of the system. Once users learn about one part of an information structure, they can more quickly understand and internalize other areas if the same terminology is used.

Update functionality often enough to keep up with changes in user requirements. No information environment is static, so ongoing feedback that drives new functionality and capabilities is required. It is important to keep users updated on features in each new release. 

Without updates to functionality, continued testing and adjustments, the delta between what users need and what the application provides will get larger and lead to greater dissatisfaction.

Provide high-quality content. A system deployment should begin with value for the user. That means populating repositories with curated, tagged quality content that they will find valuable. Too often there is a “lift-and-load” migration in which poorly organized content filled with redundant, outdated and trivial content is presented to the user in a new environment. No matter how good the design is, the content will not be viable if it does not meet the users’ work requirements, and it will not be accessible if it is not tagged and organized.

User acceptance of a system will be improved when the right information is available for the tasks and the right processes are reflected in the application.

Offer users an easy way to contribute content. Another barrier to acceptance is a difficult process for uploading content. Too many metadata fields, long lists of choices or fields that don’t apply to the content will keep people from uploading content at all. The upload process should be as painless as possible. Frequently the best answer is machine-assisted tagging, where an auto-classifier tuned to the content and to taxonomies appropriate for the process presents the user with suggested values, and the user either accepts them or selects different ones.
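The machine-assisted tagging pattern described above can be sketched in a few lines. This is a minimal illustration, not a real auto-classifier: the taxonomy, keyword rules, and function names are all hypothetical, and a production system would use a trained classifier rather than keyword counts.

```python
# Minimal sketch of machine-assisted tagging: a keyword-based "classifier"
# suggests metadata values, and the user accepts them or overrides them.
# The taxonomy and rules here are illustrative assumptions only.

TAXONOMY_RULES = {
    "Document Type": {
        "contract": ["agreement", "hereby", "party", "terms"],
        "invoice": ["invoice", "amount due", "remit"],
        "policy": ["policy", "procedure", "compliance"],
    },
}

def suggest_tags(text):
    """Return suggested metadata values based on keyword matches."""
    text_lower = text.lower()
    suggestions = {}
    for field, values in TAXONOMY_RULES.items():
        scores = {
            value: sum(text_lower.count(kw) for kw in keywords)
            for value, keywords in values.items()
        }
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            suggestions[field] = best
    return suggestions

def upload(text, user_overrides=None):
    """Combine suggested tags with any values the user changed."""
    tags = suggest_tags(text)
    tags.update(user_overrides or {})   # the user's choice always wins
    return tags

doc = "This agreement is made between the parties under the following terms."
print(upload(doc))                               # suggestion accepted as-is
print(upload(doc, {"Document Type": "policy"}))  # user override wins
```

The key design point is the last line of `upload`: the system proposes, but the user disposes, which keeps tagging fast without removing human control.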

Establish a robust governance process. A content management system lives in an ecosystem that is continually changing. There are multiple upstream and downstream processes, and resources need to be allocated with a view to the larger picture of the information environment. 

The system owners and sponsors must make decisions in that context as well as within the context of the system environment. Therefore, they should have a seat at the table in the enterprise information governance decisions and the institution of controls, standards and compliance processes all the way down to the level of content repositories. If sites and content do not have ownership, they will quickly become outdated. If policy decisions are made without compliance mechanisms, they will not be implemented.

Users don’t hate content management systems. They hate poorly designed applications. What they really dislike is the lack of functionality, the poorly constructed taxonomies, confusing navigation, endless fields to fill out and poor-quality content. With the correct approach to design and deployment and with adequate training and ongoing updates, people accept and in many cases genuinely like a content management system. It helps them do their jobs, makes tasks easier to accomplish, improves efficiency and lets workers redirect their efforts to the more challenging and fulfilling parts of their jobs.

Sunday, January 30, 2022

Challenges of Records Management

Records management is very important for companies. There are many electronic records management systems that can optimize the process of records management. However, the huge volume of data raises new challenges about how records management should be handled. 

A few of the ongoing issues include big data, master data management (MDM) and how to deal with unstructured data and records in unusual formats such as graph databases.

Records are kept for e-discovery, compliance purposes, for their business value, and sometimes because no process has been implemented for systematically removing them. This might be a double-edged sword: getting rid of data makes IT nervous, but there are times when records should be dispositioned.

Data stored in data lakes is largely uncontrolled and typically has not had data cleanup processes applied to it. Data quality for big data repositories is usually not addressed until someone actually wants to use the data.

Quality assurance might include making sure that duplicate records are dealt with appropriately, that inaccurate information is excluded or annotated and that data from multiple sources is being mapped accurately to the destination database or record. In traditional data warehouses, data is typically extracted, transformed and loaded (ETL). With a data lake, data is extracted (or acquired), loaded and then not transformed until required for a specific need (ELT).
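The ETL-versus-ELT distinction above can be illustrated with a toy example. Everything here is hypothetical (an in-memory "warehouse" and "lake" standing in for real systems); the point is only when the transformation step runs.

```python
# Toy illustration of ETL vs. ELT. All names and data are hypothetical;
# real pipelines would target actual warehouse and data-lake platforms.

RAW_ROWS = [
    {"name": " Alice ", "amount": "100"},
    {"name": "BOB", "amount": "250"},
]

def transform(row):
    """Clean one row: normalize the name, cast the amount to a number."""
    return {"name": row["name"].strip().title(), "amount": int(row["amount"])}

def etl(source):
    """Extract, Transform, Load: data is cleaned before it lands."""
    return [transform(row) for row in source]

def elt(source):
    """Extract, Load raw, Transform later, only when needed."""
    lake = list(source)          # loaded as-is, uncontrolled
    def query():                 # transformation deferred to query time
        return [transform(row) for row in lake]
    return lake, query

warehouse = etl(RAW_ROWS)
lake, query = elt(RAW_ROWS)
print(warehouse[0])   # already clean when stored
print(lake[0])        # still raw, exactly as acquired
print(query()[0])     # cleaned only at the moment of use
```

This is why quality problems in a data lake often surface late: in the ELT path the raw rows sit unvalidated until `query` runs, which mirrors how data-lake contents go unchecked until someone wants to use them.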

MDM is a method for improving data quality by reconciling inconsistencies across multiple data sources to create a single, consistent and comprehensive view of critical business data. The master file is recognized as the best that is available and ideally is used enterprise-wide for analytics and decision making. But from a records management perspective, questions arise, such as what happens if the original source data reaches the end of its retention schedule.

As a practical matter, a record is information used to make a business decision, and it can be either an original set of data or a derivative record based on master data. Therefore the “golden record” that constitutes the best and most accurate information can become a persistent piece of data within a records management system.
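A simple way to picture how a golden record is assembled is field-level survivorship: each field is taken from the most trusted source that has a value for it. The sketch below is a hypothetical illustration; the source names, precedence order, and field names are assumptions, not part of any real MDM product.

```python
# Hypothetical sketch of building a "golden record": reconcile the same
# customer from multiple source systems into one best-available view.
# Source names and precedence rules are illustrative assumptions.

SOURCE_PRECEDENCE = ["crm", "billing", "marketing"]  # most trusted first

def golden_record(records):
    """Merge per-source records, taking each field from the most
    trusted source that has a non-empty value for it."""
    merged = {}
    for source in reversed(SOURCE_PRECEDENCE):   # apply least trusted first
        for field, value in records.get(source, {}).items():
            if value:                            # skip blank values
                merged[field] = value            # trusted sources overwrite
    return merged

records = {
    "crm":       {"name": "Jane Doe", "email": "", "phone": "555-0100"},
    "billing":   {"name": "J. Doe",   "email": "jane@example.com"},
    "marketing": {"name": "jane doe", "email": "old@example.com"},
}
print(golden_record(records))
# {'name': 'Jane Doe', 'email': 'jane@example.com', 'phone': '555-0100'}
```

Note how the empty CRM email does not overwrite the billing value: survivorship rules like "prefer the trusted source, but never prefer a blank" are exactly the reconciliation logic MDM formalizes.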

Unstructured data challenge

A large percentage of records management efforts are oriented toward being ready for e-discovery. 

Unstructured data presents more of a problem than MDM does. MDM has gone well beyond the narrow structure of relational databases and is entering the realm of big data, but its roots are still in the world of structured databases with well-defined metadata classifications, which makes records management for such records a more straightforward process.

The challenge with unstructured data is to build out the semantics so that the content management or records management and data management components can work together. In the case of a contract, for example, the document might have many pieces of master data. It contains transactional data with certain values, such as product or customer information, and a specialist data steward or data librarian might be needed to tag and classify what data values are represented within that contract. 

With both the content and the data classified using consistent semantics, it would be much simpler to bring intelligent parsing into the picture to bridge the gap between unstructured and structured data. Auto-classification of records can assist, although human intervention remains an essential element.

Redundant, obsolete and trivial information constitutes a large portion of stored information in many organizations, up to 80%.  The information generated by organizations needs to be under control whether it consists of official records or non-record documents with business value. Otherwise, it will accumulate and become completely unmanageable. On the other hand, if organizations aggressively delete documents, they run the risk of employees creating underground archives of information they don’t want to relinquish, which can pose significant risks. Companies need to approach this with a well thought out strategy.

The system should allow employees to easily save documents using built-in classification instead of a lot of manual tagging. It is important to make the system intuitive enough for any employee to use with just a few seconds of time and a few clicks of the mouse. 

The value of good records management needs to be communicated in such a way so that employees understand that it can actually help them with their work rather than being a burden. A well-designed system hides the complexity from users and puts it in the back end. 

Studies of records management consistently show that only a minority of organizations have a retention schedule in place that would be considered legally acceptable and that some organizations have no retention schedule at all. Even if a schedule is in place, compliance is often poor.
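To make the idea of an enforceable retention schedule concrete, here is a minimal sketch of how one might be checked programmatically. The record classes, retention periods, and field names are illustrative assumptions; real schedules are set by legal and compliance teams, and flagged records would go through a review step, not automatic deletion.

```python
# Minimal sketch of checking records against a retention schedule:
# each record class maps to a retention period, and records past their
# period are flagged for disposition. All values here are hypothetical.
from datetime import date, timedelta

RETENTION_YEARS = {"invoice": 7, "contract": 10, "meeting_notes": 2}

def due_for_disposition(records, today=None):
    """Return IDs of records whose retention period has elapsed."""
    today = today or date.today()
    due = []
    for rec in records:
        years = RETENTION_YEARS.get(rec["class"])
        if years is None:
            continue                     # unclassified: needs human review
        expiry = rec["created"] + timedelta(days=365 * years)
        if expiry <= today:
            due.append(rec["id"])
    return due

records = [
    {"id": "R1", "class": "invoice",  "created": date(2010, 5, 1)},
    {"id": "R2", "class": "contract", "created": date(2020, 5, 1)},
]
print(due_for_disposition(records, today=date(2023, 1, 30)))  # ['R1']
```

The interesting branch is the unclassified case: a record with no class silently escapes the schedule, which is exactly why compliance is poor when classification is left to chance.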

A strategy should be developed to reconcile the dilemma between keeping everything forever in order to extract business value from it and using records and information management to get rid of as much information as possible, as soon as possible.

From a business perspective, the potential upside of retaining corporate records so they can be used to gain insights into customer behavior, for example, may outweigh the apparent risks that result from non-compliance. 

The highest value lies in a records management framework for understanding and classifying information so that its business value can be utilized. 

If organizations view records management as a resource rather than a burden, it can contribute to their success. In many respects, the management of enterprise information is already becoming more integrated and less siloed. For example, most enterprise content management (ECM) systems now have records management functionality. The same classification technology used for e-discovery is also used for classification of enterprise content. Seeing records management as part of that environment and recognizing its ability to enrich the understanding of business content as well as ensuring compliance can support that combination.

Governance can be a unifying technique that provides a framework to encompass any type of information as it is created and managed. Governance is a set of multidisciplinary structures, policies and procedures to manage enterprise information in a way that supports an organization’s short term and long term operational and legal requirements. It is important to consider the impact of all forms of information, from big data to graph data. Within a comprehensive strategy of governance, records management is successful.

Friday, July 30, 2021

GDPR Compliance


GDPR compliance is being enforced. The GDPR has already garnered international attention, with similar legislation in the works in countries like China, Japan, India, Brazil, and New Zealand. Attention around the GDPR has also been mounting in the US. Beyond the United States and the other countries already mentioned, most experts predict that an even wider rollout of consumer data protections is inevitable.

Since the GDPR took effect, Google has been fined nearly $57 million for processing personal data for advertising purposes without obtaining the required consumer permissions. Google also failed to adequately inform consumers about how their data would be used and did not provide enough information about its data consent policies.

The GDPR requires companies doing business in EU member countries to get consumers' consent via an explicit opt-in process before collecting and sharing information about them; to provide a way for consumers to correct, update, and delete the data that companies hold about them; to fully disclose what information is being collected and how it will be used; and to properly notify all parties involved when there is a data breach.
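The obligations listed above (explicit opt-in, access, correction, deletion) can be pictured as a small consent-and-data store. This is a hypothetical sketch to show the shape of the requirements, not a legal implementation; every class, method, and purpose name is an assumption.

```python
# Hypothetical sketch of the GDPR obligations described above: explicit
# opt-in before collection, full accounting on request, and erasure.
from datetime import datetime, timezone

class ConsentStore:
    def __init__(self):
        self._consents = {}   # (consumer_id, purpose) -> opt-in timestamp
        self._data = {}       # consumer_id -> held personal data

    def opt_in(self, consumer_id, purpose):
        """Record explicit consent; nothing is collected without it."""
        self._consents[(consumer_id, purpose)] = datetime.now(timezone.utc)

    def may_process(self, consumer_id, purpose):
        return (consumer_id, purpose) in self._consents

    def store(self, consumer_id, data):
        if not self.may_process(consumer_id, "collection"):
            raise PermissionError("no opt-in consent for collection")
        self._data.setdefault(consumer_id, {}).update(data)

    def access_request(self, consumer_id):
        """Full accounting of held data (due within one month)."""
        return dict(self._data.get(consumer_id, {}))

    def erase(self, consumer_id):
        """Right to erasure: delete the data and the consents."""
        self._data.pop(consumer_id, None)
        for key in [k for k in self._consents if k[0] == consumer_id]:
            del self._consents[key]

store = ConsentStore()
store.opt_in("c1", "collection")
store.store("c1", {"email": "c1@example.com"})
print(store.access_request("c1"))   # full accounting of held data
store.erase("c1")
print(store.access_request("c1"))   # nothing remains after erasure
```

The design point is that `store` refuses to accept data without a prior opt-in, which is the "consent before collection" ordering the regulation requires.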

Most companies are pushing to improve their processes by updating older software solutions and procedures; some parts of their responsibilities are clear, while others remain in a murky gray area of uncertainty. Many companies are still examining their obligations under the legislation, trying to determine what applies to them and to their portion of processing an individual's data.

In a recent survey from the International Association of Privacy Professionals, less than half of respondents said they were fully compliant with the GDPR, and nearly a fifth said they believed full compliance with the GDPR would be impossible.

One of the biggest shortfalls for businesses right now concerns the GDPR provisions requiring a full accounting of all the information organizations hold on consumers upon request within one month.

Companies should simply assume that all aspects of the GDPR apply to them.

Experts and insiders concede that the GDPR has been successful in one key area: Consumers now have more of an interest in what happens with their personal information. GDPR has made it simple for consumers to understand the important details about their data, such as how it is being used, where it is being stored, etc. Because of the GDPR, consumers are asking more questions and reading companies' privacy policies more closely. And that will ultimately lead to greater accountability.

The GDPR has also changed the entire dialogue between companies and customers.

Whether it was a stated goal of the GDPR or an unforeseen consequence, companies are beginning to self-regulate, knowing that regardless of the form, there is increased need to give consumers greater transparency and control over their data.

Because of the penalties and other negative ramifications of ignoring GDPR, companies have to take GDPR seriously with internal programs to organize their data better. Companies need to provide transparency about the data they capture, as well as a mechanism for consumers to choose which information can be captured and how it can be used.

For companies that have come into compliance, the GDPR has resulted in finely tuned databases and distribution lists, and streamlining email communication has made outreach more impactful with higher-than-before engagement rates.

If GDPR compliance is done right, companies will have the ability to create a master record of customer data on one platform.

That master record could contain all of the customer's allowed permissions, revoked permissions, or any changed notification settings, as well as a unified customer profile that combines details about their behaviors, interests, preferences, purchases, and other information from any engagement system or data source.

GDPR continues to require an investment of time and resources, but it is a worthwhile investment.

When all aspects of the GDPR are carried out fully, companies are able to deepen relationships and profitably grow revenue, consumers are able to gain transparency and control over their data, and regulators are able to safeguard commerce and consumer rights.

Galaxy Consulting has over 15 years of experience helping companies achieve compliance in different areas, and since the GDPR was released, we have been helping companies achieve compliance with it. Please contact us for a free consultation.