Monday, April 26, 2021

Knowledge Management to Increase Efficiency and Productivity

Knowledge management (KM) has become an important topic, driven by a number of industry trends, foremost among them the strong and growing interest in artificial intelligence (AI). A knowledge base (KB) can serve as the centralized source of knowledge for an organization, providing the data needed to feed an AI solution.

Interest in KM is also being driven by its ability to help companies achieve many of their top enterprise servicing goals: improving productivity, increasing the use of self-service, decreasing customer effort, reducing operating costs, improving cross-departmental coordination, increasing customer and staff engagement, and delivering a better, more personalized customer experience.

This is a major and long-overdue turnaround for KM, which has taken many years to catch the attention of organizations. The question that organizations are now asking is whether KM solutions are able to meet their needs in the era of digital transformation.

KM Awakening

The new generation of KM solutions, many of which are relatively new market entrants, are either up to the digital challenge or are benefiting from investments to get them there. These solutions are built to run in the cloud (although many can also be deployed in a private cloud or on premises); use the newest database technology; incorporate responsive design techniques to deliver content to many groups of internal and external users across a variety of channels; depend on sophisticated, fast search software to speed the delivery of information; and embed content management functionality to enable the collection and preparation of all types of data from an unlimited number of sources.

Many of these solutions also incorporate a KM framework, such as Knowledge-Centered Service (KCS), to help users roll out and apply their solutions effectively.

Differentiating between KM, search, and content management software has always been a challenge. In fact, a good KM solution depends on content management techniques to enable it to capture, structure, and properly store data. 

KM ensures that the right components of the data are delivered in a manner appropriate for each group (agents, IT staff, back-office employees, executives, customers, partners) and in a format appropriate for each channel (live agent, web self-service, voice self-service, email, chat, SMS, video, social media). When it comes to data sharing, a KM solution is the heart, and it pumps knowledge out to where it is needed, when it is needed, to keep an organization running properly.
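To make the "one source, many renditions" idea concrete, here is a minimal Python sketch of how a single stored article might be rendered differently per channel. The article structure, channel names, and formatting rules are all invented for illustration; real KM products implement this with templating engines and channel profiles.

    # Illustrative sketch only: one knowledge-base article, stored once,
    # rendered differently per delivery channel. All names are invented.

    ARTICLE = {
        "title": "Resetting your password",
        "summary": "Use the 'Forgot password' link on the sign-in page.",
        "steps": [
            "Open the sign-in page and click 'Forgot password'.",
            "Enter the email address on your account.",
            "Follow the link in the reset email within 24 hours.",
        ],
    }

    def render(article: dict, channel: str) -> str:
        """Render the same article appropriately for each channel."""
        if channel == "sms":
            # SMS gets only the short summary, to fit message limits.
            return article["summary"]
        if channel == "web_self_service":
            # Web self-service gets the full, formatted article.
            steps = "\n".join(f"{i}. {s}" for i, s in enumerate(article["steps"], 1))
            return f"{article['title']}\n\n{steps}"
        if channel == "agent_desktop":
            # Agents get the summary plus steps for quick scanning.
            return f"{article['summary']} Steps: " + " ".join(article["steps"])
        raise ValueError(f"unknown channel: {channel}")

    print(render(ARTICLE, "sms"))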

Changing KM's Perception and Value Proposition

Major technical innovations during the past few decades are enabling a new generation of KM solutions. But this is only a small part of the developments that are altering the perception of KM. 

In the past, KM solutions were sold to customer service, contact centers, technical support, field service, and other departments that were dependent upon having a source of information to address customer inquiries. 

The value proposition was that a KM solution could replace or lessen the need for staff training and reduce the average handling time with customers. Essentially, KM solutions were sold to enhance productivity and reduce operating costs while improving service quality and first-contact resolution (FCR).

The problem was that employees did not like using many of the KM solutions because the solutions slowed them down; instead of reducing the average handling time of inquiries and improving FCR, the opposite occurred, and agents were penalized. The solutions came with poorly designed interfaces, and the search capabilities were ineffective. 

In addition, agents learned not to rely on a KM solution’s answers because much of the information residing there was either out of date or inaccurate, and the process of keeping knowledge current was cumbersome, time consuming, and costly.

The situation is different today. Companies are anticipating much broader uses for their knowledge bases. Executives have bought into the concept of having a single version of the truth for the organization's knowledge, particularly when the information can be rendered appropriately for each group of users.

As a result, the number of potential KM users has increased, which is a significant game changer. Customers are also making it known that they prefer to use self-service over speaking to live agents, making it necessary to have a clean, accurate, and easy-to-update KB. 

Additionally, Millennial agents, who are now the primary employee demographic throughout organizations, are wired to look up answers and are happy to use a KM solution, as long as it can quickly give them the accurate information they need. In other words, the current generation of KM solutions is delivering on its promise and has a proven and quantifiable value proposition, when supported by the right enterprise framework and culture.

The KM Competitive Landscape

The fundamental KM concepts still stand, but how they are addressed varies by vendor. Each solution is unique, with an assortment of underlying technology and approaches. Vendors are entering the KM market from many IT sectors, including AI, customer relationship management (CRM), enterprise resource planning (ERP), IT service management (ITSM), workforce optimization (WFO), contact center infrastructure, professional services, and others. 

Some vendors sell only a KM solution; many others offer a KM capability as part of a suite of products, but do not offer it on a stand-alone basis.

The market is in the early stages of transformation, and a great deal more change is expected in the next few years. KM has remained more or less the same for decades, but this is expected to change as organizations get serious about creating a single source of knowledge. The opportunities are great for disruptive solutions to enter and transform this sector.

KM Needs a Framework and Best Practices

While the KM offerings have improved substantially, the primary challenge confronting this sector remains the acquisition, maintenance, and delivery of content. A KM solution is effective only if the underlying data is correct; if the data is inaccurate, it doesn't matter how well organized it is or how quickly and easily it can be delivered.

Moreover, for a KM solution to work, a company needs to create an operating environment where all employees support the concept and practice of KM. It takes more than building a KM culture: an organization must institute a framework, supported by internal infrastructure, that facilitates the processes. Nor is it just about rewarding employees for authoring articles and using the KM solution. KM needs to become an inherent and essential component of what employees do on a daily basis.

Final Thoughts on KM

It’s taken a few decades, but KM is finally in the spotlight. AI is helping to push the KM agenda, and companies are getting on board with the idea of creating a single repository of enterprise knowledge, formal and “tribal”, as they consider its broad benefits for the organization, employees, partners, and customers. 

While it's challenging to implement a KM solution, that is actually the easy part of the effort. More challenging is setting up the organization and processes to succeed with the transformation.

We have 20 years' experience in KM. Please contact us today for a free consultation.

Wednesday, March 31, 2021

Digital Trust

While consumers have happily shared personal data on social platforms in return for greater connectivity and shared experiences, recent news about data harvesting has caused alarm. Many companies that rely on consumer insight are rethinking how to build digital trust and make it sustainable.

In a study of 25,000 consumers across 33 countries, 92 percent of U.S. consumers said it is extremely important that companies protect their personal information, and 79 percent said it is frustrating to realize that some companies cannot be trusted to use it appropriately. Lack of trust is one of the biggest reasons consumers switch companies.

And with the General Data Protection Regulation (GDPR), a regulation intended to strengthen data protection for EU citizens and let individuals decide which brands can use their personal data, good data stewardship is becoming critical to the success of every business globally.

The Importance of Insight

The ability to process personal data is critical to business in the digital age. Data-driven organizations rely on customer insights to help inform the development and design of products and services, the overall customer experience, and marketing strategy. From demographics to personal preferences, customer data allows companies to deliver hyper-relevant products, services, and experiences.

Some companies have built entire business models around the sale of anonymized personal data. Technology is creating opportunities for businesses to understand their customers on a deeper level and monetize this knowledge. Biometric, visual, genomic, and device data can allow ever-increasing degrees of personalization.

Personal data is a currency no business can afford to risk.

Earning Digital Trust

To earn digital trust, organizations must eliminate anything that jeopardizes it. Companies looking to future-proof their supply of customer data should take these measures:

• Deliver on their commitments. Some 83 percent of U.S. consumers say it's extremely frustrating when companies promise one thing but deliver another. An organization's commitment to delivering promised experiences and meeting customers' expectations is paramount to earning trust. Successful companies understand their baseline level of trust and eliminate issues or offers that detract from it; otherwise, they must reset their parameters.

• Establish rigorous governance. The only way trust becomes sustainable is by establishing a rigorous process and a robust, cross-functional governance structure to continuously measure trust and hyper-relevance effectiveness, and by acting on the findings. Please see our posts on information governance.

• Give customers full control over their data. As customers demand greater control over how companies use their personal information, organizations must become more transparent. Customers must be given full access to, and control over, their data, which demonstrates responsible stewardship and ethics. Companies must also ensure that the appropriate safeguards are in place to protect it.

Some companies may look to adjust their profit models and potentially charge for services (i.e., “pay for privacy”) so customers are explicitly aware of the value being exchanged. That way companies could make money on direct interactions with customers as opposed to the derivatives of those interactions (i.e., selling insights or advertising). Or they could move from an information exchange relationship to a more classic view of understanding what customers need and having them pay for it.

More companies will undoubtedly assess their existing propositions and the economic viability of new models. But the question remains as to whether the underlying information and experience will become something that is merely expected, rather than something that customers would be willing to pay for.

The Path Forward

Digital trust is only sustainable when companies establish a rigorous process and governance structure. Most importantly, digital trust must be managed as the critical growth enabler it is. Companies will inevitably look to capture new categories of customer data, such as biometric, geolocation, and even genomic data, in their drive for greater relevance. Customers' concerns will rise along with this, so it's critical that companies have strong data security and privacy measures in place, give customers full control over their data, and, crucially, are transparent about how they use it.

We have successfully implemented data security and data privacy in many organizations. Please contact us today for a free consultation.

Thursday, February 11, 2021

Mastering Fractured Data

Data complexity in companies can be a big obstacle to achieving efficient operations and excellent customer service.

Companies are broken down into various departments. They have hundreds, thousands, or even hundreds of thousands of employees performing various tasks. Adding to the complexity, customer information is stored in so many different applications that wide gaps exist among data sources. Bridging those gaps so every employee in the organization has a consistent view of data is possible and necessary.

Various applications collect customer information in different ways. For example, CRM solutions focus on process management and not on data management.

Consequently, customer data is entered into numerous autonomous systems that were not designed to talk to one another. Client data is housed one way in a sales application, another way in an inventory system, and yet another way in contact center systems.

Other organizational factors further splinter the data, which can vary depending on the products in which a customer is interested, where the product resides, and who (the company or a partner) delivers it.

In addition, information is entered in various ways, including manually, either by the customer or an employee, or via voice recognition. And applications store the information in unique ways. One system might limit the field for customers’ last names to 16 characters while another could allow for 64 characters.

The challenge is further exacerbated by software design and vendors’ focus. CRM vendors concentrate on adding application features and do not spend as much time on data quality.

Customers can input their personal information in 10 different ways. Most applications do not check for duplication when new customer information is entered.

Human error creates additional problems. Employees are often quite busy, move frequently and quickly from one task to the next, and, consequently, sometimes do not follow best practices fully.

Data becomes fractured, and different versions of the truth appear. The data features a tremendous amount of duplication, inconsistency, and inefficiency.

The inconsistencies exist because fixing such problems is a monumental task, one that requires companies to tackle both technical and organizational issues. Master data management (MDM) solutions, which have been sold for decades, are designed to address the technical issues. They are built to clean up the various inconsistencies, a process dubbed data cleansing.

The work sounds straightforward, but it is time-consuming and excruciatingly complex. The company has to audit all of its applications and determine what is stored where and how it is formatted. In many cases, companies work with terabytes and petabytes of information. Usually, they find many more sources than initially anticipated because cloud and other recent changes enable departments to set up their own data lakes.

Cleansing Process

Cleansing starts with mundane tasks, like identifying and fixing typos. The MDM solution might also identify where necessary information is missing.

To start the process, companies need to normalize fields and field values and develop standard naming conventions. The data clean-up process can be streamlined in a few ways. If a company chooses only one vendor to supply all of its applications, the chances of data having a more consistent format increase. Typically, vendors use the same formats for all of their solutions. In some cases, they include add-on modules to help customers harmonize their data.
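As a minimal sketch of what this normalization and duplicate detection can look like, consider the following Python fragment. The field names, formats, and matching rules are assumptions made for illustration; production MDM tools use far richer matching (fuzzy, probabilistic, survivorship rules).

    # Minimal data-cleansing sketch. Field names and rules are illustrative.
    import re

    def normalize_record(raw: dict) -> dict:
        """Normalize a customer record to one standard format."""
        return {
            # Trim whitespace and standardize case so 'SMITH ' == 'Smith'.
            "last_name": raw.get("last_name", "").strip().title(),
            "first_name": raw.get("first_name", "").strip().title(),
            # Keep digits only so '(555) 123-4567' == '5551234567'.
            "phone": re.sub(r"\D", "", raw.get("phone", "")),
            "email": raw.get("email", "").strip().lower(),
        }

    def find_duplicates(records: list[dict]) -> list[tuple[dict, dict]]:
        """Flag record pairs that share a normalized email or phone."""
        seen: dict[str, dict] = {}
        dupes = []
        for rec in map(normalize_record, records):
            for key in (rec["email"], rec["phone"]):
                if key and key in seen:
                    dupes.append((seen[key], rec))
                elif key:
                    seen[key] = rec
        return dupes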

But that is not typically the case. Most companies purchase software from different suppliers, and data cleaning has largely been done in an ad hoc fashion, with companies harmonizing information application by application. Recognizing the need for better integration, suppliers sometimes include MDM links to popular systems, like Salesforce Sales Cloud, Microsoft Dynamics, and Marketo.

Artificial intelligence and machine learning are emerging to help companies grapple with such issues, but the work is still in the very early stages of development.

Still other challenges stem from internal company policies—or a lack thereof—and corporate politics. Businesses need to step back from their traditional departmental views of data and create an enterprise-wide architecture. They must understand data hierarchies and dependencies; develop a data governance policy; ensure that all departments understand and follow that policy; and assign data stewards to promote it.

The relationship between company departments and IT has sometimes been strained. The latter’s objectives to keep infrastructure costs low and to put central policies in place to create data consistency often conflict with the company departments' drivers. And while departments have taken more control over the data, they often lack the technical skills to manage it on their own.

It is a good idea to start with a small area and then expand to other areas.

Having clean and organized data makes a company's operations much more effective and enables it to optimize customer service. Companies can take steps to improve their data quality.

Please contact us for more information or for a free consultation.

Saturday, January 30, 2021

Digital Transformation

Digital technology is drastically changing how companies do business and how they relate to their customers. While customers gain the power of information and choice, digital technology dramatically improves the economics of business. The rules of business are being rewritten nearly every day by new digital technologies. Every company has a unique digital transformation opportunity.

However, transformation involves far more than merely converting paper processes to electronic ones. Companies undergoing a digital transformation also need to make sure that all of their digital processes are interconnected. Even more important, though, digital transformation requires a company-wide culture transformation.

For successful digital transformation, first we need to understand that digital transformation is more than simply a technology change or software adoption. It requires a cultural shift and a change in how a business behaves, given changes in customer demands. The shift and change require complete support within the company, from top managers to rank-and-file personnel.

A well-timed adoption and utilization of technology and software can support this bridge by enabling seamless flow of information between the company and the customer.

Digital transformation is also about being focused on the company's customers. It is about enabling them to be intelligent and self-educated, and to travel their own journey in a self-guided manner, and then figuring out where to intersperse human touchpoints along that journey to add value to the digital ones.

The best practice is a mix of digital content and human interaction that is orchestrated around customers and how they want to learn about and experience the company and its brands.

Another critical element of digital transformation is the interconnection of all of a company's data. Companies are broken down into various departments. They have hundreds, thousands, or even hundreds of thousands of employees performing various tasks. Adding to the complexity, customer information is stored in so many different applications that wide gaps exist among data sources. Bridging those gaps so every employee in the organization has a consistent view of data is possible and necessary.

But the task requires large investments of money and manpower and sweeping process changes, steps that most organizations have not been willing to make thus far.

It’s not an easy task, but it is getting simpler, particularly as a wide and growing variety of applications emerge. Vendors are now building solutions to streamline workflows for employees inputting data or responding to various triggers, like customers calling in with a problem.

Please contact us today for more information and for a free consultation.

Monday, December 28, 2020

Electronic Signature and Content Management


At this time of digital transformation, it is difficult to talk about content management without talking about electronic signatures. E-signatures make it possible to create digital workflows, help to maximize ROI from content management, and enhance productivity, compliance, security, and analytics.

Quite a few content management tools, such as SharePoint, Box, and other content management systems (CMS), include e-signature integration.

Electronic signatures, digital business, and content management are interdependent. Without e-signature capability, documents continue to be printed for signing, then photocopied, shipped, corrected, imaged back into the system, archived, and shredded. 90% of the time and cost of labor dedicated to managing paper can be saved by using e-signatures. There are also other benefits of using e-signatures such as faster decision making, shorter sales cycles, and improved customer experience.

In the last few years, financial services, insurance, healthcare, and government have embraced digital transformation. A major driver is compliance and risk. Many organizations are concerned about legal risk or they struggle with the constantly changing regulatory landscape in their industries, in part because manual processing is very prone to errors.

Rather than react to regulatory pressure with additional people, manual controls, and process complexity, organizations that adopt e-signatures gain these benefits:

  • Leverage workflow rules to execute transactions correctly and consistently.
  • Capture a full audit trail and electronic evidence.
  • Minimize exposure to risk due to misplaced or lost documents.
  • Make the process of e-discovery easier, more reliable, and less expensive.
  • Demonstrate compliance and reduce legal risk through the ability to play back the exact process that was used to capture signatures.

Consider this example: the VP of compliance asks for transaction records from five years ago. How helpful would it be to quickly produce all signed records, in good order, and replay the entire web-based signing process for context?

According to Forrester Research, organizations and customers now recognize that e-signature is an important enabler of digital business.

Today, business is digital, and e-signature is a foundational technology enabling end-to-end digitization. Consider this example: a customer fills out an insurance application. When the package is ready to be signed, traditionally it would revert to paper. Instead, the documents are handed off to the electronic signature solution, which manages every aspect of the e-sign process, including notifying and authenticating signers, presenting documents for review, capturing intent, securing documents, and collecting evidence.

Once e-signed, the documents can be downloaded in PDF format and stored in any archiving system. The e-signature audit trail and security travel seamlessly with the document, ensuring the record can be verified independently of the e-signature service.
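The exact mechanics vary by vendor, and real products rely on PKI-based digital signatures rather than bare hashes, but the underlying idea of tamper evidence that travels with the record can be sketched in a few lines of Python:

    # Conceptual sketch of tamper evidence only; real e-signature products
    # use PKI-based digital signatures, not a bare hash like this.
    import hashlib

    def digest(document_bytes: bytes) -> str:
        """Fingerprint of the document as it existed at signing time."""
        return hashlib.sha256(document_bytes).hexdigest()

    # At signing time, the fingerprint is recorded in the audit trail.
    signed_doc = b"Policy #12345 ... signature block ..."
    audit_trail = {"signer": "jane@example.com", "sha256": digest(signed_doc)}

    # Years later, anyone can recompute the fingerprint and compare it,
    # without calling back to the e-signature service.
    def verify(document_bytes: bytes, trail: dict) -> bool:
        return digest(document_bytes) == trail["sha256"]

    assert verify(signed_doc, audit_trail)             # unchanged: passes
    assert not verify(signed_doc + b"x", audit_trail)  # altered: fails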

A document-centric approach to embedding e-signatures within signed records allows for greater portability and easier long-term storage in a CMS solution. Additional metadata related to the e-sign transaction can be handed off to the CMS as well for analytics purposes.

Adopting electronic signatures is quick and easy and does not require IT or programming resources. For companies looking for a more integrated, automated workflow, e-signature plug-ins for SharePoint, Salesforce, and Box are available.

Organizations can quickly and easily enhance approval workflows with a more robust e-signature solution than a checkbox on an approval routing sheet, while also automating archival.

Thursday, July 30, 2020

Metadata-Driven Solutions

Metadata is data that provides information about other data. Many distinct types of metadata exist, including descriptive metadata, structural metadata, administrative metadata, reference metadata, and statistical metadata.
  • Descriptive metadata is descriptive information about a resource. It is used for discovery and identification. It includes elements such as title, abstract, author, and keywords.
  • Structural metadata is metadata about containers of data and indicates how compound objects are put together, for example, how pages are ordered to form chapters. It describes the types, versions, relationships and other characteristics of digital materials.
  • Administrative metadata is information to help manage a resource, like resource type, permissions, and when and how it was created.
  • Reference metadata is information about the contents and quality of statistical data.
  • Statistical metadata, also called process data, may describe processes that collect, process, or produce statistical data.
Metadata, properly managed, is a powerful tool for making things happen. Processes and solutions can be driven by metadata.

In a metadata-driven application-building process, instead of building the desired application directly, we define the application's specifications. These specifications are fed to an engine that builds the application for us using predefined rules.

Instead of building a package to create a dimension, for example, we can provide the dimension description (metadata) to a package-generating engine. This engine is then responsible for creating the defined package. Once the package is executed, it will create and maintain the prescribed dimension.
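Here is a toy Python sketch of this pattern: the dimension is described as metadata, and a small "engine" turns that description into a concrete artifact. The specification shape and the generated CREATE TABLE DDL are invented for the example.

    # Metadata-driven sketch: a dimension described as data, and an engine
    # that generates DDL from it. Spec shape and SQL are illustrative.

    DIMENSION_SPEC = {
        "name": "dim_customer",
        "business_key": "customer_id",
        "attributes": {
            "customer_id": "INTEGER",
            "full_name": "VARCHAR(128)",
            "segment": "VARCHAR(32)",
        },
    }

    def generate_dimension_ddl(spec: dict) -> str:
        """Build CREATE TABLE DDL from the dimension's metadata."""
        cols = ",\n  ".join(f"{c} {t}" for c, t in spec["attributes"].items())
        return (
            f"CREATE TABLE {spec['name']} (\n"
            f"  {spec['name']}_key INTEGER PRIMARY KEY,  -- surrogate key\n"
            f"  {cols}\n"
            f");"
        )

    print(generate_dimension_ddl(DIMENSION_SPEC))

Changing the dimension then means editing the specification and regenerating, rather than hand-editing the package.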

Why is the metadata-driven approach so much more efficient than traditional methods?

  • Creating a definition of a process is much faster than creating the process itself. A metadata-driven approach builds the same asset in less time than traditional methods.
  • Quality standards are enforced. The rules engine becomes the gatekeeper by enforcing best practices.
  • The rules engine becomes a growing knowledge base from which all processes benefit.
  • Easily adapts to change and extension. Simply edit the definition and submit it to the engine for a build. Need to inject a custom process? No problem: create a package the old-fashioned way.
  • Enables agile data warehousing. Agile becomes possible due to the greatly increased speed of development and the reduced rework required by change.
The ongoing proliferation of devices, joined with the distributed nature of data sources, has created an indispensable role for metadata. Metadata provides knowledge, such as the location of a device and the nature of its data, that facilitates integration of data regardless of its origin or structure.

Enterprises are incorporating data quality and data governance functions as part of data integration flows. Embedding these processes in the integration pipeline necessitates sharing of metadata between the integration tools, and the quality and governance tools.

Metadata also facilitates performance optimization in integration scenarios by providing information on the characteristics of underlying sources in support of dynamic optimization strategies.

In content management, a folder-less approach allows you to search for and access files however you want: by client, project type, date, status, or other criteria. It's completely dynamic, enabling you to organize and display information the way you need it, without the limitations of an antiquated, static folder structure.

All you do is save the file and tag it with the properties you need, and you are done. No more wandering through complex, hard-to-navigate folder structures, trying to guess where to save a file. With metadata, you just quickly describe what you are looking for.
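A toy Python sketch of the idea, with invented property names: files carry tags instead of living in folders, and any combination of properties becomes a dynamic view.

    # Toy sketch of folder-less, metadata-driven retrieval.
    # Property names (client, project_type, status) are illustrative.

    FILES = [
        {"name": "contract.docx", "client": "Acme", "project_type": "legal", "status": "final"},
        {"name": "proposal.pdf", "client": "Acme", "project_type": "sales", "status": "draft"},
        {"name": "sow.docx", "client": "Globex", "project_type": "legal", "status": "final"},
    ]

    def find(**criteria):
        """Return files whose metadata matches every given property."""
        return [f for f in FILES if all(f.get(k) == v for k, v in criteria.items())]

    print(find(client="Acme", status="final"))   # one dynamic 'view'
    print(find(project_type="legal"))            # another view, same files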

Metadata management software provides context and information for data assets stored across the enterprise. Metadata management tools include data catalogs: assemblages of data organized into datasets (e.g., searchable tables or other arrangements that facilitate exploration).

Wednesday, April 29, 2020

IT Systems Validation

GMP guidelines require that IT systems be validated through adequate, documented testing. This is required in all regulated industries: pharmaceuticals, medical devices, food and beverage, and cosmetics.

GMP Requirements - “The software development process should be sufficiently well planned, controlled, and documented to detect and correct unexpected results from software changes."

Validation is defined as the documented act of demonstrating that a procedure, process, or activity will consistently lead to the expected results. It is formal testing to demonstrate that the software meets its specified requirements.

This is a documented process to verify that a computerized system does exactly what it is designed to do in a consistent and reproducible manner, that it will produce information or data that meets a set of defined requirements. If a system meets these requirements, it can be assumed that it is consistently performing in the way it was intended.

Validation helps to ensure that both new and existing computer systems consistently fulfill their intended purpose and produce accurate and reliable results that enable regulatory compliance, fulfillment of user requirements, and the ability to discern invalid and/or altered records.

Computer systems need to be examined to confirm that the systems will work in all situations.

Computer system validation is required when configuring a new system or making a change in a validated system (upgrades, patches, extensions, etc.).

Validation processes should be based on applicable regulations and guidance, best practices for the domain, and the characteristics of the system being validated.

To validate software, it must be:

  • structured, documented, and evaluated as it is developed;
  • checked to make sure that it meets specifications (see the sketch after this list);
  • adequately tested with the assigned hardware systems;
  • operated under varied conditions by the intended operators or persons of like training to assure that it will perform consistently and correctly.
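For instance, the requirement that software be checked against its specifications is typically captured as documented test cases with predetermined expected results, traced to requirement IDs. A minimal, illustrative Python sketch follows; the requirement ID, dose range, and function are invented for the example.

    # Illustrative validation test: predetermined expected results, traced
    # to a requirement ID. The requirement and function are invented.

    def dose_in_range(dose_mg: float) -> bool:
        """System under test: accept doses of 0.5-10.0 mg inclusive (REQ-042)."""
        return 0.5 <= dose_mg <= 10.0

    # Each case documents input, expected result, and the requirement tested.
    TEST_CASES = [
        ("REQ-042", 0.5, True),    # lower boundary
        ("REQ-042", 10.0, True),   # upper boundary
        ("REQ-042", 10.1, False),  # just above range
        ("REQ-042", 0.0, False),   # below range
    ]

    for req, dose, expected in TEST_CASES:
        actual = dose_in_range(dose)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{req}: dose={dose} expected={expected} actual={actual} -> {status}")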

Why Do We Need IT Systems Validation?

Regulated industries have to adopt many compliance procedures to make sure their final product is safe for distribution or sale.
Regulatory agencies around the world require validation processes to confirm the accuracy and integrity of data in computerized systems in order to ensure product safety and effectiveness.

The process is used to make sure that IT systems are completely transparent, robust, and tamper-proof, because they directly impact public health and safety.

It is critical that these systems can be relied upon to produce data consistently and store those electronic data records so that they stand the test of time.

Validation is one of those compliance requirements and is part of the Quality Management System within regulated industries.

There are many examples of why software validation is important. We can look at our library of FDA warning letters to see more than 200 reasons to validate your software or systems.

There is a case study about Therac-25, a radiation therapy machine from the 1980s.

Due to programming issues, the machine could administer the wrong amount of radiation to patients (often as a huge overdose), which led to serious injuries and even death. Had there been software validation standards in place, these types of instances could have been identified and remediated prior to the treatment of patients.

Computer systems validation is serious and the FDA and other regulatory agencies do not take this lightly.

What benefits does validation deliver?
  • Accuracy – when test outcomes are routinely checked against predetermined expected results, the accuracy of computer systems within the manufacturing process can be relied upon.
  • Security – validation processes make clear when entries to the system have been altered.
  • Reliability – the process ensures that system outputs can be relied upon throughout the life cycle.
  • Consistency – it also ensures that the system output is consistent across its life cycle.
  • Optimization – following the process also means that computer systems can be more easily optimized. Optimization is a key feature of an effective and efficient manufacturing site.
When used as intended, systems validation can significantly increase process reliability and confidence, improve production results, and reduce operating expenses.

Saturday, March 28, 2020

Purpose of Document Control and its Role in Quality Assurance

GxP/GMP, GDocP, ISO 9000 and documentation

GxP stands for "Good Practice," which denotes quality guidelines and regulations. The "x" stands for the various fields, for example Good Documentation Practice (GDocP), Good Financial Practice (GFP), and so on. There are many instances of these regulations. One instance of GxP is Good Manufacturing Practice (GMP).

GMP describes the Quality Management System (QMS) required for manufacturing, testing, and quality assurance in order to ensure that products are safe, pure, and effective. GMP's ultimate goal is to enable companies to minimize or eliminate contamination and errors, which protects consumers from purchasing a product that is ineffective or even dangerous. GMP regulations are required in regulated industries such as food and beverages, pharmaceuticals, medical devices, and cosmetics.

GMP documentation requirements are aligned with Good Documentation Practice (GDocP). GDocP is the standard in regulated industries by which documents are created and maintained. It is the systematic set of procedures for the preparation, review, approval, issuance, recording, storage, and archiving of documents.

ISO 9000 is a set of standards that deals with the fundamentals of quality management systems, helping organizations ensure that they meet customers' needs within the statutory and regulatory requirements related to a product or service. ISO 9001 specifies the requirements that organizations wishing to meet the standard must fulfill.

GxP/GMP, GDocP, and ISO 9000 are all about a QMS in which an organization needs to demonstrate its ability to consistently provide a product that meets customer and applicable statutory and regulatory requirements.

Documentation is the key to compliance with these regulations and ensures traceability of all development, manufacturing, and testing activities. Documentation provides the route for auditors to assess the overall quality of operations within a company and the final product. GMP, GDocP, and ISO 9000 are enforced by regulatory agencies. Auditors pay particular attention to documentation to make sure that it complies with these regulations.

Therefore, in order for an organization to meet these requirements, it must have documentation procedures in place. Documentation is a critical tool for ensuring this compliance.

Purpose of document control and its role in Quality Assurance (QA)

The primary purpose of document control is to ensure that only current documents, not documents that have been superseded, are used to perform work and that obsolete versions are removed. Document control also ensures that current documents are approved by the people who are competent and responsible for the specific job, and that documents are distributed to the places where they are used.

Document control is an essential preventive measure, ensuring that only approved, current documents are used throughout an organization. Inadvertent use of out-of-date or unapproved documents can have significant negative consequences on quality, costs, and customer satisfaction, and can even cause death.

The role of QA with regard to the document control system is one of management and oversight.

QA ensures that all documents are maintained in a controlled fashion and that all controlled documents are approved by the appropriate subject matter experts, are consistent with other documents, and are the most current version.

One way that QA ensures this is by being the last signature on all approved documents. All documents (current, obsolete, and superseded), as well as the full history of each document's creation and revision, should be kept in Quality Assurance.

Monday, December 30, 2019

Headless CMS - Contentful

In the last post, we described the headless CMS concept. Headless CMS architecture is rising in popularity in the development world.

This model allows breakthrough user experiences, gives developers great flexibility to innovate, and helps site owners future-proof their builds by allowing them to refresh the design without re-implementing the whole CMS.

One such headless CMS is Contentful. The Contentful platform lets you create, manage, and distribute content to any platform. It gives you total freedom to create your own content model, so you can decide which content you want to manage.

With an uncluttered user interface, Contentful is an efficient tool for creating and managing your content online, either alone or in a team. You can assign custom roles and permissions to team members, add validations depending on the kind of content to be inserted, and add media such as images, documents, sounds, or video.

Contentful has a three-step process. First, you define a content model, independent of any presentation layer, that defines what kind of content you want to manage. Second, you and other internal or external editors manage all of the content in an easy-to-use, interactive editing interface. Third, the content is served in a presentation-independent way.

Being presentation-layer agnostic is one of the strengths of Contentful because you will be able to reuse your content across any platform.

To create a web site, you will either have to code it yourself and load content from the Contentful API or work with someone who can develop the web site for you. Contentful is the platform where you update the content of your web site, mobile app, or any other platform that displays content.

Contentful runs on all browsers. Contentful offers the most powerful REST APIs and the only enterprise-grade in-production GraphQL API.

There are three steps you'll have to take in order to deliver content from Contentful to your apps and websites.

1. Create your Content Model

The content model is the first step to structuring your content properly. It consists of creating content types that will accept only certain types of data for entry. For example, when creating an interactive quiz, you will need something that holds a question, multiple answers, an indicator of the correct answer, and potentially an image. This can be set up in the content model, so you can then easily add as many questions to your quiz as you want.
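To make this concrete, here is roughly what such a content type could look like when expressed as data. The shape below is a simplified Python sketch modeled on Contentful's content-type JSON; the field definitions are illustrative, not a complete Content Management API payload.

    # Simplified sketch of a Contentful content type for the quiz example.
    # The shape mirrors Contentful's content-type JSON, but this is
    # illustrative, not a complete Content Management API payload.

    question_content_type = {
        "name": "Question",
        "fields": [
            {"id": "questionText", "name": "Question text", "type": "Text"},
            {"id": "answers", "name": "Answers", "type": "Array",
             "items": {"type": "Symbol"}},               # multiple answers
            {"id": "correctAnswer", "name": "Correct answer", "type": "Integer"},
            {"id": "image", "name": "Image", "type": "Link",
             "linkType": "Asset"},                        # optional picture
        ],
    }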

2. Add Entries and Assets

Entries refer to the content itself. Entries could be blog posts, product features, ingredients of a recipe, or any other content. These entries will depend on your previously created content model. In this phase you can also add assets such as images, sounds, videos, and many other files.

3. Deliver your content with the API


The delivery part of the content may or may not be left only to developers. In this step, you set up API keys that determine which content goes to which platform. Once delivery is set up correctly, your content becomes available for consumption as soon as you hit the "Publish" button.
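As a rough sketch of this delivery step, fetching published entries from Contentful's Content Delivery API (REST) looks something like the following Python fragment. The space ID and access token are placeholders, and the content type and field id match the illustrative quiz model above.

    # Sketch: fetch published 'question' entries from Contentful's
    # Content Delivery API. SPACE_ID and TOKEN are placeholders; the
    # endpoint shape follows Contentful's REST Content Delivery API.
    import requests

    SPACE_ID = "your_space_id"
    TOKEN = "your_delivery_api_key"

    url = f"https://cdn.contentful.com/spaces/{SPACE_ID}/environments/master/entries"
    resp = requests.get(
        url,
        params={"content_type": "question", "limit": 10},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()

    for item in resp.json().get("items", []):
        print(item["fields"].get("questionText"))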

We have over 18 years' experience with numerous CMSs, so we can help you with Contentful as well. Call us today for a free, no-obligation consultation.

Saturday, November 30, 2019

Headless CMS vs Decoupled CMS

A headless content management system, or headless CMS, is a back-end only content management system (CMS) built as a content repository that makes content accessible via a RESTful API for display on any device.

The term “headless” comes from the concept of chopping the “head” (the front end, i.e., the web site) off the “body” (the back end, i.e., the content repository).

Whereas a traditional CMS typically combines the content and presentation layers of a web site, a headless CMS is just the content component, focusing entirely on the administrative interface for content creators, the facilitation of content workflows and collaboration, and the organization of content into taxonomies.

It doesn't include presentation layers, templates, site structure, or design; rather, it stores its content in a pure format and provides access to other components (e.g., delivery front ends, analytics tools, etc.) through stateless or loosely coupled APIs.

The headless CMS concept is one born of the demands of the digital era and a business's need to focus on engaging customers with personalized content via multiple channels at all stages of the customer journey. As the content in a headless CMS is considered "pure" (because it has no presentation layer attached), just one instance of it can be used for display on any device: web site, mobile, tablet, smart watch, etc.

There is some confusion around what makes a headless CMS truly "headless," as vendors use the term somewhat loosely to label their decoupled or hybrid CMS systems. But a true headless CMS is one that was built from the ground up to be API-first, not a full monolithic CMS with APIs attached afterwards.

Cloud-first headless CMSs are those that were also built with a multi-tenant cloud model at their core and whose vendor promotes Software as a Service (SaaS), promising high availability, scalability, and full management of security, upgrades, hot fixes, etc. on behalf of clients.

Coupled CMS vs. Headless CMS

Most traditional (monolithic) CMS systems are “coupled”, meaning that the content management application (CMA) and the content delivery application (CDA) come together in a single application, making back-end user tools, content editing and taxonomy, web site design, and templates inseparable.

Coupled systems are useful for blogs and basic web sites, as everything can be managed in one place. But the CMS code is tightly connected to any custom code and templates, so developers have to spend more time on installations, customizations, upgrades, and hot fixes, and they cannot easily move their code to another CMS.

There is a lot of confusion around the differences between a decoupled CMS and a headless one because they have a lot in common.

A decoupled CMS separates the CMA and CDA environments, typically with content being created behind the firewall and then being synchronized and pushed to the delivery environment.

The main difference between a decoupled CMS and a headless CMS is that the decoupled architecture is active. It prepares content for presentation and then pushes it into the delivery environment, whereas a headless CMS is reactive. It sits idly until a request is sent for content.

Decoupled architecture allows for easier scalability and provides better security than coupled architecture, but it does not provide the same support for omni-channel delivery. Plus, there are multiple environments to manage, thus increasing infrastructure and maintenance costs.

Advantages of Headless CMS
  • Omnichannel readiness: the content created in a headless CMS is “pure” and can be re-purposed across multiple channels, including web site, mobile applications, digital assistant, virtual reality, smart watches, etc., in other words, anywhere and at any time through the customer journey.
  • Low operating costs: headless CMSs are usually cheaper to install and run than their monolithic counterparts, especially as they are typically built on a cloud model where multi-tenant options keep the running costs low.
  • Reduces time to market: a headless CMS promotes an agile way of working because content creators and developers can work simultaneously, and projects can be finished faster.
  • Easy to use: traditional CMSs tend to be cumbersome and complex as vendors attempt to offer every available feature in one box. Headless systems focus on content management, keeping things simple for those who use it on a daily basis. The entire user experience can usually be managed from within one back end.
  • Flexibility: content editors can work in whichever headless CMS they like and developers can build any kind of front end they want in their preferred language (e.g. Ruby, PHP, Java, or Swift) and then simply integrate the two via APIs (like JSON or XML) over RESTful communication. This allows for polyglot programming where multiple programming paradigms can be used to deliver content to multiple channels, and enables a company to benefit from the latest developments in language frameworks, promoting a micro-services architecture.
  • Cloud Scalability: the content purity and stateless APIs of headless CMSs enable high scalability, especially as the architecture fully leverages the elasticity of a cloud platform.
  • System Security: since the content is typically provided through a high-performance Content Delivery Network (rather than directly from the database), the risk of distributed denial-of-service attacks (DDOS) is reduced.
Disadvantages of Headless CMS
  • Multiple services: managing multiple systems can be challenging, and a team's knowledge base must cover them all.
  • No channel-specific support: since pure headless CMSs don't deal with the presentation layer, developers may have to create some functionality themselves, such as web site navigation.
  • Content organization: as pure headless CMSs do not typically provide the concept of pages and site maps, content editors need to adapt to the fact that content is organized in its pure form, independently of the web site or other channels.
  • Marketing dependence on developers: marketers may end up relying more on developers in certain scenarios, e.g., creating a landing page with a custom layout.
Headless CMS architecture is rising in popularity in the development world. This model allows breakthrough user experiences, gives developers great flexibility to innovate, and helps site owners future-proof their builds by allowing them to refresh the design without re-implementing the whole CMS.

In the following posts, we will look more into headless CMS and will describe specific headless CMS. Stay tuned.