Monday, July 14, 2014

Benefits of Automatic Content Classification

I received a few questions about my posts on automatic content classification. I would like to thank my blog readers for these questions; this post follows up on them.

Organizations receive a huge volume of documents every day: mail, invoices, faxes, and email. Even after these paper documents are scanned, they remain difficult to manage and organize.

To overcome the inefficiencies associated with paper and captured documents, companies should implement an intelligent classification system to organize captured documents.

With today’s document processing technology, organizations do not need to rely on manual classification or processing of documents. Organizations that replace manual sorting and classification with an automated document classification and processing system can realize a significant reduction in manual entry costs and improve the speed and turnaround time of document processing.

Recent research has shown that two-thirds of organizations cannot access their information assets or find vital enterprise documents because of poor information classification or tagging. The survey suggests that much of the problem may be due to manual tagging of documents with metadata, which can be inconsistent and riddled with errors, if it has been done at all.

There are several solutions for automated document classification and recognition, including SmartLogic's Semaphore, OpenText, Interwoven MetaTagger, Documentum, CVISION Trapeze, and others. These solutions enable organizations to organize, access, and control their enterprise information.

They are cost effective and eliminate the inconsistency, mistakes, and huge manpower costs associated with manual classification. An effective, consistent automatic content classification system that ensures quick and easy retrieval of the right documents means better access to corporate knowledge, improved risk management and compliance, superior customer relationship management, enhanced findability for key audiences, and an improved ability to monetize information.

Specific benefits of automatic content classification are:

More consistency. It produces the same unbiased results over and over. It might not always be 100% accurate or relevant, but if something goes wrong, it is at least easy to understand why.

Larger context. It enforces classification from the whole organization's perspective, not the individual's. For example, a person interested in sports might tag an article that mentions a specific player but fail to consider the team or country topics.

Persistence. A person can handle only a certain number of incoming documents per day, while automatic classification works around the clock.

Cost effectiveness. It can handle thousands of documents much faster than a person can.

Automatic document classification can be divided into three types: supervised document classification, where some external mechanism (such as human feedback) provides information on the correct classification for documents; unsupervised document classification (also known as document clustering), where the classification must be done entirely without reference to external information; and semi-supervised document classification, where only part of the document collection is labeled by the external mechanism.
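As a rough sketch of the supervised and unsupervised approaches (using scikit-learn, which the post does not mention; the tiny corpus and labels are invented for illustration):

```python
# Supervised vs. unsupervised document classification, sketched with
# scikit-learn. The corpus and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.cluster import KMeans

docs = [
    "invoice payment due net thirty days",
    "invoice total amount payable remittance",
    "meeting agenda quarterly planning session",
    "meeting minutes project planning discussion",
]
labels = ["invoice", "invoice", "meeting", "meeting"]  # human-provided

vectors = TfidfVectorizer().fit_transform(docs)

# Supervised: learn from human-labeled examples, then predict.
clf = MultinomialNB().fit(vectors, labels)
print(clf.predict(vectors[:1]))  # classifies the first document

# Unsupervised (document clustering): no labels at all.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(clusters)  # group assignments, discovered without human input
```

Semi-supervised classification sits between the two: a small labeled set like `labels` seeds the model, and the structure of the unlabeled remainder refines it.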

Automate your content classification and consider using manual labor mainly for quality checking and approval of content.

In my next post on this topic, I will describe the role of automatic classification in records management and information governance.

Tuesday, June 10, 2014

Unified Data Management

In most organizations today, information is managed in isolated silos by independent teams using various tools for data quality, data integration, data governance, metadata and master data management, B2B data exchange, content management, database administration, information life-cycle management, and so on.

In response to this situation, some organizations are adopting Unified Data Management (UDM), a practice that holistically coordinates teams and integrates tools. Other common names for this practice include enterprise data management and enterprise information management.

Regardless of what you call it, the big picture that results from bringing diverse data disciplines together yields several benefits, such as cross-system data standards, cross-tool architectures, cross-team design and development synergies, leveraging data as an organizational asset, and assuring data’s integrity and lineage as it travels across multiple departments and technology platforms.

But unified data management is not purely an exercise in technology. Once data becomes an organizational asset, the ultimate goal of UDM becomes to achieve strategic, data-driven business objectives, such as fully informed operations and business intelligence, plus related goals in governance, compliance, business transformation, and business integration. In fact, the challenge of UDM is to balance its two important goals: uniting multiple data management practices and aligning them with business goals that depend on data for success.

What is UDM? It is the best practice for coordinating diverse data management disciplines, so that data is managed according to enterprise-wide goals that promote efficiency and support strategic, data-oriented business goals. UDM is unification of both technology practices and business management.

For UDM to be considered successful, it should satisfy and balance the following requirements:

1. UDM must coordinate diverse data management areas. This is mostly about coordinating the development efforts of data management teams and enabling greater inter-operability among their participants. UDM may also involve the sharing or unifying of technical infrastructure and data architecture components that are relevant to data management. There are different ways to describe the resulting practice, and users who have achieved UDM call it a holistic, coordinated, collaborative, integrated, or unified practice. The point is that UDM practices must be inherently holistic if you are to improve and leverage data on a broad enterprise scale.

2. UDM must support strategic business objectives. For this to happen, business managers must first know their business goals, then communicate data-oriented requirements to data management professionals and their management. Ideally, the corporate business plan should include requirements and milestones for data management. Hence, although UDM is initially about coordinating data management functions, it should eventually lead to better alignment between data management work and information-driven business goals of the enterprise. When UDM supports strategic business goals, UDM itself becomes strategic.

More Definitions

UDM is largely about best practices from a user’s standpoint. Most UDM work involves collaboration among data management professionals of different specialties (such as data integration, quality, master data, etc.). The collaboration fosters cross-solution data and development standards, inter-operability of multiple data management solutions, and a bigger concept of data and data management architecture.

UDM is not a single type of vendor-supplied tool. Even so, a few leading software vendors are shaping their offerings into UDM platforms. Such a platform consists of a portfolio of multiple tools for multiple data management disciplines, the most common being BI, data quality, data integration, master data management, and data governance.

For the platform to be truly unified, all tools in the portfolio should share a common graphical user interface (GUI) for development and administration, servers should inter-operate in deployment, and all tools should share key development artifacts (such as metadata, master data, data profiles, data models, etc.). Having all these conditions is ideal, but not an absolute requirement.

UDM often starts with pairs of practices. In other words, it’s unlikely that any organization would want or need to coordinate 100% of its data management work via UDM or anything similar. Instead, organizations select combinations of data management practices whose coordination and collaboration will yield appreciable benefits.

The most common combinations are pairs, as with data integration and data quality or data governance and master data management. Over time, an organization may extend the reach of UDM by combining these pairs and adding in secondary, supporting data management disciplines, such as metadata management, data modeling, and data profiling. Hence, the scope of UDM tends to broaden over time into a more comprehensive enterprise practice.

A variety of organizational structures can support UDM. It can be a standalone program or a subset of larger programs for IT centralization and consolidation, IT-to-business alignment, data as an enterprise asset, and various types of business integration and business transformations. UDM can be overseen by a competency center, a data governance committee, a data stewardship program, or some other data-oriented organizational structure.

UDM is often executed purely within the scope of a program for BI and data warehousing (DW), but it may also reach into some or all operational data management disciplines (such as database administration, operational data integration, and enterprise data architecture).

UDM unifies many things. As its name suggests, it unifies disparate data disciplines and their technical solutions. On an organizational level, it also unifies the teams that design and deploy such solutions. The unification may simply involve greater collaboration among technical teams, or it may involve the consolidation of teams, perhaps into a data management competency center.

In terms of deployed solutions, unification means a certain amount of inter-operability among servers, and possibly integration of developer tool GUIs. Technology aside, UDM also forces a certain amount of unification among business people, as they come together to better define strategic business goals and their data requirements. When all goes well, a mature UDM effort unifies both technical and business teams through IT-to-business alignment.

Why Care About UDM?

There are many reasons why organizations need to step up their efforts with UDM:

Technology drivers. From a technology viewpoint, the lack of coordination among data management disciplines leads to redundant staffing and limited developer productivity. Even worse, competing data management solutions can inhibit data’s quality, integrity, consistency, standards, scalability, architecture, and so on. On the upside, UDM fosters greater developer productivity, cross-system data standards, cross-tool architectures, cross-team design and development synergies, and assuring data’s integrity and lineage as it travels across multiple organizations and technology platforms.

Business drivers. From a business viewpoint, data-driven business initiatives (including BI, CRM, regulatory compliance, and business operations) suffer due to low data quality and incomplete information, inconsistent data definitions, non-compliant data, and uncontrolled data usage. UDM helps avoid these problems, plus it enables big picture data-driven business methods such as data governance, data security and privacy, operational excellence, better decision making, and leveraging data as an organizational asset.

To be successful, an organization needs a data strategy that integrates a multitude of sources, case studies, deployment technologies, regulatory and best practices guidelines, and any other operating parameters.

Many organizations have begun to think about the contributions of customer information in a more holistic way as opposed to taking a more fragmented approach. In the past, these efforts were undertaken mainly by organizations traditionally reliant on data for their model (like finance and publishing), but today, as more and more businesses recognize the benefits of a more comprehensive approach, these strategies are gaining much wider deployment.

What should a good strategy look like?

First, identify all data assets according to their background and potential contribution to the enterprise.

Second, outline a set of data use cases that will show how information will support any of a variety of customer marketing functions.

Next, create rules and guidelines for responsible use and access, making sure that the process is flexible and transparent. Keep in mind that not all data should be treated the same way; rather, it should be managed according to its sensitivity and need.

Finally, make sure that this process is ongoing so that tactics can be evaluated and adjusted as needed.

Such a strategy combines best practices with responsible data governance and smart organization. Everyone wins: the employees who gain quick access to essential information, the enterprise that runs more smoothly, and of course the customers who are served by a resource-rich organization!

Thursday, May 15, 2014

Content Categorization Role in Content Management

An ability to find content in a content management system is crucial. One of the main goals of having a content management system is to make content easy to find, so that you can take action, make a business decision, do research and development work, and so on.

The main challenge to findability is anticipating how users might look for information. That's where categorization comes into play. The quality of the categorization of each piece of content makes or breaks its findability. Theoretically, good tagging will last the lifetime of the content. You would think that if you do it well initially, then you can forget about it until it is time to retire that content. But reality can be very different.

Durable Categorization

Many issues complicate content categorization. They include:
  • the sheer volume, velocity, and variety of internal and external-facing content which needs management;
  • evolving/emerging regulations and compliance issues, some of which need to be retroactively applied; 
  • the need to limit the company's exposure and to support the strength of its position in any legal activity.
Some organizations face the added challenge of integrating content from acquisitions or mergers, which most likely use content management structures, categorization, and methodologies that are incompatible and of inconsistent quality.

Considering these issues, the key success factors for good content categorization are automatic categorization techniques and processes.

Traditionally, keywords, dictionaries, and thesauri are used to categorize content. This type of categorization model poses several problems:
  • taxonomy quality - it depends on the initial vision and attention to detail, and whether it has been kept current;
  • term creep - initial categorization will not always accommodate where and how the content will be used over time, or predict relevancy beyond its original focus;
  • policy evolution - it can't easily apply new or evolving policies, regulations, compliance requirements, etc.;
  • cost and complexity - it is difficult and costly, if not practically impossible, to retroactively expand the original categorization of existing content when a large amount of content is added.
Automatic Categorization

Using technology to automatically categorize content is the solution. It applies rules more consistently than people do, and faster. It frees people from the task and therefore costs less. And it can actively or retroactively categorize batches or whole collections of documents.

You can experience these benefits by using concept-based categorization driven by an analytics engine integrated into the content management system. These systems mathematically analyze example documents you provide to derive concepts that can be used to categorize other documents. By identifying hundreds of keywords per concept, they can detect relevance that escapes keyword-based and other traditional taxonomy approaches, and they often make connections that a person would miss.
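A minimal sketch of the underlying idea, in plain Python, is cosine similarity over word counts; this is a drastic simplification of what commercial analytics engines do, and the example documents are invented:

```python
# Concept-based categorization sketched as cosine similarity over word
# counts. A simplified illustration, not how any commercial analytics
# engine actually works; the example documents are invented.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Example documents that teach the system each concept.
category_examples = {
    "3d-printing": "additive manufacturing fabber prints layers of material",
    "finance": "quarterly revenue earnings forecast and budget report",
}

new_doc = "the fabber builds the part in thin layers of material"

scores = {cat: cosine(vectorize(new_doc), vectorize(example))
          for cat, example in category_examples.items()}
best = max(scores, key=scores.get)
print(best)  # the closest concept
```

The new document never says "3D printer", yet it lands in the right category because it shares vocabulary with the example document that defines the concept.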

Consider 3D printers as an example. These are also known as "materials printers," "fabbers," and "3D fabbers," and the process is known as "additive manufacturing." If those terms are not in the taxonomy, then relevant documents that use one or more of them, but not "3D printer," would not be optimally categorized.

People looking for information about 3D printers who are not aware of the alternative terms would miss related documents of potential significance. This particularly impacts external-facing websites that sell products: their business depends on fast and easy delivery of accurate and complete information, even when a customer doesn't know all of the various terms used to describe the product they are looking for.
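A taxonomy-side workaround is an explicit synonym map used for query expansion, sketched below (the synonym list and documents are invented for illustration):

```python
# Query expansion with a synonym map, so a search for "3D printer" also
# matches documents that only say "fabber" or "additive manufacturing".
# The synonym map and documents are illustrative.
SYNONYMS = {
    "3d printer": ["materials printer", "fabber", "3d fabber",
                   "additive manufacturing"],
}

documents = [
    "Our additive manufacturing line cut prototyping time in half.",
    "The new fabber supports two polymer types.",
    "Quarterly sales report for the hardware division.",
]

def search(query, docs):
    terms = [query.lower()] + SYNONYMS.get(query.lower(), [])
    return [d for d in docs if any(t in d.lower() for t in terms)]

print(search("3D printer", documents))  # finds the first two documents
```

The weakness is exactly the one described above: someone must maintain the synonym map, and every term it misses is a document the search misses.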

In contrast, through example-based mathematical analysis and comparison across multiple keywords, conceptual analytics systems understand that these documents are all related, and they would be automatically categorized and tagged as relevant to 3D printing.

Another difference is that taxonomy systems require someone to enter the newly developed or discovered terms. In conceptual analytics, it is simply a matter of providing additional example documents that automatically add to the system's conceptual understanding.

The days of keeping everything "just in case" are long gone. Given cost and risk-exposure concerns, organizations need to keep only what is necessary, particularly as the volume and variety of content continue to grow. Good categorization and tagging systems are essential to good content management and to controlling expense and exposure.

Outdated and draft documents unnecessarily expand every company's content repositories. Multiple copies of the same or very similar content are scattered throughout the organization. By some estimates, these make up 20% or more of a company's content.

Efficiently weeding out that content means 20% less active and backup storage, bandwidth, cloud storage for offsite disaster recovery, and archive volume. Effective and thorough tagging can identify such elements to reduce these costs, and simultaneously reduce the company's cost and exposure related to legal or regulatory requirements.
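Exact duplicates, at least, are cheap to find with a content hash; a sketch (the file names and contents are invented, and near-duplicates require fuzzier techniques such as shingling):

```python
# Exact-duplicate detection by content hash. File contents are invented;
# near-duplicate detection needs fuzzier methods (e.g., shingling).
import hashlib

documents = {
    "report_v1.docx": b"Q3 results draft",
    "report_copy.docx": b"Q3 results draft",      # exact duplicate
    "report_final.docx": b"Q3 results final",
}

seen = {}
duplicates = []
for name, content in documents.items():
    digest = hashlib.sha256(content).hexdigest()
    if digest in seen:
        duplicates.append((name, seen[digest]))   # (copy, original)
    else:
        seen[digest] = name

print(duplicates)
```

Each entry in `duplicates` is a removal candidate paired with the copy being kept, which is the kind of list a weeding or defensible-disposal process starts from.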

The Value Beyond Cost Savings

Effectively managed content lowers the cost of content management and reduces exposure to risk. While this alone is reason to implement improvements in categorization, there are other reasons.

Superior categorization through conceptual analysis also improves operational efficiency by enabling fast, accurate, and complete content gathering. A significant benefit for any enterprise is that it allows more time for actual work by reducing the time it takes to find necessary information. This is critically important for companies whose revenue depends on their customers quickly and easily finding quality information.

Conceptual analytics systems deliver two other advantages over traditional taxonomy methods and manual categorization. First, they create a mathematical index, which is useless to anyone trying to discover private information or clues about the company. Second, they are deterministic and repeatable: they give the same result every time, which is very valuable in legal or regulatory activities.

Concept-based analysis makes content findable and actionable, regardless of language, by automatically categorizing it based on understanding developed from example documents you provide. Both internally and externally, the company becomes more competitive with one of its most important assets: unstructured information.

Wednesday, April 30, 2014

Video Instant Messaging for Collaboration and Knowledge Management

Due to falling costs, high-availability data connections, smart mobile devices, and the growth of cloud computing, knowledge management and enterprise collaboration are undergoing something of a rebirth. Most enterprise collaboration remains centered on documents, so there is an opportunity to extend beyond document-centric tasks to other collaborative, linear, process-oriented work. Simple-to-use video instant messaging (VIM) technology may be very useful in this much-needed shift.

Today, most companies use video for:
  • team conferencing;
  • customer interaction/service support;
  • pre-recorded corporate communications;
  • an alternative to text for training/education.
Beyond those areas, VIM functionality is just starting to arrive in companies. There are different opinions about VIM: some people like it and feel it helps to create a better "human connection," while others dislike it and claim that it invades their privacy. In fact, the most common arguments for and against the use of video are surprisingly similar to one another and suggest a cultural split that can be difficult to bridge or even manage.

Video instant messaging, as its name makes clear, is potentially the most invasive use of video in companies: picture an unexpected video call from an angry boss arriving on an active laptop, with the recipient unprepared. One-to-one, face-to-face discussions are the optimal use of such real-time technology, and there are valuable ways of using it.

What is Video Conferencing

Video conferencing is when multiple people, typically four or more, in more than one geographic location use audio and video technology to connect via a virtual conference room environment to conduct a pre-arranged meeting.

Web conferencing (or webinar) is where multiple users watch a single remote screen; this method is typically used for one-to-many presentations.

Video instant messaging (VIM) is distinct from the other two in that the interactions are typically one-to-one and are also most typically not pre-arranged, but rather ad hoc.

Opportunities

VIM is not particularly new, but its use (and misuse) has until recently been largely confined to the consumer world. Some knowledge management vendors such as Citrix, TIBCO and Teambox are adding VIM to their product suites because VIM functionality is particularly relevant in the mobile era: almost all handheld devices contain cameras. This is key to understanding the value of VIM, since it's not so much the video as the mobile camera that has much potential.

Future use cases for VIM technology in knowledge management will be situations where a physical object or environment needs to be collaboratively viewed. For example, in a service situation, a technician looking at a complex wiring structure can use a mobile device to stream video of the situation to a remote expert, enabling them to share visuals and resolve the problem. Similarly, designers and other creative people can discuss and view samples and compare options in real time. In situations like these, the ability to share a clear, real-time moving image of the real environment via a handheld device can be invaluable and a highly efficient use of collaboration technology.

Pros and Cons of Using Video

Pros
  • Seeing the other attendees builds a deeper connection.
  • It reduces travel costs and time on the road.
  • Meetings can be recorded.
  • Processes can be more efficient.
  • Collaboration can be enhanced.
Cons
  • People are self-conscious on camera.
  • Audio/video quality varies.
  • A poor or lost connection means no conference at all.
  • It is not as good as face-to-face meetings.
  • Signal delays can ruin the meeting.
New Horizons

For traditional users, there is a somewhat parallel use case in that they often want to share a document, drawing, or whiteboard in real time with a colleague. It is about seeing the same thing and collaborating on it. VIM can have a valuable role in supporting collaborative processes in areas like law enforcement, healthcare, and engineering. The established use of video is for users who create a lot of text-based communications and occasionally need to see one another from their home offices.

Moving from a fixed camera position to a fully mobile situation opens the technology to many more uses. Video streams can support all the key management and productivity functions, such as calendaring, expertise location, file access, editing and viewing, knowledge sharing, and team and project management.

Three Priority Objectives

On a more tactical level, three objectives should guide VIM use.

First, focus on tying collaboration and knowledge sharing back to clearly defined business processes that involve human interaction. Take existing technology and configure it to meet your organization's needs, and use VIM to engage with work objects and environments rather than simply as a face-to-face conversation tool.

Second, in organizational situations, communication can't simply stop when someone hangs up. Whether with video, text, or voice, the trail needs to be kept traceable, and the files and data created should be contextualized and made relevant. Documents need to be retained intact until a decision has been made to remove or destroy them, depending on governance, housekeeping, and compliance requirements. This option should be available for video as well as other forms of interaction; the information lifecycle needs to be considered.

Third, if VIM is to be used for collaboration, then still-image functionality that freezes and captures views in HD should be provided within the VIM frame. This enables a detailed visual examination of a specific point in time, and also creates a file of record. Shifting between moving and still images in a collaborative engagement, allowing a free-flowing interaction and the capture of specific elements, would be of great value to many industry and process-specific collaborative situations.

Future

VIM has a key role to play in taking enterprise collaboration to the next level. This role is not so much about being able to see the face of the other person as it is about viewing and capturing in real time what the other party is seeing. Refining the use cases and collaborative processes that leverage VIM will take some effort, but over time VIM will prove valuable in the growth of enterprise collaboration. It will be especially useful for industries such as healthcare, law enforcement, engineering, and maintenance. Enterprise collaboration tools and suites already have many of the key pieces of the puzzle; now they need to put them together into a coherent, practical whole.

Tuesday, March 25, 2014

Search Applications - Vivisimo

Vivisimo was a privately held technology company that developed computer search engines. The company's product, Velocity, provided federated search and document clustering. Vivisimo's public web search engine, Clusty, was a metasearch engine with document clustering; it was sold to Yippy, Inc. in 2010.

The company was acquired by IBM in 2012, and the Vivisimo Velocity Platform is now IBM InfoSphere Data Explorer. It stays true to its heritage of providing federated navigation, discovery, and search over a broad range of enterprise content, covering many data sources and types both inside and outside an organization.

In addition to the core indexing, discovery, navigation, and search engine, the software includes a framework for developing information-rich applications that deliver a comprehensive, contextually relevant view of any topic for business users, data scientists, and a variety of targeted business functions.

InfoSphere Data Explorer solutions improve return on all types of information, including structured data in databases and data warehouses, unstructured content such as documents and web pages, and semi-structured information such as XML.

InfoSphere Data Explorer provides analytics on text and metadata that can be accessed through its search capabilities. Its focus on scalable yet secure search is part of why it became one of the leaders in enterprise search. The software's security features are critical, as organizations do not want search to make it faster for unauthorized users to access information.

Also key is the platform's flexibility in integrating sources across the enterprise. It also supports mobile technologies such as smartphones to make it simpler to access information from any platform.
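Faceted navigation over document metadata, one of the capabilities this class of product offers, can be sketched generically (this is plain Python for illustration, not InfoSphere Data Explorer's actual API; the documents and fields are invented):

```python
# Faceted navigation sketch: count documents per value of each metadata
# field so a UI can show filters like "Type: invoice (2)".
# The documents and fields are invented for illustration.
from collections import Counter, defaultdict

documents = [
    {"type": "invoice", "year": 2013, "dept": "finance"},
    {"type": "invoice", "year": 2014, "dept": "finance"},
    {"type": "report",  "year": 2014, "dept": "engineering"},
]

facets = defaultdict(Counter)
for doc in documents:
    for field, value in doc.items():
        facets[field][value] += 1

print(dict(facets["type"]))  # counts per document type
```

Selecting a facet value then simply filters the document list, and the counts for the remaining facets are recomputed over the filtered set.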

Features and benefits

1. Secure, federated discovery, navigation and search over a broad range of applications, data sources and data formats.
  • Provides access to data stored in a wide variety of applications and data sources, both inside and outside the enterprise, including: content management, customer relationship management, supply chain management, email, relational database management systems, web pages, networked file systems, data warehouses, Hadoop-based data stores, columnar databases, cloud and external web services.
  • Includes federated access to non-indexed systems such as premium information services, supplier or partner portals and legacy applications through the InfoSphere Data Explorer Query Routing feature.
  • Relevance model accommodates diverse document sizes and formats while delivering more consistent search and navigation results. Relevance parameters can be tuned by the system administrator.
  • Security framework provides user authentication and observes and enforces the access permissions of each item at the document, section, row and field level to ensure that users can only view information they are authorized to view in the source systems.
  • Provides rich analytics and natural language processing capabilities such as clustering, categorization, entity and metadata extraction, faceted navigation, conceptual search, name matching and document de-duplication.
2. Rapid development and deployment framework to enable creation of information-rich applications that deliver a comprehensive view of any topic.
  • InfoSphere Data Explorer Application Builder enables rapid deployment of information-centric applications that combine information and analytics from multiple sources for a comprehensive, contextually-relevant view of any topic, such as a customer, product or physical asset.
  • Widget-based framework enables users to select the information sources and create a personalized view of information needed to perform their jobs.
  • Entity pages enable presentation of information and analytics about people, customers, products and any other topic or entity from multiple sources in a single view.
  • Activity Feed enables users to "follow" any topics such as a person, company or subject and receive the most current information, as well as post comments and view comments posted by other users.
  • Comprehensive set of Application Programming Interfaces (APIs) enables programmatic access to key capabilities as well as rapid application development and deployment options.
3. Distributed, highly scalable architecture to support large-scale deployments and big data projects.
  • Compact, position-based index structure includes features such as rapid refresh, real-time searching and field-level updates.
  • Updates can be written to indices without taking them offline or re-writing the entire index, and are instantly available for searching.
  • Provides highly elastic, fault-tolerant, vertical and horizontal scalability, master-master replication, and "shared nothing" deployment.
4. Flexible data fusion capabilities to enable presentation of information from multiple sources.
  • Information from multiple sources can be combined into "virtual documents."
  • Large documents can be automatically divided into separate objects or sub-documents that remain related to a master document for easier navigation and comprehension by users.
  • Enables creation of dynamic "entity pages" that allow users to browse a comprehensive, 360-degree view of a customer, product or other item.
5. Collaboration features to support information-sharing and improved re-use of information throughout the organization.
  • Users can tag, rate and comment on information.
  • Tags, comments and ratings can be used in searching, navigation and relevance ranking to help users find the most relevant and important information.
  • Users can create virtual folders to organize content for future use and optionally share folders with other users.
  • Navigation and search results can return pointers to people to enable location of expertise within an organization and encourage collaboration.
  • Shared Spaces allow users to collaborate about items and topics that appear in their individualized views.