Thursday, January 29, 2015

Taxonomy in the Age of Agile and Shadow IT

Marketing Meets IT and Merges

Information presented to customers must meet near-instant demands from a multitude of perspectives. This includes all end users, internal and external. Organizations will need to remain versatile and clever in their approach to data management.

Digital marketers are becoming quite clever in dealing with data, using it to persuade their (or their clients') customers and potential customers. Borrowing principles from usability, marketers now use terms such as "user experience" (UX) and "user interface" (UI) and develop specialty roles to turn data into the most pleasant user experience possible. The food, beverage and hospitality industries aren't the only ones facing this issue. Every industry is.

Build to Further Understand

Taxonomy and information architecture are not just about designing a great way to organize content for the end user. Taxonomy can also be used to understand data. Information, its structure and its agility are key to modern design techniques in an era of such vast volumes of data.

Your organization can benefit from developing a taxonomy for its information. Designing your information flow isn't always as easy as it sounds. Consulting with a trusted professional who can analyze various aspects of your business is often needed to alleviate the stresses of a complex business taxonomy. A specialist can take your data and help you make sense of it. Whether you are a hospital that needs to define protocols for accessing patient data or a retail website seeking comprehensive analysis of information about web traffic, an internal audit of information systems can help get you on the right track toward efficiency.

Start by assessing your digital (and even non-digital) tools to determine problem areas within the organization, such as incomplete records or inconsistent rules or terms. The way systems communicate with customers, employees or other stakeholders is important to consider as well. Check that these systems can perform essential functions properly and that access and other rules are clearly defined. A master data management solution will help with this. Many Fortune 1000 companies are going this route to deal with their organization's information.

Agile Systems Will Assist in Achieving Maximum Comprehension

The right information asset management tools make all the difference. Software for managing terms and concepts gives users within the organization the right context for using them. Thesaurus management, ontology software, metadata or cataloging software, auto-categorization, search, and other tools used in concert will maintain stability.

For example, to handle taxonomy appropriately, an agile system that includes auto-categorization and search tools (including text mining) would ship with pre-installed taxonomies (both user-editable and non-editable), be able to auto-generate editable taxonomies, and support the import of editable and non-editable taxonomies. To be agile for any type of end user, these capabilities must work in several different combinations.
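To make the auto-categorization idea concrete, here is a minimal sketch of rule-based categorization against a pre-installed taxonomy. The categories and keyword lists are hypothetical, not taken from any particular product.

```python
# A minimal sketch of rule-based auto-categorization against a
# pre-installed taxonomy. The categories and keyword lists are
# hypothetical, not taken from any particular product.

TAXONOMY = {
    "Finance": {"invoice", "budget", "payment", "audit"},
    "HR": {"hiring", "benefits", "payroll", "onboarding"},
    "Legal": {"contract", "compliance", "liability"},
}

def categorize(text):
    """Return taxonomy categories ranked by keyword overlap."""
    words = set(text.lower().split())
    scores = {cat: len(words & kw) for cat, kw in TAXONOMY.items()}
    return sorted((c for c, s in scores.items() if s > 0),
                  key=lambda c: -scores[c])

print(categorize("Please review the contract for compliance issues"))
# → ['Legal']
```

An agile system would go further: letting users edit these taxonomies, auto-generating new ones from a corpus, and importing external vocabularies.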

The same principle applies to other software categories such as content management or thesaurus software. Search functions, for example, are more useful to the end user when they include spell-checking or multiple display options. Remember, these principles apply to both internal and external users. Information architecture mastered on the inside will translate better to the outside.

Taxonomy and information architecture should be the foundation of an enterprise view of the customer. So how must an organization view its own vocabulary? Ensure that your master data management is interpreting terms consistently and includes context for those terms. Without context, people are often left searching for clues instead of getting the information they came for.

Maintaining Culture While Establishing Order

The above phrase sounds more like a political statement than one about taxonomy. However, when you think about it from a business perspective, it makes some sense. When you are implementing your agile system for taxonomy and information architecture, you don't want to disrupt the critical flow of business or the information that is required to actually do business.

What you do want is to be able to open the pipeline of information further to increase productivity and enable efficient processes within the organization. That statement sums up the general need to implement an agile system for handling information.

Part of the solution to this is governance and compliance controls. By introducing hard controls for governance and compliance, you are forming a backbone with controls for how systems are using and integrating data. Your taxonomy, metadata and other information may connect business processes or even use content to complete or help complete a variety of tasks.

The exact structure of an organization varies from enterprise to enterprise, and so does its culture. These differences can be accommodated with careful planning. The key is being able to harness information collectively, selectively and to varying degrees; that is what will make the major difference in opening the pipeline up while keeping controls in place to establish order.

Taxonomy Soup: Collaboration, Integration and Access

Here is an analogy from social science: just as the language of a region or culture varies in dialect, so does the language of business. Between industries, organizations, departments, fields of study and practice, and a wide variety of other factors, confusion abounds. The real benefit of a system with agility is the ability to communicate more efficiently. Collaboration, integration and access are the key ingredients of the perfect taxonomy soup.

The ability to sort terms is very important and will become even more so in the big data era. A great system is one that can differentiate between taxonomies carefully and consistently. Collaboration becomes more efficient as a result.

By integrating data that does not fit the organization's established vocabulary, the organization can make better use of its information assets, whatever they may be. This means getting all of the information into the right places and ultimately into the right hands in the correct way. Policies and procedures are important examples of such data.

Analyze Your Needs Carefully

Take a look around your organization. Take notes on every detail you can to make an informed decision about what to implement and where. Consulting with a professional is the next step. The aforementioned details of creating an agile environment for taxonomy and information architecture within an enterprise of nearly any size are helpful in beginning to form a strategy for handling your enterprise information. Consulting a professional will help alleviate the overwhelming and burdensome task of data complexity.

Monday, January 12, 2015

Making Information Easier to Find Becomes Ever More Important

Taxonomy is becoming so much more important in the digital age that entire enterprises may one day develop out of the need just to classify information. The many ways we have traditionally classified content have grown exponentially in the digital age, to a scale previously unimagined and still growing.

The Library of Congress and other libraries, large and small, have gone to using digital tools to classify and re-classify information about books, documents, texts and even multimedia content. The Internet was, of course, developed to help more easily share and organize documents and other content across a computer network. Now, here we are with the cloud and big data.

Where to begin?

The volume of data per enterprise, or even per individual, can become difficult or even unmanageable to handle without the right tools or experience to guide you. As humans and consumers, we tend to expect that our options will be categorized into specific types based on the larger type. For instance, if we buy a computer, the choices are usually as follows: brand, device type, operating system, etc.

Then we get into Apple vs. Dell, desktop vs. tablet or BlackBerry vs. Android. The more immediate platforms that come to mind are the Google or Bing search engines, and hashtags or networks on social media like Twitter and Facebook. These are the most recent consumer examples of classifying information in a multitude of ways using software. Enterprises in all industries, however, are becoming more dependent on systems to help them manage information at scale.

The time it takes members of an organization to find important or relevant information is productivity lost. It also adds to personnel frustration, even at the management level.

Time to Give Industries Options for Information Management

Companies are responding to the needs of industry. Taxonomy, metadata, ontology, data virtualization and data governance are some of the key areas of need for many organizations dealing with vast amounts of data coming from customers, partners, legal or other channels.

TopQuadrant, which recently released a web-based taxonomy solution, is an example of how these enterprise needs are so far being addressed, according to KMWorld.

TopBraid, the software referenced, helps end users reference data through more easily accessible visual models, laid out in a clean way.

Greater emphasis on visual representation of data is becoming an IT industry-wide way to tackle some of the problems associated with explaining complex data sets. Asian countries have, in fact, had a great deal of success using visual models to teach mathematics and ease students into new topics, something many American educational settings have struggled with.

TopQuadrant is just one recent example. There are tools and software being developed in the market to deal with this exponentially growing challenge.

Taxonomy Time for Taxonomists

So what does a taxonomist do that can help arrange and set a standard for all of this enterprise information that we are dealing with in ever increasing amounts? Well, for one, taxonomists are tasked not only with the categorization of terms but also with the governance and definition of those terms.

They often use commercial software dedicated to this work, such as a thesaurus or taxonomy management application. Such tools can also be developed internally, for the right organization, as long as they fit its particular needs.

Sometimes taxonomy management tools are part of another suite or product, in which taxonomy is a feature. In the case of Drupal, a content management tool for building and maintaining websites, taxonomy is used to define or classify content, which can then be configured to display nodes, pages and so on to the end user. General-purpose tools, such as spreadsheets, can also be used.

Lastly, open source software for taxonomy and ontology management is becoming available as well.

The benefits of having a system or person that maintains taxonomy within the enterprise are several. One benefit is that information is organized, as I have already alluded to. Another is that this information can be made easier to find for customers, personnel, vendors, supply partners, etc., which I have also discussed. One we have not discussed is standardization.

This refers to terminology and jargon within your particular company. Every company can create a manual of terms, glossary, thesaurus, etc. But a taxonomist or someone working with taxonomy software can refine this process and create a standard that works efficiently across the company, so everyone is in compliance. It is kind of like having a style guide, but only for key terms of the business.
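One way to picture that standard is as a mapping from variant spellings and jargon to a single preferred term, much as a thesaurus records "use/used for" relationships. A minimal sketch, with invented example terms:

```python
# A sketch of a company term standard: variant spellings and jargon
# map to a single preferred term, much as a thesaurus records
# "USE/USED FOR" relationships. All terms are invented examples.

PREFERRED = {
    "client": "customer",
    "purchaser": "customer",
    "sales order": "order",
    "p.o.": "purchase order",
}

def normalize(term):
    """Return the preferred form of a term, or the term unchanged."""
    cleaned = term.lower().strip()
    return PREFERRED.get(cleaned, cleaned)

print(normalize("Client"))  # → customer
print(normalize("P.O."))    # → purchase order
```

Run every incoming tag or label through a table like this and the whole company ends up speaking the same vocabulary.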

Compliance is another key benefit of all of this. Regulations need to be followed and adhered to. Information has other legal and regulatory impacts, and taxonomy, ontology and information management are a few ways to stay ahead of the mess. Information audits can be a great way to find holes in your system and develop ways to patch those holes for greater governance and compliance. All of this can save time and money on business operations in one way or another.

Techniques for Creating a Great Information Structure

Taxonomy and information management start with a few basic techniques to help guide end users to the information they are trying to find. The less time spent navigating, the better.

Not only are terms important, but so are their relationships. Sometimes the same information can be found using one term or another, depending on the scenario. If you are looking for fast blue cars, for example, you could search cars by color or by speed.
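That "color or speed" idea is the essence of faceted retrieval. A minimal sketch, with invented data and field names:

```python
# A minimal sketch of faceted retrieval: the same cars are reachable
# through the "color" facet or the "speed" facet. The data and field
# names are invented for illustration.

cars = [
    {"model": "Falcon",  "color": "blue", "top_speed_mph": 160},
    {"model": "Harrier", "color": "red",  "top_speed_mph": 140},
    {"model": "Kestrel", "color": "blue", "top_speed_mph": 120},
]

def facet_filter(items, **criteria):
    """Keep items matching every facet; a value may be a predicate."""
    def matches(item):
        for key, want in criteria.items():
            value = item.get(key)
            ok = want(value) if callable(want) else value == want
            if not ok:
                return False
        return True
    return [item for item in items if matches(item)]

# Fast blue cars, found by combining two facets:
fast_blue = facet_filter(cars, color="blue",
                         top_speed_mph=lambda s: s >= 150)
print([c["model"] for c in fast_blue])  # → ['Falcon']
```

The same items answer queries on either facet alone or on both at once, which is exactly what good term relationships buy you.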

Standards should also weed out content that is irrelevant or invalid. Other types of information related to terms can be used in conjunction with taxonomy. There should be clear hierarchies of information within the enterprise as well. 

All of this data should work with other tools such as content management, indexing and search. The software should support the ANSI/NISO Z39.19 or ISO 2788 thesaurus standards. Different classes of information may apply to sets or even subsets.
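The hierarchies those thesaurus standards describe boil down to broader/narrower term (BT/NT) relationships. A sketch in that style, with an illustrative vocabulary:

```python
# A sketch of broader/narrower term (BT/NT) relationships in the
# style of the ANSI/NISO Z39.19 standard. The vocabulary is
# illustrative only.

NARROWER = {
    "vehicles": ["cars", "trucks"],
    "cars": ["sedans", "coupes"],
}

def all_narrower(term):
    """Collect every narrower term, transitively (NT closure)."""
    result = []
    for child in NARROWER.get(term, []):
        result.append(child)
        result.extend(all_narrower(child))
    return result

print(all_narrower("vehicles"))  # → ['cars', 'sedans', 'coupes', 'trucks']
```

A search tool that expands a query over this closure can match documents tagged "sedans" when the user asks for "vehicles".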

Make sure that any software you use will generate reports on term analytics and the like. This is very important.

A variety of ontology and thesaurus options are available, on a multitude of platforms. Here are a few: MultiTes Pro (Microsoft Windows), Cognatrix (Apple Mac OS X), One-2-One (Windows) and TheW32 (multi-platform). There don't appear to be many options for mobile platforms yet (iOS, Android, BlackBerry and Windows Phone).

The information management problem in the world of big data seems pervasive, but there is a growing trend toward developing new ways of dealing with it. Now is the time to start creating a plan for a system that handles taxonomy, ontology and information management, to help your organization's users access data more quickly, efficiently and sensibly.

The more content builds up, the more the organization needs to change, adapt and, most importantly, handle the big data involved.

Tuesday, December 30, 2014

Latest Applications in Enterprise Search

In my previous post, I described the future of enterprise search. In this post, I will describe a few new search applications that could be interesting.

Concept Searching

Founded in 2002, Concept Searching provides software products that deliver automatic semantic metadata generation, auto-classification, and powerful taxonomy management tools. Concept Searching describes itself as the only platform-independent statistical metadata generation and classification software company that uses concept extraction and compound term processing to significantly improve access to unstructured information. The Concept Searching Microsoft suite of technologies runs in all versions of SharePoint, Office 365, and OneDrive for Business.

The technologies are used to improve search outcomes, deploy an enterprise metadata repository, enable effective records management, identify and secure sensitive information, improve governance and compliance, support social tagging, collaboration and text analytics, facilitate eDiscovery, and drive intelligent migration.

Concept Searching, developer of the Smart Content Framework™, provides organizations with a method to mitigate risk, automate processes, manage information, protect privacy, and address compliance issues. This infrastructure framework utilizes a set of technologies that encompasses the entire portfolio of unstructured information assets, resulting in increased organizational performance and agility.

Lexalytics, Inc.

Lexalytics provides enterprise and hosted text analytics software to transform unstructured text into structured data. The software extracts entities (people, places, companies, products, etc.), sentiment, quotes, opinions, and themes (generally noun phrases) from text. Unstructured text is estimated to comprise somewhere between 31% and 85% of the data stored in any given enterprise.
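To give a feel for what entity and sentiment extraction means, here is a naive sketch of the kind of extraction a text-analytics engine performs at far greater sophistication. The word lists and heuristics are invented for illustration and are not Lexalytics' actual method.

```python
# A naive sketch of entity and sentiment extraction. Real engines
# use natural language processing; this uses crude heuristics and
# tiny hand-made word lists, all invented for illustration.

POSITIVE = {"great", "excellent", "fast", "reliable"}
NEGATIVE = {"slow", "broken", "poor", "frustrating"}

def extract_entities(text):
    """Capitalized words that do not start a sentence, as a crude
    stand-in for real named-entity recognition."""
    words = text.split()
    entities = []
    for i, word in enumerate(words):
        token = word.strip(".,;:!?")
        if (i > 0 and token[:1].isupper()
                and not words[i - 1].endswith((".", "!", "?"))):
            entities.append(token)
    return entities

def sentiment(text):
    """Positive minus negative word count."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

text = "The team at Acme was great, but the Portal search felt slow."
print(extract_entities(text))  # → ['Acme', 'Portal']
print(sentiment(text))         # → 0
```

The mixed review scores zero overall, which hints at why real engines attribute sentiment to specific entities rather than whole documents.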

Lexalytics is an OEM vendor of text analytics and sentiment analysis technology for the social media monitoring, brand management, and voice-of-customer industries. The software uses natural language processing to extract the above-mentioned items from social media and forums; from the voice of the customer in surveys, emails, and call-center feedback; and from traditional media, pharmaceutical research and development, internal enterprise documents, and other sources.

Lexalytics provides a text mining engine that is used by a number of search partners, such as Coveo, Playence, and Oracle, to add metadata to their search. This is additional intelligence around the question "just what do those words actually mean?" In other words, the engine boosts the value of search by feeding more information into the index, which enables other applications and helps search be "smarter".

MaxxCAT

MaxxCAT provides enterprise search solutions for corporate intranets, web sites, databases, file systems, applications, and other environments that require rapid document retrieval from multiple data sources. Its flagship products are the SB-250 and EX-5000 series network search appliances. A series of cloud-enabled storage appliances is also available.

Basis Technology

Founded in 1995, this software company specializes in applying artificial intelligence techniques to understanding documents written in different languages. Their software enhances parsing tools by classifying the role of words and provides metadata on the role of words to other algorithms. Software from Basis Technology will, for instance, identify the language of an incoming stream of characters and then identify the parts of each sentence like the subject or the direct object.

The company is best known for its Rosette Linguistics Platform, which uses natural language processing techniques to improve information retrieval, text mining, search engines and other applications. The tool is used by major search engines and translators to create normalized forms of text. Basis Technology software is also used by forensic analysts to search through files for words, tokens, phrases or numbers that may be important to investigators.

dtSearch

Founded in 1991, this company specializes in text retrieval software. Its current range of software includes products for enterprise desktop search, Intranet/Internet spidering and search, and search engines for developers (SDK) to integrate into other software applications.

LTU technologies

Founded in 1999, this company specializes in image recognition for commercial and government customers. It provides technologies for image matching, similarity and color search for integration into applications for mobile, media intelligence and advertisement tracking, ecommerce and stock photography, brand and copyright protection, law enforcement and more.

Sematext Group, Inc.

This company's product SSA - Site Search Analytics - continuously monitors, measures, and improves the search experience. It identifies top queries, problematic zero-hit queries, common misspellings, etc. It measures and compares search relevance and improves conversion rates. It is available on-premises and in the cloud.

Exorbyte

This is a privately held software company which was founded in 2000 in Konstanz, Germany, with an additional office in the United Kingdom (Bristol). The company develops intelligent software for search and analysis of structured and semi-structured data.

Their product MatchMaker is the leading error-tolerant search and match platform for huge master data volumes. The multiple award-winning software thinks, searches and finds like a human – but dramatically faster, in much more complex configurations and without restrictive match-keys or similar methods. It is available on-premises and in the cloud.

Federal authorities, insurance agencies, ICT firms and others use this software for identity resolution in diverse, data-intensive business processes such as input management, enterprise search and data quality. It is easy to customize and integrate.

Inbenta

Founded in 2005, this company provides enterprise semantic search technology based on artificial intelligence and natural language processing. It offers intuitive search solutions and intelligent content support for websites and corporate intranets.

Content Analyst Company

This is a privately held software company which develops concept-aware text analytics software called CAAT, licensed to software product companies for use in eDiscovery. In 2013, five CAAT-powered products were named in the Gartner eDiscovery Magic Quadrant Report, and the analyst firm 451 Group referred to CAAT as "the hottest product in eDiscovery".

Content Analyst's CAAT analytics software is a machine learning system based on latent semantic indexing technology. CAAT provides several text analytics capabilities using both supervised learning and unsupervised learning methods including concept search, categorization, conceptual clustering, email conversation threading, language identification, near-duplicate identification, auto summarization and difference highlighting.
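To illustrate the latent semantic indexing idea that CAAT builds on, here is a toy sketch: factor a term-document matrix with SVD and compare documents in a reduced "concept" space. The corpus is invented for illustration.

```python
import numpy as np

# A toy sketch of latent semantic indexing: factor a term-document
# matrix with SVD and compare documents in a reduced "concept"
# space. The corpus is invented for illustration.

docs = [
    "car engine repair",
    "engine repair manual",
    "soup recipe taste",
]
terms = sorted({w for d in docs for w in d.split()})

# Term-document count matrix (terms as rows, documents as columns).
A = np.array([[d.split().count(t) for d in docs] for t in terms],
             dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # keep two latent concepts
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # documents in concept space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two engine-repair documents sit close together in concept
# space, while the soup document sits apart.
print(cosine(doc_vecs[0], doc_vecs[1]) > cosine(doc_vecs[0], doc_vecs[2]))  # → True
```

Because similarity is computed in the latent space rather than on raw keywords, documents can match a query even when they share no exact terms with it; that is what makes the approach "concept-aware".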

SearchYourCloud

With SearchYourCloud and its patented federated search technology, a single search request in Outlook simultaneously and transparently searches your email, desktop and all of your cloud storage sources, delivering highly targeted results. You get exactly the information you need with just one query.

Docurated

Docurated aggregates all your documents in one place, turning them into a searchable and customizable database, and now provides Dropbox integration as well. It accelerates sales at companies looking for fast growth by making the best marketing content readily available to sales teams around the world. Docurated works with your existing content stores and uses machine learning to enable your team to find and reuse the most effective content with no manual tagging or uploading.

Docurated describes itself as a next-generation visual knowledge management platform that solves the information retrieval problem for leading companies like Clorox, Omnicom, Netflix, the Weather Channel, and many others. It enables sales, marketing, and technology teams to surface and use the exact chart or slide they need, no matter where it is stored, without slogging through folders and files, and it integrates seamlessly with existing folder-based repositories.

Lucene

Apache Lucene is free, open source information retrieval software. It is supported by the Apache Software Foundation and is released under the Apache Software License. While suitable for any application that requires full-text indexing and searching, Lucene is widely recognized for its utility in implementing Internet search engines and local, single-site search.

At the core of Lucene's logical architecture is the idea of a document containing fields of text. This flexibility allows Lucene's API to be independent of the file format. Text from PDF, HTML, Microsoft Word, and OpenDocument files, as well as many other formats (excluding images), can be indexed as long as the textual information can be extracted.
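The document-with-fields model can be illustrated with a toy inverted index. This is not Lucene's actual API, just the shape of the idea; the field names and documents are invented.

```python
from collections import defaultdict

# A toy illustration of Lucene's core model (not its actual API):
# documents are bags of named text fields, and an inverted index
# maps each term back to the documents and fields containing it.

index = defaultdict(set)   # term -> {(doc_id, field_name)}
documents = {}             # doc_id -> stored fields

def add_document(doc_id, fields):
    documents[doc_id] = fields
    for field, text in fields.items():
        for term in text.lower().split():
            index[term].add((doc_id, field))

def search(term, field=None):
    """Doc ids containing the term, optionally limited to one field."""
    hits = index.get(term.lower(), set())
    return sorted({d for d, f in hits if field is None or f == field})

add_document(1, {"title": "Intro to Lucene", "body": "full text indexing"})
add_document(2, {"title": "Search basics", "body": "indexing and Lucene"})

print(search("lucene"))           # → [1, 2]
print(search("lucene", "title"))  # → [1]
```

Because the index only ever sees extracted field text, it does not care whether the source was a PDF, a Word file, or HTML.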

These are just a few of the search applications currently on the market. There are many others. Choosing the right one depends on your organization's requirements.

Future of Enterprise Search

Enterprise search is a developing industry. In this post, I will describe the latest developments in enterprise search.

Effective enterprise search represents one of the most challenging areas in business today. The whole area of search has been revolutionized by Google. Employees now expect to be able to locate relevant data as easily as they navigate the web through Google. When this ease of search is not replicated in organizations' systems, it can be quite frustrating. As we create more content than ever before, the importance of effective search across the enterprise continues to grow.

Until recently, much of the enterprise search technology remained unchanged. The general-purpose enterprise search offerings were fairly similar in technology and scope. There are now many software companies that direct their efforts toward enterprise search. The future will bring shorter innovation cycles, continuous user experience improvements, deeper integration with first- and third-party applications, and more ETL-like (extract, transform and load) functionality to handle poor quality content.

In the second half of the 2000s, enterprise search companies were absorbed by the large software companies:
  • Microsoft acquired FAST Search in 2008
  • Adobe acquired Mercado in 2009
  • Dassault Systèmes acquired Exalead in 2010
  • Hewlett Packard acquired Autonomy in 2011
  • Oracle acquired Endeca in 2011
  • IBM acquired Vivisimo in 2012
User experience is a broad topic in itself, with active trends including:
  • Richer information about the user to determine context, such as their business context, social context, mobile device sensors, location, speech recognition, preferences and historical usage.
  • Advances in visualization such as HTML 5.
  • Natural language processing as in the trends seen with Wolfram Alpha and smart phone digital assistants, such as Apple’s Siri, Microsoft’s Cortana and Google Now.
  • Richer results that look less like a page of links and more like answers to questions.
  • Elements of knowledge management that add meaning to queries and results.
  • Enterprise search products will become more deeply integrated with existing platforms, allowing more types of content to be searchable in more meaningful ways. Search will also become less of a dark art and more of a platform for discovery and analysis.
The future of enterprise search seems destined to continue with simple keyword and Boolean searching, augmented by faceted navigation based on metadata. Virtually every e-commerce web site today offers guided navigation based on metadata.

This ubiquitous model now appears in most of the leading enterprise search products and users immediately understand how a simple text query can quickly be focused to a specific domain by clicking on a metadata filter. This updated search model is increasing demand for auto-classification products which can generate descriptive metadata automatically based on an analysis of the document’s unstructured content.

Open source software has made significant improvements, displacing many of the traditional search vendors. Lucene and its supporting companies like LucidWorks provide solid search functionality at a hard-to-beat price. Where vendors are seeing success is in four main areas:
  • Providing functionality beyond typical "search" – extending to facets, true knowledge management, multimedia search, and other functionality.
  • Focusing on vertical-specific applications like fraud and supply-chain management.
  • Working with larger, more conservative enterprises.
  • Providing a SaaS, one-stop-shop for zero (or low) touch functionality.
A few major factors are going to drive the industry going forward:
  • Open source will continue to get better and drive inefficiency out of the market.
  • More, better information about the searcher: location awareness, profile sharing, time dependence, deeper understanding of the context and content of the search. With this information, you can provide better, more relevant results. 
  • Lower tolerance for hassle: people expect search to "just work" – not understanding that it can be just as complicated as any other major IT initiative. By having low-touch solutions, SaaS providers will make major progress in the small/medium business world.
  • Search all the things! Integrated understanding of objects, video and speech, as well as traditional semantic sources like text, will combine into a whole that allows for information retrieval no matter what the format.
Another area for future development is machine to machine consumption of information and sharing. Search providers are increasingly applying advanced analytics of text and other media so their users’ desires are more deeply satisfied through relevant search results. Search will be increasingly entity-centric and collaborative.

The future of search will include more semantic understanding of both content and queries. For example, Exorbyte focuses on searching structured master data – people, products and places – and its ability to query this data for both lexicographical and semantic similarity, without restrictive match-keys, is globally unique.

The future of search runs through natural language processing on the one hand, and the capability to perform advanced information analysis at indexing time on the other.

The facility to search within the document itself is becoming vital. The Docurated platform offers instant access to the most relevant page or slide without even having to open the document.

Effective enterprise search can eradicate inefficiency. Enterprise search will become instant and intuitive, paving the way for increased productivity across the enterprise.

In my next post, I will highlight a few search applications that could be worth looking into...

Sunday, November 30, 2014

SharePoint 2013 Improvements

In this post, I will describe a few improved features in SharePoint 2013.

Cross-Site Publishing

SharePoint 2013 has cross-site publishing. In the previous versions of SharePoint, it was not possible to easily share content across sites. Using cross-site publishing, users can separate authoring and publishing into different site collections: authored content goes into an indexable "catalog", and you can then use FAST to index and deliver dynamic content on a loosely coupled front end.

This feature is required for services like personalization, localization, metadata-driven topic pages, etc. An example of its use is a product catalog in an e-commerce environment. It can be used more generally for all dynamic content. Note that cross-site publishing is not available in SharePoint Online.

Here is how it works. First, you designate a list or a library as a "catalog". FAST then indexes that content and makes it available to publishing site collections via a new content search web part (CSWP). There are a few good features for creating and customizing CSWP instances, including some browser-based configuration. Run-time queries should execute faster against the FAST index than against a SharePoint database.

The cross-site publishing feature could significantly improve your content reuse capabilities by enabling you to publish to multiple site collections.

Templates

Creating templates still begins with a master page which is an ASP.NET construction that defines the basic page structure such as headers and footers, navigation, logos, search box, etc. In previous versions, master pages tended to contain a lot of parts by default, and branding a SharePoint publishing site was somewhat tricky.

SharePoint 2013 has a new Design Manager module, which is essentially a WYSIWYG master page/page layout builder. Design Manager is essentially an ASP.NET and JavaScript code generator. You upload HTML and CSS files that you create and preview offline. After you add more components in the UI (for example, specialized web parts), Design Manager generates the associated master page. Page layouts get converted to SharePoint-specific JavaScript that the platform uses to render the dynamic components on the page.

You can generate and propagate a design package to reuse designs across site collections. There are template snippets that enable you to apply layouts within a design package, but they are not reusable across design packages.

This process is more straightforward than in previous versions, but it will still likely involve a developer.

Contributing Content

SharePoint 2013 enables contributors to add more complex elements, like embedded code and video, that do not have to be based on a specific web part. This feature is called "embed code". Note that if you are using cross-site publishing with its search-based delivery, widget behavior may be tricky and could require IT support.

With respect to digital asset management, SharePoint has long had the ability to store digital assets. However, once you got past uploading an FLV or PNG file, there was little you could do to leverage it. SharePoint 2013 brings a new video content type, with automatic and manual thumbnailing.

The image renditions capability has also improved. It allows you to contribute a full-fidelity image to a library and then render a derivative of that image when it is served through a web page.

Other added features include better mobile detection/mobile site development and an improved editing experience.

Metadata and Tagging Services

SharePoint 2013 has solid metadata and tagging services, with an improved and simplified term store. However, there is still no versioning, version control, or workflow for terms.

A big improvement is that, using FAST, you can leverage metadata in the delivery environment much more readily than you could in previous versions. You can use metadata-based navigation structures (as opposed to folder hierarchies), and deploy automated category pages and link lists based on how items are tagged.

Saturday, November 15, 2014

Enterprise Content Management and Mobile Devices

With mobile devices becoming increasingly powerful, users want to access their documents while on the move. iPads and other tablets in particular have become very popular. Increasingly, employers allow employees to bring mobile devices of their choice to work.

"Bring Your Own Device" (BYOD) policies have become widespread in organizations, and users have started expecting and demanding features that enable them to work on their documents from mobile devices. The necessity of mobile access to content has therefore greatly increased in recent years.

As with most technology, mobile and cloud applications are driving the next generation of capabilities in ECM tools. The key capabilities in ECM tools are the ability to access documents via mobile devices, ability to sync documents across multiple devices, and the ability to work on documents offline.

Most tools provide a mobile Web-based application that allows users to access documents from a mobile device's Web browser. That is handy when users have a device for which the tool provides no dedicated application.

The capabilities of mobile applications vary across different tools. In some cases, the mobile application is very basic, allowing users to perform only read-only operations. In other cases, users can perform more complex tasks such as creating workflows, editing documents, changing permissions or adding comments.

Solutions and Vendors

Solutions have emerged that specialize in cloud file sharing (CFS) capabilities. Dropbox, Google Drive, Box.com, and Syncplicity (acquired by EMC) provide services for cloud-based file sharing, sync, offline work, and some collaboration for enterprises.

There is considerable overlap of services between these CFS vendors and traditional document management (DM) vendors. CFS vendors build better document management capabilities (such as library services), and DM vendors build (or acquire) cloud-based file sharing, sync, and collaboration services. Customers invested in DM tools frequently consider deploying relevant technology for cloud file sharing and sync scenarios. Similarly, many customers want to extend their usage of CFS platforms for basic document management services.

DM vendors actively trying to address these needs include Alfresco (via Alfresco Cloud), EMC, Microsoft (via SkyDrive/Office 365), Nuxeo (via Nuxeo Connect), and OpenText (via Tempo Box). Collaboration/social vendors like Jive, Microsoft, and Salesforce have also entered the enterprise file sharing market. Other large platform vendors include Citrix, which acquired ShareFile. Oracle, IBM, and HP are about to enter this market as well.

Key Features

Number of Devices - The number of devices for which an ECM vendor provides mobile applications is very important. Most tools provide native applications for Apple's iPhone and iPad (based on the iOS operating system) and for Android-based phones and tablets. Some differentiate between the iPhone and iPad and provide separate applications for those two devices. Some provide applications for other devices, such as those based on Windows and BlackBerry.

File sync and offline capabilities - Many users use more than one device to get work done. They might use a laptop in the office, a desktop at home, and a tablet and a phone while traveling. They need to access files from all of those devices, and it is important that an ECM tool can synchronize files across different devices.

Users increasingly expect capabilities for advanced file sharing, including cloud and hybrid cloud-based services. Most tools do that by providing a sync app for your desktop/laptop, which then syncs your files from the cloud-based storage to your local machine.

Most tools require users to create a dedicated folder and move files into it; that folder is then synced. A few tools, like Syncplicity, allow users to sync from any existing folder on their machine.

A dedicated folder can be better managed and seems to be a cleaner solution. However, it means that users need to move files around which can cause duplication. The other approach of using any folder as a sync folder allows users to keep working on files in their usual location. That is convenient, but if users reach a stage when they have too many folders scattered around on their laptop and other synced machines, they might have some manageability issues.

Some tools allow users to selectively sync. Rather than syncing the entire cloud drive, users can decide which folders to sync. That is useful when users are in a slow speed area or they have other bandwidth-related constraints. In some cases, they can also decide whether they want a one-way sync or a bi-directional sync. Once they have the files synced up and available locally, they typically can work offline as well. When they go online, their changes are synced back to the cloud.
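The selective-sync behavior described above can be sketched abstractly. The following is a minimal illustration in Python (not any vendor's actual algorithm) of filtering a cloud folder list by the user's selection and choosing a sync direction:

```python
def plan_sync(cloud_folders, selected, direction="two-way"):
    """Return a list of (folder, direction) sync actions.

    cloud_folders: all folders on the cloud drive
    selected: folders the user chose to sync (None = sync everything)
    direction: 'two-way' or 'one-way' (cloud to local only)
    """
    if direction not in ("two-way", "one-way"):
        raise ValueError("unknown sync direction: " + direction)
    targets = cloud_folders if selected is None else [
        f for f in cloud_folders if f in selected]
    return [(f, direction) for f in targets]

# Sync only two folders, one-way, e.g. under bandwidth constraints
plan = plan_sync(["Docs", "Photos", "Archive"], {"Docs", "Photos"},
                 direction="one-way")
```

A real client would also track per-file state and resolve conflicts, but the core user-facing choices are the ones modeled here: which folders, and which direction.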

Most tools that provide a dedicated mobile application can also sync files on mobile devices. However, mobile syncing is usually tricky due to the closed nature of mobile device file systems.

While most ECM and DM vendors provide some varying capabilities for mobile access, not all of them can effectively offer file sync across multiple devices.

Your options should be based on your users' requirements. Assess them very carefully before deciding on a suitable solution for your organization.

Friday, October 31, 2014

Success in Enterprise Content Management Implementations

A successful enterprise content management (ECM) implementation requires an ongoing partnership between IT, compliance, and business managers.

Strict top-down initiatives that give little consideration to users' requirements result in ECM systems that users don't want to use.

Similarly, an ad hoc, overly decentralized approach leads to inconsistent policies and procedures, which in turn lead to disorganized, ungoverned, unfindable content. At either extreme, the ECM initiative ends in failure.

Whether your organization uses an agile, waterfall, or mixed approach to ECM deployment, ECM leaders must think about program initiation, planning, deployment, and ongoing improvement as a process, not as isolated events. Team composition will change over the course of ECM project planning and roll-out, as different skill sets are needed.

For example, a business analyst is a key member of the team early in the project, when developing a business case and projecting the total cost of the project, while the legal department will need to get involved when documenting e-discovery requirements.

But there is often no clear location in the org chart for fundamental content management responsibilities, and that can contribute to weakened strategy, governance, and return on investment (ROI).

Approach to ECM

Successful ECM initiatives balance corporate governance needs with the desire of business units to be efficient and competitive, and to meet cost and revenue targets.

Organizations should determine the balance of centralized versus decentralized decision-making authority based on the level of industry regulation, jurisdiction, corporate culture, and the autonomy of business units or field offices.

A central ECM project team of content management, business process, and technology experts should define strategy and objectives and align with the technology vision. Local subject matter experts in business units or regional offices can then be responsible for the execution and translation of essential requirements into localized policies and procedures, along with the business unit’s content management goals.

Business managers can help measure the current state of productivity, set goals for improvement, contribute to a business case, and forecast the total cost of CMS ownership over a number of years. A trainer will be needed during pilot and roll-out to help with change management and system orientation. The legal department should approve updates to retention schedules and disposition policies as practices shift away from classification schemes designed for paper toward a more automated, metadata-driven approach.

Project Roles

The following roles are essential for an ECM project:
  • Steering committee is responsible for project accountability and vision. Their role is to define an overall vision for an ECM project and outline processes and procedures to ensure integrity of information.
  • Project manager is responsible for the ECM project management during CMS deployment. The project manager's role is to create project plans and timetables, identify risks and dependencies, liaise with business units, executive sponsors, IT, and other teams.
  • Business analyst is responsible for outlining the desired state of CMS implementation and success metrics. This role is to gather business and technical requirements by engaging with business, technical, and legal/compliance stakeholders. They need to identify the current state of operations and outline the desired future state by adopting a CMS system.
  • Information architect's role is to define and communicate the standards to support the infrastructure, configuration, and development of the ECM application.
  • System administrators' role is to define and implement an approach to on-premises, cloud, or hybrid infrastructure to support a CMS.
  • CMS administrator is responsible for the operation of the CMS. This role is to define and implement processes and procedures to maintain the operation of the CMS.
  • User experience specialist's role is to define standards for usability and consistency across devices and applications, and create reusable design and templates to drive users' adoption.
  • Records and information managers' role is to define and deploy taxonomies, identify metadata requirements, and to develop retention, disposition, and preservation schedules.
Core competencies will be supplemented by developers, trainers, quality assurance, documentation, and other specialists at various phases of the ECM deployment project. It is important to provide leadership during the deployment of a CMS. The team should bring technical knowledge about repositories, standards and service-oriented architectures, combined with business process acumen and awareness of corporate compliance obligations.

Information architects will be important participants during both the planning and deployment phases of the project. Communication and process expertise are essential for ongoing success. IT, information architects, and information managers should learn the vocabulary, pain points, and needs of business units, and help translate users' requirements into technical solutions so that the deployed CMS can help improve current processes.

Compliance subject matter experts should communicate the implications and rationale of any change in process or obligations to users responsible for creating or capturing content.

Project plans, budgets and timetables should include time for coaching, communication, and both formal and informal training. Even simple file sharing technology will require some investment in training and orientation when processes or policies are changed.

Strategic Asset

ECM is a long-term investment, not a one-time technology installation project. Enterprises can often realize short-term ROI by automating manual processes or high-risk noncompliance issues, but the real payoff comes when an enterprise treats content as a strategic asset.

A strong ECM project team demonstrates leadership, communication skills and openness to iteration, setting the foundation for long-term value from the deployment efforts.

For example, one company aligned its deployment and continuous improvement work by adopting more agile approaches to project delivery, along with a willingness to adopt business metrics (faster time to market for new products) instead of technology management metrics (number of documents created per week). That change allowed the company to better serve the document sharing and collaboration needs of its sales teams in the field.

The project team must engage directly with the user community to create systems that make work processes better. It is a good idea to include hands-on participation and validation with a pilot group.

Recommended Practices

Follow best practices from completed ECM projects. Review processes, applications, forms, and capture screens to identify areas of friction when people capture or share content. User experience professionals have design and testing experience, and they should be included in the ECM deployment team.

User participation is valuable throughout the ECM deployment project. Direct input on process bottlenecks, tool usability and real-world challenges helps prioritize requirements, select technologies and create meaningful training materials.

Senior managers who participate on a steering committee, or are stakeholders in an information governance strategy, should allow their teams to allocate adequate time for participation. That might mean attending focus groups, holding interviews, attending demos and training, or experimenting with new tools.

Be Proactive

A sustainable and successful ECM initiative will be responsive to the changing behavior of customers, partners and prospects, changing needs of users, and corporate and business unit objectives. Stay current with ECM and industry trends. ECM project team members should keep one eye on the future and be open to learning about industry best practices.

Businesses will continue to adopt mobile, cloud, and social technologies for customer and employee communication. Anticipate new forms of digital content and incorporate them into the ECM program strategy proactively, not reactively.

Proactively push vendors for commitments and road maps to accommodate those emerging needs. Stay alert to new vendors or alternative approaches if the needs of business stakeholders are shifting faster than current ECM technology. Aim for breadth as well as depth of knowledge, and encourage team members to explore areas adjacent to ECM to acquire related knowledge and think more holistically.

Saturday, September 6, 2014

Managed Metadata in SharePoint - Part Two

In part one of this post, I described using metadata in SharePoint. In this part two, I will describe metadata management.

Managed metadata makes it easier for Term Store Administrators to maintain and adapt your metadata as business needs evolve. You can update a term set easily. And, new or updated terms automatically become available when you associate a Managed Metadata column with that term set. For example, if you merge multiple terms into one term, content that is tagged with these terms is automatically updated to reflect this change. You can specify multiple synonyms (or labels) for individual terms. If your site is multilingual, you can also specify multilingual labels for individual terms.

Managing metadata

Managing metadata effectively requires careful thought and planning. Think about the kind of information that you want to use to manage the content of lists and libraries, and about the way that information is used in the organization. You can create term sets of metadata terms for many different kinds of information.

For example, you might have a single content type for a document. Each document can have metadata that identifies many of the relevant facts about it, such as these examples:
  • Document purpose - is it a sales proposal? An engineering specification? A Human Resources procedure?
  • Document author, and names of people who changed it
  • Date of creation, date of approval, date of most recent modification
  • Department responsible for any budgetary implications of the document
  • Audience
Activities that are involved with managing metadata:
  • Planning and configuring
  • Managing terms, term sets, and groups
  • Specifying properties for metadata
Planning and configuring managed metadata

Your organization may want to do careful planning before you start to use managed metadata. The amount of planning that you must do depends on how formal your taxonomy is. It also depends on how much control you want to impose on metadata.

If you want to let users help develop your taxonomy, then you can just have users add keywords to items, and then organize these into term sets as necessary.
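As a rough sketch of that bottom-up approach (a hypothetical helper written in Python, not a SharePoint API), you might periodically review keyword usage and flag frequently used keywords as candidates for promotion into a managed term set:

```python
from collections import Counter

def promote_keywords(tagged_items, threshold=3):
    """Return keywords used at least `threshold` times, as candidates
    for promotion from the open keywords set into a managed term set."""
    counts = Counter(kw for item in tagged_items for kw in item["keywords"])
    return sorted(kw for kw, n in counts.items() if n >= threshold)

# Hypothetical tagged items pulled from user-applied keywords
items = [
    {"keywords": ["invoice", "2014"]},
    {"keywords": ["invoice", "q3"]},
    {"keywords": ["invoice"]},
]
candidates = promote_keywords(items, threshold=3)
```

A Term Store Administrator would then review the candidates before moving them, since frequency alone does not guarantee a keyword belongs in the formal taxonomy.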

If your organization wants to use managed term sets to implement formal taxonomies, then it is important to involve key stakeholders in planning and development. After the key stakeholders in the organization agree upon the required term sets, you can use the Term Store Management Tool to import or create your term sets. You can also use the tool to manage the term sets as users start to work with the metadata. If your web application is configured correctly, and you have the appropriate permissions, you can go to the Term Store Management Tool by following these steps:

1. Select Settings and then choose Site Settings.
2. Select Term store management under Site Administration.

Managing terms, term sets, and groups

The Term Store Management Tool provides a tree control that you can use to perform most tasks. Your user role for this tool determines the tasks that you can perform. To work in the Term Store Management Tool, you must be a Farm Administrator or a Term Store Administrator. Or, you can be a designated Group Manager or Contributor for term sets.

To take actions on an item in the hierarchy, follow these steps:

1. Point to the name of the Managed Metadata Service application, group, term set, or term that you want to change, and then click the arrow that appears.
2. Select the actions that you want from the menu.

For example, if you are a Term Store Administrator or a Group Manager you can create, import, or delete term sets in a group. Term set contributors can create new term sets.

Properties for terms and term sets

At each level of the hierarchy, you can configure specific properties for a group, term set, or term by using the properties pane in the Term Store Management Tool. For example, if you are configuring a term set, you can specify information such as Name, Description, Owner, Contact, and Stakeholders in the pane available on the General tab. You can also specify whether you want a term set to be open or closed to new submissions from users. Or, you can choose the Intended Use tab, and specify whether the term set should be available for tagging or site navigation.

Managed Metadata in SharePoint - Part One

Using metadata in SharePoint makes it easier to find content items. Metadata can be managed centrally in SharePoint and can be organized in a way that makes sense in your business. When the content across sites in an organization has consistent metadata, it is easier to find business information and data by using search. Search features such as the refinement panel, which displays on the left-hand side of the search results page, enable users to filter search results based on metadata.

SharePoint metadata management supports a range of approaches to metadata, from formal taxonomies to user-driven folksonomies. You can implement formal taxonomies through managed terms and term sets. You can also use enterprise keywords and social tagging, which enable site users to tag content with keywords that they choose. SharePoint enables organizations to combine the advantages of formal, managed taxonomies with the dynamic benefits of social tagging in customized ways.

Metadata navigation enables users to create views of information dynamically, based on specific metadata fields. Users can then locate items in libraries by using folders or metadata pivots, and refine the results by using additional key filters.

You can choose how much structure and control to use with metadata, and the scope of control and structure. For example:
  • You can apply control globally across sites, or make it local to specific sites.
  • You can configure term sets to be closed or open to user contributions.
  • You can choose to use enterprise keywords and social tagging with managed terms, or not.
The managed metadata features in SharePoint enable you to control how users add metadata to content. For example, by using term sets and managed terms, you can control which terms users can add to content, and who can add new terms. You can also limit enterprise keywords to a specific list by configuring the keywords term set as closed.

When the same terms are used consistently across sites, it is easier to build robust processes or solutions that rely on metadata. Additionally, it is easier for site users to apply metadata consistently to their content.

Metadata Terms

A term is a specific word or phrase that you can associate with an item on a SharePoint site. A term has a unique ID, and it can have many text labels (synonyms). If you work on a multilingual site, a term can have labels in different languages.

There are two types of terms:

Managed terms are terms that are pre-defined. Term Store administrators organize managed terms into a hierarchical term set.

Enterprise keywords are words or phrases that users add to items on a SharePoint site. The collection of enterprise keywords is known as a keywords set. Typically, users can add any word or phrase to an item as a keyword. This means that you can use enterprise keywords for folksonomy-style tagging. Sometimes, Term Store administrators move enterprise keywords into a specific managed term set. When they are part of a managed term set, keywords become available in the context of that term set.
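To make the term model concrete, here is a minimal sketch of a term as described above: a stable unique ID plus multiple labels, grouped by language. This is an illustration in Python, not SharePoint's actual object model:

```python
import uuid

class Term:
    """A term: a unique ID plus labels (synonyms), possibly per language."""
    def __init__(self, label, lang="en"):
        self.id = uuid.uuid4()          # identity is stable even if labels change
        self.labels = {lang: [label]}   # first label per language is the default

    def add_label(self, label, lang="en"):
        self.labels.setdefault(lang, []).append(label)

    def default_label(self, lang="en"):
        return self.labels[lang][0]

hr = Term("Human Resources")
hr.add_label("HR")                             # English synonym
hr.add_label("Personalabteilung", lang="de")   # German label
```

Because the ID, not the label, identifies the term, renaming or merging labels does not break content already tagged with it, which matches the automatic-update behavior described in part two.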

Term Set

A Term Set is a group of related terms. Term sets can have different scopes, depending on where you create them.

Local term sets are created within the context of a site collection, and are available for use (and visible) only to users of that site collection. For example, when you create a term set for a metadata column in a list or library, then the term set is local. It is available only in the site collection that contains this list or library. For example, a media library might have a metadata column that shows the kind of media (diagram, photograph, screen shot, video, etc.). The list of permitted terms is relevant only to this library, and available for use in the library.

Global term sets are available for use across all sites that subscribe to a specific Managed Metadata Service application. For example, an organization might create a term set that lists names of business units in the organization, such as Human Resources, Marketing, Information Technology, and so on.

In addition, you can configure a term set as closed or open. In a closed term set, users can't add new terms unless they have appropriate permissions. In an open term set, users can add new terms in a column that is mapped to the term set.
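The open/closed distinction amounts to a simple permission check on submissions, which can be sketched like this (illustrative Python only; SharePoint enforces this through its own permission model):

```python
class TermSet:
    """A group of related terms; a closed set rejects user submissions."""
    def __init__(self, name, closed=False):
        self.name = name
        self.closed = closed
        self.terms = []

    def add_term(self, term, is_admin=False):
        # In a closed term set, only privileged users may add terms
        if self.closed and not is_admin:
            raise PermissionError(
                "term set '%s' is closed to new submissions" % self.name)
        self.terms.append(term)

units = TermSet("Business Units", closed=True)
units.add_term("Marketing", is_admin=True)   # allowed
try:
    units.add_term("My Team")                # rejected: set is closed
    rejected = False
except PermissionError:
    rejected = True
```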

Group

Here, "group" is a security term. With respect to managed metadata, a group is a set of term sets that share common security requirements. Only users who have contributor permissions for a specific group can manage the term sets that belong to it or create new term sets within it. Organizations should create separate groups for term sets that have unique access or security needs.

Term Store Management Tool

The Term Store Management Tool is the tool that people who manage taxonomies use to create or manage term sets and the terms within them. The Term Store Management tool displays all the global term sets and any local term sets available for the site collection from which you access the Term Store Management Tool.

Managed Metadata column

A Managed Metadata column is a special kind of column that you can add to lists or libraries. It enables site users to select terms from a specific term set. A Managed Metadata column can be mapped to an existing term set, or you can create a local term set specifically for the column.

Enterprise Keywords column

The Enterprise Keywords column is a column that you can add to content types, lists, or libraries to enable users to tag items with words or phrases that they choose. By default, it is a multi-value column. When users type a word or phrase into the column, SharePoint presents type-ahead suggestions. These might include items from managed term sets and from the Keywords term set. Users can select an existing value or enter a new term.
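The type-ahead behavior can be sketched as a case-insensitive prefix match over managed terms and existing keywords. This is a simplified Python illustration, not SharePoint's implementation (which also ranks and scopes suggestions):

```python
def suggest(prefix, managed_terms, keywords, limit=5):
    """Prefix-match over managed terms first, then enterprise
    keywords, dropping duplicates and stopping at `limit` hits."""
    p = prefix.lower()
    seen, out = set(), []
    for term in list(managed_terms) + list(keywords):
        low = term.lower()
        if low.startswith(p) and low not in seen:
            seen.add(low)
            out.append(term)
        if len(out) == limit:
            break
    return out

hits = suggest("mar", ["Marketing", "Margin Analysis"], ["marketing plan"])
```

Listing managed terms before free keywords mirrors the idea that the curated taxonomy should be the first thing users are nudged toward.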

Social Tags

Social tags are words or phrases that site users can apply to content to help them categorize information in ways that are meaningful to them. Social tagging is useful because it helps site users to improve the discoverability of information on a site. Users can add social tags to information on a SharePoint site and to URLs outside a SharePoint site.

A social tag contains pointers to three types of information:
  • A user identity
  • An item URL
  • A term
When you add a social tag to an item, you can specify whether you want to make your identity and the item URL private. However, the term part of the social tag is always public, because it is stored in the Term Store.

When you create a social tag, you can choose from a set of existing terms or enter something new. If you select an existing term, your social tag contains a pointer to that term.

If, instead, you enter a new term, SharePoint creates a new keyword for it in the Keywords term set. The new social tag points to this term. In this manner, social tags support folksonomy-based tagging. Additionally, when users update an Enterprise Keywords or Managed Metadata column, SharePoint can create social tags automatically. These terms then become visible as tags in newsfeeds, tag clouds, or My Site profiles.
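Putting the three pointers and the privacy rule together, a social tag could be modeled as follows. This is a sketch in Python; SharePoint's actual storage format differs, and the user and URL below are hypothetical:

```python
class SocialTag:
    """A social tag: user identity + item URL (optionally private) + term.
    The term itself is always public, since it lives in the Term Store."""
    def __init__(self, user, url, term, private=False):
        self.user, self.url, self.term = user, url, term
        self.private = private

    def public_view(self):
        """What other users can see for this tag."""
        if self.private:
            return {"term": self.term}   # identity and item URL are hidden
        return {"user": self.user, "url": self.url, "term": self.term}

tag = SocialTag("j.doe", "https://intranet.example/page", "budget",
                private=True)
```

Even for a private tag, `public_view` still exposes the term, which is exactly the behavior described above: the term is stored in the shared Term Store and cannot be made private.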

List or library owners can enable or disable metadata publishing by updating the Enterprise Metadata and Keywords Settings for a list or library.

In the second part of this post, I will describe managing SharePoint metadata.

Sunday, August 31, 2014

Role of Automatic Classification in Information Governance

Defensible disposal of unstructured content is a key outcome of a sound information governance program. However, implementing a sound approach to records management as part of the organization's information governance strategy is rife with challenges.

Some of the challenges are explosive content volumes; difficulty in accurately distinguishing business records from transient or non-business-related content; eroding IT budgets due to mounting storage costs; and the need to incorporate content from legacy systems or merger and acquisition activity.

Managing the retention and disposition of information reduces litigation risk, reduces discovery and storage costs, and ensures organizations maintain regulatory compliance. To determine why content must be retained, for how long, and when it can be dispositioned, the content first needs to be classified.

However, users see the process of sorting records from transient content as intrusive, complex, and counterproductive. On top of this, the popularity of mobile devices and social media applications has fragmented content authoring and eliminated any chance of building consistent classification tools into end-user applications.

If classification is not being carried out, there are serious implications when asked by regulators or auditors to provide reports to defend the organization’s records and retention management program.

Records managers also struggle to enforce policies that rely on manual, human-based classification. Accuracy and consistency are often inadequate when classification is left up to users, and the cost in lost productivity is high. These issues, in turn, increase business and legal risk and can quickly make the entire records management program unable to scale sustainably.

A solution to this challenge is automatic classification. It eliminates the need for users to manually identify records and apply the necessary classifications. By taking the burden of classification off the end user, records managers can improve the consistency of classification and better enforce rules and policies.

Automatic classification makes it possible for records managers to demonstrate a defensible approach to classification based on statistically relevant sampling and quality control. Consequently, it minimizes the risk of regulatory fines and eDiscovery sanctions.
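The statistically relevant sampling mentioned above can be illustrated with a simple accuracy estimate over a random sample of auto-classified items. This is a sketch in Python, not any vendor's method; in practice the "true" labels come from a human reviewer coding the sampled documents:

```python
import random

def sample_accuracy(auto_labels, true_labels, sample_size, seed=42):
    """Estimate classification accuracy from a random sample.

    auto_labels / true_labels: dicts mapping document ID to class.
    """
    rng = random.Random(seed)
    doc_ids = rng.sample(sorted(auto_labels), sample_size)
    correct = sum(1 for d in doc_ids if auto_labels[d] == true_labels[d])
    return correct / sample_size

# Tiny hypothetical example; real samples are sized for a target
# confidence level over a much larger corpus
auto = {"d1": "record", "d2": "transient", "d3": "record", "d4": "record"}
true = {"d1": "record", "d2": "transient", "d3": "transient", "d4": "record"}
acc = sample_accuracy(auto, true, sample_size=4)
```

Recording the sample, the reviewer's coding, and the resulting accuracy figure is what makes the classification process auditable and hence defensible.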

In short, it provides a non-intrusive solution that eliminates the need for business users to sort and classify a growing volume of low-touch content, such as email and social media, while offering records managers and the organization as a whole the ability to establish a highly defensible, completely transparent records management program as part of their broader information governance strategy.

Benefits of Automatic Classification for Information Governance

Apply records management classifications as part of a consistent, programmatic component of a sound information governance program to:

Reduce
  • Litigation risk
  • Storage costs
  • eDiscovery costs
Improve
  • Compliance
  • Security
  • Responsiveness
  • User productivity and satisfaction
Address
  • The fundamental difficulties in applying classifications to high volume, low touch content such as legacy content, email and social media content.
  • Records manager and compliance officer concerns about defensibility and transparency.
Features
  • Automated Classification: automate the classification of content in line with existing records management classifications.
  • Advanced Techniques: classification process based on a hybrid approach that combines machine learning, rules, and content analytics.
  • Flexible Classification: ability to define classification rules using keywords or metadata.
  • Policy-Driven Configuration: ability to configure and optimize the classification process with an easy "step-by-step" tuning guide.
  • Advanced Optimization Tools: reports make it easy to examine classification results, identify potential accuracy issues, and then fix those issues by leveraging the provided "optimization" hints.
  • Sophisticated Relevancy and Accuracy Assurance: automatic sampling and benchmarking with a complete set of metrics to assess the quality of the classification process.
  • Quality Assurance: advanced reports on a statistically relevant sample, allowing reviewers to code automatically classified documents and manually assess the quality of the classification results when desired.
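The keyword- and metadata-based rules in the feature list above can be sketched as a minimal rule engine. This is an illustration in Python; real products combine such rules with machine learning and content analytics, and the rule values below are hypothetical:

```python
def classify(doc, rules, default="transient"):
    """Return the first record class whose rule matches the document.

    rules: list of (record_class, keywords, metadata) tuples; a rule
    matches if any keyword occurs in the text, or if all of its
    metadata key/value pairs match the document's metadata.
    """
    text = doc["text"].lower()
    for record_class, keywords, metadata in rules:
        if any(kw in text for kw in keywords):
            return record_class
        if metadata and all(doc["meta"].get(k) == v
                            for k, v in metadata.items()):
            return record_class
    return default

rules = [("contract", ["hereby agree", "witnesseth"], {"dept": "legal"}),
         ("invoice", ["amount due"], {})]
doc = {"text": "Amount due: $1,200", "meta": {}}
label = classify(doc, rules)
```

The `default` class plays the role of transient content: anything no rule claims as a record falls through to it, which is what keeps the record series clean.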