Wednesday, November 30, 2011
In my last post, I mentioned that there are three types of metadata: descriptive, structural, and administrative. Today, I am going to talk more about metadata schemes.
Many different metadata schemes are being developed in a variety of user environments and disciplines. I will discuss the most common ones in this post.
The Dublin Core Metadata Element Set arose from discussions at a 1995 workshop sponsored by OCLC and the National Center for Supercomputing Applications (NCSA). As the workshop was held in Dublin, Ohio, the element set was named the Dublin Core. The continuing development of the Dublin Core and related specifications is managed by the Dublin Core Metadata Initiative (DCMI).
The original objective of the Dublin Core was to define a set of elements that could be used by authors to describe their own Web resources. Faced with a proliferation of electronic resources and the inability of the library profession to catalog them all, the workshop participants set out to define a few elements and some simple rules that could be applied by noncatalogers. The original 13 core elements were later increased to 15: Title, Creator, Subject, Description, Publisher, Contributor, Date, Type, Format, Identifier, Source, Language, Relation, Coverage, and Rights.
Because of its simplicity, the Dublin Core element set is now used by many outside the library community - researchers, museums, music collectors to name only a few. There are hundreds of projects worldwide that use the Dublin Core either for cataloging or to collect data.
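For illustration, a Dublin Core record can be represented as a simple set of element-value pairs and rendered as HTML meta tags using the common "DC." prefix convention. The record values below are invented; this is a sketch, not a complete implementation of the standard:

```python
# A Dublin Core record as a Python dict, using a subset of the 15 elements.
record = {
    "title": "Introduction to Metadata",
    "creator": "Jane Smith",
    "subject": "metadata; cataloging",
    "date": "2011-11-30",
    "format": "text/html",
    "language": "en",
}

def to_meta_tags(dc_record):
    """Render each Dublin Core element as an HTML meta tag."""
    return "\n".join(
        f'<meta name="DC.{element}" content="{value}">'
        for element, value in dc_record.items()
    )

print(to_meta_tags(record))
```

Because each element is optional and repeatable, even this simple key-value shape covers a surprising range of resource descriptions.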
Meanwhile, the Dublin Core Metadata Initiative has expanded beyond simply maintaining the Dublin Core Metadata Element Set into an organization that describes itself as "dedicated to promoting the widespread adoption of interoperable metadata standards and developing specialized metadata vocabularies for discovery systems."
The Text Encoding Initiative (TEI)
The Text Encoding Initiative is an international project to develop guidelines for marking up electronic texts such as novels, plays, and poetry, primarily to support research in the humanities.
The TEI also specifies a header portion, embedded in the resource, that consists of metadata about the work. The TEI header, like the rest of the TEI, is defined as an SGML DTD (Document Type Definition): a set of tags and rules defined in SGML syntax that describes the structure and elements of a document. This SGML markup becomes a part of the electronic resource itself.
Metadata Encoding and Transmission Standard (METS)
The Metadata Encoding and Transmission Standard (METS) was developed to fill the need for a standard data structure for describing complex digital library objects. METS is an XML Schema for creating XML document instances that express the structure of digital library objects, the associated descriptive and administrative metadata, and the names and locations of the files that comprise the digital object.
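A minimal METS skeleton might look like the sketch below, built with Python's ElementTree. This is illustrative only and not schema-validated; the IDs and attribute values are invented:

```python
import xml.etree.ElementTree as ET

# An illustrative METS skeleton: a descriptive metadata section, a file
# section, and a structural map, all in the METS namespace.
METS_NS = "http://www.loc.gov/METS/"
ET.register_namespace("mets", METS_NS)

mets = ET.Element(f"{{{METS_NS}}}mets")
ET.SubElement(mets, f"{{{METS_NS}}}dmdSec", ID="DMD1")           # descriptive metadata
file_sec = ET.SubElement(mets, f"{{{METS_NS}}}fileSec")          # file inventory
file_grp = ET.SubElement(file_sec, f"{{{METS_NS}}}fileGrp", USE="master")
ET.SubElement(file_grp, f"{{{METS_NS}}}file", ID="FILE1", MIMETYPE="image/tiff")
struct_map = ET.SubElement(mets, f"{{{METS_NS}}}structMap")      # structure of the object
ET.SubElement(struct_map, f"{{{METS_NS}}}div", TYPE="page", LABEL="Page 1")

print(ET.tostring(mets, encoding="unicode"))
```

The three sections mirror the standard's purpose: descriptive and administrative metadata, file locations, and the structure that ties them together.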
Next time: architecture for authoring, producing, and delivering information.
Monday, November 28, 2011
What is metadata? Metadata is structured information that describes, explains, locates, or otherwise makes it easier to retrieve, use, or manage information resources. Metadata is often called data about data or information about information.
For example, a digital image may include metadata that describes how large the picture is, the color depth, the image resolution, when the image was created, and other data. A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document.
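Some of a file's metadata can be derived directly from the file system. Here is a small sketch; the "author" value is an assumed application-level field, not something the file system provides:

```python
import datetime
import os

def describe(path):
    """Collect basic metadata about a file from the file system."""
    info = os.stat(path)
    return {
        "size_bytes": info.st_size,
        "modified": datetime.datetime.fromtimestamp(info.st_mtime).isoformat(),
        "author": "unknown",  # assumed: supplied by the authoring application
    }

# Create a sample file and describe it.
with open("example.txt", "w") as f:
    f.write("Hello, metadata!")

print(describe("example.txt"))
```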
There are three main types of metadata:
Descriptive metadata describes a resource for purposes such as discovery and identification. It can include elements such as title, abstract, author, keywords.
Structural metadata indicates how compound objects are put together, for example, how pages are ordered to form chapters.
Administrative metadata provides information to help manage a resource, such as when and how it was created, file type and other technical information, and who can access it. There are several subsets of administrative data; two of them are sometimes listed as separate metadata types:
Rights management metadata, which deals with intellectual property rights.
Preservation metadata, which contains information needed to archive and preserve a resource.
Metadata can describe resources at any level. It can describe a collection, a single resource, or a component which is a part of a larger resource (for example, a photograph in an article).
Metadata can be embedded in a digital object or it can be stored separately.
Metadata is often embedded in HTML documents and in the headers of image files.
Storing metadata with the object it describes ensures the metadata will not be lost, eliminates problems of linking between data and metadata, and helps to ensure that the metadata and object will be updated together.
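As a sketch of what embedded metadata looks like in practice, the snippet below reads meta tags out of an HTML document's head. The document and its values are invented for illustration:

```python
from html.parser import HTMLParser

class MetaReader(HTMLParser):
    """Collect name/content pairs from <meta> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.meta[attrs["name"]] = attrs["content"]

doc = """<html><head>
<meta name="author" content="Jane Smith">
<meta name="description" content="A note on embedded metadata">
</head><body>...</body></html>"""

reader = MetaReader()
reader.feed(doc)
print(reader.meta)
```

Because the metadata travels inside the document itself, any tool that can parse the HTML can recover it without consulting a separate store.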
However, it is impossible to embed metadata in some types of objects, for example, artifacts. Also, storing metadata separately can simplify the management of the metadata itself and facilitate search and retrieval. Therefore, metadata is commonly stored in a database system and linked to the objects described.
More about metadata next time...
Tuesday, November 22, 2011
Guided by the key factors, we can define and follow a taxonomy development process that addresses business context, content, and users. The steps in creating a taxonomy are: assemble a team, define the scope, create, implement, test, and maintain.
Assemble a team
Successful taxonomy development requires both taxonomy expertise and in-depth knowledge of the corporate culture and content. Therefore a taxonomy team should include subject matter experts or content experts from the business community who have in-depth knowledge of corporate culture and content. For small projects, the group may simply be part of a user focus group that is concentrating on the taxonomy task. Taxonomy interrelates with several aspects of web development, including website design, content management, and web search. So, these roles should be included in the taxonomy team. Common considerations are overall project scope, target audience, existing organizational taxonomy initiatives, and corporate culture.
Define the scope
Answering the following questions will help to define the scope of the taxonomy:
- What is the purpose of the taxonomy?
- How is the taxonomy going to be used?
- What is the content scope? (Possibilities include company-wide, within an organizational unit, etc.)
- What content sources will the taxonomy be built upon? (Specifically, the locations of the content to be covered in the taxonomy.)
- Who will be using the taxonomy? (Possibilities include employees, customers, partners, etc.)
- What are the user profiles?
This step should also define metrics for measuring the taxonomy's value. For websites, baselines should be established for later comparison with the new site. An example would be the number of clicks it takes a site visitor to locate certain information.
Create the taxonomy
Taxonomy creation can be manual, automated, or a combination of both. It involves analyzing context, content, and users within the defined scope. The analysis results serve as input for the taxonomy design, including both taxonomy structure and taxonomy view. The taxonomy development team is responsible for the actual mechanics of taxonomy design, whereas the taxonomy interest group is responsible for providing consultation on content inclusion, nomenclature, and labeling.
The design of the taxonomy structure and taxonomy view may run in tandem, depending on the resources available and project time frame. All concepts presented through the taxonomy view need to be categorized properly according to the taxonomy structure. This will ensure that every content item is organized centrally through the same classification schema.
Along with taxonomy structure and taxonomy view, standards and guidelines must be defined. There should be a categorizing rule for each category in taxonomy view and taxonomy structure. In short, you must define what type of content should go under any given category. Content managers can then refer to these rules when categorizing content. If an automated tool is used for content tagging, these rules can be fed to the tagging application. Standards and guidelines help ensure classification consistency, an important attribute of a quality content management system and search engineering process.
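A categorizing rule can be as simple as a list of keywords per category, which an automated tagging step then applies to incoming content. The sketch below shows the idea; the categories and keywords are invented for illustration:

```python
# Categorizing rules: for each taxonomy category, a list of keywords
# whose presence places a content item under that category.
rules = {
    "HR/Benefits": ["insurance", "401k", "vacation"],
    "Engineering/Releases": ["release notes", "changelog"],
}

def categorize(text):
    """Return every category whose rule matches the text."""
    text = text.lower()
    return [category for category, keywords in rules.items()
            if any(keyword in text for keyword in keywords)]

print(categorize("Updated 401k and insurance enrollment forms"))
```

In a real system the rules would likely be richer (phrases, weights, exclusions), but the principle is the same: the rule, not the individual content manager, decides what goes under a given category.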
Implement the taxonomy
The next step includes setting up the taxonomy and tagging content against it. This is often referred to as "populating" the taxonomy. Similar to taxonomy creation, implementation can be manual, automated, or a combination of both. The goal here is to implement the taxonomy into the website design, search engineering, and content management.
For website design, taxonomy view provides the initial design for the site structure and interface. The focus is on the concepts and groupings, not so much on nomenclature, labeling, or graphics. There may be a need to go through multiple iterations, moving from general to specific in defining levels of detail for the content. Types of taxonomy view include site diagrams, navigation maps, content schemes, and wire frames. The final site layout is built by applying graphical treatment to the last iteration of taxonomy view.
For search engineering, implementation can be accomplished in various ways. Taxonomy structure as a classification schema can be fed into a search engine for training purposes or integrated with the search engine for a combination of category browsing and searching. In the latter case, the exposed taxonomy structure is essentially a type of taxonomy view. One of the most challenging aspects of taxonomy implementation is the synchronization between the search engine and the taxonomy, especially for search engines that do not take taxonomic content tagging into account in the indexing process. In such cases, a site visitor may receive different results from searching and browsing the same category, which could prove confusing.
Taxonomy structure needs to be integrated within the content management process. Content categorization should be one of the steps within the content management workflow, just like review and approval. If a content management tool is available, the taxonomy structure is loaded into the tool, either through a manual setup process, or imported from a taxonomy created externally. Through the content management process, content is tagged manually or automatically against the taxonomy. In other words, the taxonomy is populated with content.
Test the taxonomy
The goal of testing is to identify errors and discrepancies. The test results are then used to refine the taxonomy design. The testing should be incorporated into the usability testing process for the entire web application, including back-end content management testing and front-end site visitor testing. Here is a sample checklist of testing topics:
- Given specific information topics, can site visitors find what they need easily, in terms of coverage and relevancy?
- Given specific information topics, how many clicks does it take before a site visitor arrives at the desired information?
- Given specific tasks, can site visitors accomplish them within a reasonable time frame?
- Do the labels convey the concepts clearly, or is there ambiguity?
- Are the content priorities in sync with the site visitors' needs?
- Does the structure allow content managers to categorize content easily?
Testing results are recorded and can later be compared with the baseline statistics to measure improvement.
Maintain the taxonomy
Taxonomy design and fine-tuning is an ongoing process similar to content management. As an organization grows or evolves, its business context, content, and users change. New concepts, nomenclature, and information need to be incorporated into the taxonomy. A change management process is critical to ensure consistency and currency.
Better structure equals better access
Taxonomy serves as a framework for organizing the ever-growing and changing information within a company. The many dimensions of taxonomy can greatly facilitate website design, content management, and search engineering. If well done, taxonomy will allow for structured web content, leading to improved information access.
Next time: what is metadata?
Monday, November 21, 2011
Taxonomy is a hierarchical structure for the classification and/or organization of data. In content management and information architecture, taxonomy is used as a tool for organizing content. Development of an enterprise taxonomy requires the careful coordination and cooperation of departments within your organization.
Once the taxonomy is created, it needs to be managed. There is no such thing as a "finished" taxonomy. A taxonomy needs to be revisited and revised periodically. Why? The business changes, new content is created, and old content is archived.
The two key aspects of taxonomy are taxonomy structure and taxonomy view. Taxonomy structure provides a classification schema for categorizing content within the content management process. Taxonomy view is a conceptual model illustrating the types of information, ideas, and requirements to be presented on the Web. It represents the logical grouping of content visible to a site visitor and serves as input for Web site design and search engineering. Together, these concepts can guide your Web development efforts to maximize return on investment. Build it right, and they will come.
There are three key factors of taxonomy development: business context, users, and content.
These factors reflect the fundamental business requirements for most taxonomy projects. Strategically, they provide a "trinity compass" for the road of taxonomy development.
Here's a description of each factor:
"Business context" is the business environment for the taxonomy efforts in terms of business objectives, Web applications where taxonomy will be used, corporate culture, past or current taxonomy initiatives, and artifacts within the organization and across the industry.
"Users" refers to the target audience for the taxonomy, user profiles, and user characteristics in terms of information usage patterns.
"Content" is the type of information that will be covered by the taxonomy or that the taxonomy will be built upon.
There are two common techniques for taxonomy strategy.
In the first technique, a single taxonomy is used to store and deliver content. When content contributors utilize the content management system, they add, remove, and manage content in a structure that closely resembles the navigation and hierarchy of the delivery framework (your website or application). The navigation structure is the taxonomy.
This method is conceptually simple and makes it quite easy to dynamically build your navigation from knowledge of this hierarchy. However, this model does have drawbacks:
Every time you reorganize the website, the organization of content in your management application shifts. Admittedly, this isn’t much of a drawback if you’re managing content for one moderately sized site or if your team of contributors is small.
It is difficult to reuse content in this structure. If you hope to reuse assets throughout your website, where are they organized in this structure?
In an environment with many contributors and diverse security requirements, organizing content (in the management application) in another way, say by contributor or by department, may be more intuitive.
The second, more robust, albeit more complex, method of managing content is to maintain structures and metadata in the content management application that are independent of the delivery system's organization (navigation).
Content is organized, at the source, as may be required by your security, workflow, or organizational needs. Perhaps your data lives in a content management system or database where different organizational mechanisms exist. Unfortunately, the navigation for your consuming application (the presentation framework) is often managed by some other means.
By some rule or algorithm, leveraging your content classification data, material gets “mapped” to the presentation framework.
Advantages of this model:
There may be more than one way to organize content (think: content reuse). Given the same set of content, same set of classification criteria, but multiple algorithms, we can now build a delivery framework that allows for many methods of organization.
You no longer need to reorganize your content management application to change the delivery application. Just the algorithms (mappings) change.
Disadvantages of this model:
If you hope to build your navigation dynamically, you'll often need to build a tool or an alternate hierarchy. You may not find much value in the content's taxonomy.
Content, in your management environment, may be orphaned in your presentation framework if there are no rules mapping to an accessible part of the site.
Parts of the site may only be sparsely populated. It may not be readily obvious that you are creating gaps (with little or no content) in your site.
While powerful, this technique can be difficult to administer without having a fairly comprehensive understanding of the site design and algorithms for "mapping".
Assuming there are hierarchical structures within your content classification system, there is a very good chance that valuable information exists in the hierarchy. By taking advantage of relationships within your hierarchical metadata structures, richer algorithms may be developed for your content delivery framework.
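The mapping described above can be sketched as a small rule table from content categories to site sections; note how content without a rule ends up orphaned in the presentation framework, as one of the disadvantages warns. All names here are invented for illustration:

```python
# Content is classified at the source, independent of site navigation.
content = [
    {"id": 1, "category": "press-release", "title": "Q3 results"},
    {"id": 2, "category": "how-to", "title": "Reset your password"},
    {"id": 3, "category": "internal-memo", "title": "Parking changes"},
]

# The mapping rules: category -> site section. Anything without a rule
# never reaches an accessible part of the site.
mapping = {"press-release": "News", "how-to": "Support"}

site = {}
orphans = []
for item in content:
    section = mapping.get(item["category"])
    if section:
        site.setdefault(section, []).append(item["title"])
    else:
        orphans.append(item["title"])

print(site)     # content placed in the presentation framework
print(orphans)  # content with no mapping rule
```

Changing the delivery application now only means changing `mapping`; the content and its classification at the source stay untouched.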
Friday, November 18, 2011
Taxonomy is very important in content management. It ensures that search and navigation work properly and that content is accessible and can be found via two access points: searching and browsing.
Taxonomy is the science and practice of classification. The word is derived from the Greek words "taxis", meaning "arrangement", and "nomia", meaning "method". Taxonomy uses taxonomic units, known as taxa (singular: taxon). A taxonomy, or taxonomic scheme, is a particular classification ("the taxonomy of ..."), arranged in a hierarchical structure or classification scheme.
Taxonomy is organized by supertype-subtype relationships, also called generalization-specialization relationships, or less formally, parent-child relationships. Once a taxonomy tree has been created, all the items in the tree are tagged as belonging to one or more specific taxonomy categories. This process is typically referred to as "categorization", "tagging" or "profiling". Users can then browse and search within specific categories.
In such an inheritance relationship, the subtype by definition has the same properties, behaviors, and constraints as the supertype, plus one or more additional properties, behaviors, or constraints. For example: a bicycle is a kind of vehicle, so any bicycle is also a vehicle, but not every vehicle is a bicycle. Therefore, a subtype needs to satisfy more constraints than its supertype. Thus, being a bicycle is more constrained than being a vehicle.
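The bicycle/vehicle example maps naturally onto class inheritance; the sketch below is illustrative, with invented properties:

```python
class Vehicle:
    """The supertype: every vehicle has a maximum speed."""
    def __init__(self, max_speed):
        self.max_speed = max_speed

class Bicycle(Vehicle):
    """The subtype: everything a vehicle has, plus a constraint of its own."""
    def __init__(self, max_speed, gears):
        super().__init__(max_speed)   # inherits the supertype's properties...
        self.gears = gears            # ...and adds one of its own

bike = Bicycle(max_speed=40, gears=21)
print(isinstance(bike, Vehicle))          # True: every bicycle is a vehicle
print(isinstance(Vehicle(200), Bicycle))  # False: not every vehicle is a bicycle
```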
Historically, taxonomy was used by biologists to classify plants and animals according to a set of natural relationships; in content management and information architecture, it is used as a tool for organizing content. Creating a taxonomy is central to any enterprise content strategy as a means of organizing content so that it can be found by either searching or browsing.
Here is an example of food taxonomy:
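One way to sketch a food taxonomy is as a nested structure, with a small helper that recovers the category path an item would be browsed through. The categories and items are invented for illustration:

```python
# A small food taxonomy as nested dicts: each key is a category, each
# list holds the items classified under it.
food_taxonomy = {
    "Food": {
        "Fruit": {"Citrus": ["orange", "lemon"], "Berries": ["strawberry", "blueberry"]},
        "Vegetables": {"Root": ["carrot", "beet"], "Leafy": ["spinach", "lettuce"]},
        "Dairy": {"Cheese": ["cheddar", "brie"], "Milk": ["whole", "skim"]},
    }
}

def find_path(tree, item, path=()):
    """Return the category path leading to an item, e.g. for browsing."""
    for key, value in tree.items():
        if isinstance(value, dict):
            result = find_path(value, item, path + (key,))
            if result:
                return result
        elif item in value:
            return path + (key,)
    return None

print(find_path(food_taxonomy, "carrot"))  # ('Food', 'Vegetables', 'Root')
```

The path returned by `find_path` is exactly the supertype-subtype chain a site visitor would click through when browsing.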
Next time: more about taxonomy as it applies to content management and the best strategies to develop it.
Thursday, November 17, 2011
I have been asked many times about the difference between document management and document control. Today's post is about this subject.
Document Management is also referred to as Content Management. Content Management is the set of processes and technologies that support the collection, management, and publishing of electronic information. These processes and technologies allow electronic content to be managed through its lifecycle, from creation, review, storage, and dissemination to destruction. The main goals are accessibility, findability, and re-use of content.
Document Control is the revision control of documents: assigning and tracking document numbers, change control management, assuring document compliance, and document routing and tracking. It can also include Bill of Materials (BOM) and Approved Vendor List (AVL) management in engineering organizations.
Content or Document Management usually includes Document Control activities. However, they are distinct terms and should not be used interchangeably.
In companies, especially in regulated industries, dedicated document control staff perform document control functions separately. They do not have any functions related to content management. Document control is usually part of QA, and it is a mandated function in regulated industries.
I will talk more about document control in my future posts.
Wednesday, November 16, 2011
Today, I will talk about the last two components of the ECM cycle: Preserve and Deliver.
Eventually, content ceases to change and becomes static. The "Preserve" components of ECM handle the long-term, safe storage and backup of static information, as well as the temporary storage of information that does not need to be archived. A content management system usually has capabilities to accommodate these functions.
Long-term storage systems require the timely planning and regular performance of data migrations, in order to keep information available in the changing technical landscape. As storage technologies fall into disuse, information must be moved to newer forms of storage, so that the stored information remains accessible using contemporary systems. For example, data stored on floppy disks becomes essentially unusable if floppy disk drives are no longer readily available; migrating the data stored on floppy disks to Compact Discs preserves not only the data, but the ability to access it. This ongoing process is called continuous migration.
To secure the long-term availability of information, different strategies are used for electronic archives. The continuous migration of applications, index data, metadata, and objects from older systems to new ones generates a lot of work, but secures the accessibility and usability of information. During this process, information that is no longer relevant can be deleted. Conversion technologies are used to update the format of the stored information, where needed. Emulation of older software allows users to run and access the original data and objects. Special viewer software can identify the format of the preserved objects and display them in the new software environment.
The Deliver component of ECM provides content to users. Content gets where and to whom it needs to go through a number of tools. Content can be delivered via print, email, websites, portals, RSS feeds.
Security is involved in delivering the content to users. It prevents the illegal distribution of rights-managed content by restricting access to content down to the sentence level as well as granting/restricting permissions for forwarding and accessing content.
In order to effectively manage all components of the ECM cycle, a content management system is the best solution. I am going to talk about content management systems in one of my next posts.
Tuesday, November 15, 2011
Yesterday, I talked about the "Manage" component of the ECM cycle. Today, I will talk about the "Store" component.
"Store" components store information that has been captured. The "Store" components can be divided into three categories: Repositories as storage locations, Library Services as administration components for repositories, and Storage Technologies. These infrastructure components are sometimes held at the operating system level, and also include security technologies that work together with the "Deliver" components.
Among the possible kinds of repositories are: file systems, content management systems, databases, data warehouses.
File systems are used on data storage devices such as hard disk drives, floppy disks, optical discs, or flash memory storage devices.
A content management system (CMS) provides a collection of procedures used to manage documents and workflow in a collaborative environment. In a CMS, information can be nearly anything: documents, movies, pictures, scientific data, etc. CMSs are used for storing, controlling, revising, and publishing documents. Serving as a central repository, the CMS tracks the version level of documents as they are updated. Version control is one of the primary advantages of a CMS. The main objectives of a CMS are to streamline access, eliminate bottlenecks, encourage collaboration, optimize security, and maintain the integrity of documents.
Databases administer access information, but can also be used for the direct storage of documents, content, or media assets.
Data warehouses are complex storage systems based on databases, which reference or provide information from all kinds of sources. They can also be designed with global functions, such as document or information warehouses.
Library services are the administrative components of the ECM system that handle access to information. The library service is responsible for taking in and storing information from the Capture and Manage components. It also manages the storage locations in dynamic storage, the actual "Store," and in the long-term Preserve archive. The storage location is determined only by the characteristics and classification of the information. The library service works in concert with the Manage components' database to provide the necessary functions of search and retrieval.
Among the possible kinds of storage technologies are magnetic online media, magnetic tape, digital optical media, cloud computing.
Magnetic online media is usually hard drives which may be local or part of a storage area network (SAN).
Magnetic tape data storage, in the form of automated storage units called tape libraries, uses robotics to provide nearline storage. Standalone tape drives may be used for backup, but not for online access.
Digital optical media includes CD, DVD and other specialized optical formats like magneto-optical drives for storage and distribution of data. Optical jukeboxes can be used for nearline storage. Optical media in jukeboxes can be removed, transitioning it from nearline to offline storage.
In cloud computing, data can be stored on offsite cloud computing servers, accessed via the Internet.
Monday, November 14, 2011
As I mentioned in my last post, ECM combines five components: capture, manage, store, preserve, and deliver.
Capture of content in the content management environment is usually performed using a content management system. I will talk about content management systems in my future posts.
After content has been captured, it moves to the "Manage" component of the process. As I mentioned in my last post, the "Manage" component includes document management, web content management, collaboration, document workflow, and records management. Let's look more closely into these components.
Document management includes functions like check-in and check-out, version management, search and navigation, and organizing documents. Check-in and check-out are also called the library service.
The library service is responsible for taking in and storing information from the Capture component. Check-in and check-out functions allow a document to be checked out for editing and then checked back into the system after the editing is completed. This allows for version management of documents. Every time a document is checked out and then checked in, the content management system registers that the document changed and assigns it the next consecutive version number. The library service generates logs of information usage and editing, called an "audit trail."
Version management keeps track of the different versions of the same document. A document can be returned to a previous version if necessary.
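Here is a toy sketch of how a library service might handle check-out, check-in, version numbering, and the audit trail. This is illustrative only, not how any particular CMS is implemented:

```python
class LibraryService:
    """A toy check-out/check-in service with versioning and an audit trail."""
    def __init__(self):
        self.docs = {}          # name -> list of versions (latest last)
        self.checked_out = set()
        self.audit_trail = []   # log of every usage/editing action

    def check_out(self, name):
        if name in self.checked_out:
            raise RuntimeError(f"{name} is already checked out")
        self.checked_out.add(name)
        self.audit_trail.append(("check-out", name))
        return self.docs.get(name, [""])[-1]  # latest content for editing

    def check_in(self, name, content):
        self.checked_out.discard(name)
        self.docs.setdefault(name, []).append(content)
        self.audit_trail.append(("check-in", name))
        return len(self.docs[name])  # the new consecutive version number

svc = LibraryService()
svc.check_out("spec.txt")
print(svc.check_in("spec.txt", "draft"))    # prints 1
svc.check_out("spec.txt")
print(svc.check_in("spec.txt", "revised"))  # prints 2
```

Because every version is retained in order, returning to a previous version is just a matter of reading an earlier entry in the list.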
The library service works in concert with the manage components' database to provide the necessary functions of search and retrieval.
Search and navigation ensure that documents are accessible and can be found via two access points: searching and browsing.
Organizing documents allows for the search and navigation to work. Taxonomy is used for organizing documents.
The collaboration component helps users work with each other to develop and process content. ECM systems facilitate collaboration by using information databases and processing methods that are designed to be used simultaneously by multiple users, even when those users are working on the same content item. They make use of knowledge based on skills, resources, and background data for joint information processing. Administration components, such as virtual whiteboards for brainstorming, appointment scheduling, and project management systems, and communication applications such as video conferencing, may be included. Collaborative ECM may also integrate information from other applications, permitting joint information processing.
Web content management
Web content management (WCM) component is used to present information already existing and managed in the ECM repository. The information is presented using Web technologies - on the Internet, an extranet, or on a portal.
Document workflow allows documents to move through the process of creation, revision, and approval with few people involved in the process. Workflow management includes reminders, deadlines, delegation, and other administration functions, as well as monitoring and documentation of process status, routing, and outcomes.
Records management refers to the administration of records: important information and data that companies are required to archive. Some of the functions of records management are:
- file plans and other structured indexes for the orderly storage of documents;
- indexing of documents, supported by taxonomies, thesauri, and controlled vocabularies;
- management of record retention schedules and deletion schedules;
- protection of documents in accordance with their characteristics, sometimes down to individual content components within documents;
- use of standardized metadata for the identification and description of stored documents.
Friday, November 11, 2011
Enterprise Content Management (ECM) is a formalized means of organizing and storing an organization's documents and other content. The term encompasses strategies, methods, and tools used throughout the lifecycle of the content.
ECM is an umbrella term covering document management, web content management, search, collaboration, records management, digital asset management (DAM), workflow management, capture and scanning. ECM is primarily aimed at managing the life-cycle of information from initial publication or creation all the way through archival and eventually disposal.
ECM aims to make the management of corporate information easier through simplifying storage, security, version control, process routing, and retention. The benefits to an organization include improved efficiency, better control, and reduced costs.
ECM combines five components: capture, manage, store, preserve, and deliver.
Capture component involves converting information from paper documents into an electronic format through scanning. Capture is also used to collect electronic files and information into a consistent structure for management. Capture technologies also encompass the creation of metadata (index values) that describe characteristics of a document for easy location through search technology. For example, a medical chart might include the patient ID, patient name, date of visit, and procedure as index values to make it easy for medical personnel to locate the chart.
The Manage component includes document management, web content management, collaboration, document workflow, and records management. The first four address the dynamic part of the information's lifecycle. Records management focuses on managing finalized documents in accordance with the organization's document retention policy, which in turn must comply with government mandates and industry practices.
In many cases, the Manage component already includes Store functionality.
The Store component holds information that isn't required, desired, or ready for long-term archiving or preservation. Even if the Store component uses media suitable for long-term archiving, "Store" is still separate from "Preserve."
The Preserve component involves the long-term, safe storage and backup of static, unchanging information. Preservation is typically accomplished by the records management features of an ECM system, many of which are designed to help companies comply with government and industry regulations.
The Deliver component of ECM presents information from the Manage, Store, and Preserve components to users.
Next time: more about ECM components.
Thursday, November 10, 2011
What is content?
In recent times, information is typically referred to as content. Content is any type or unit of information: text, images, graphics, video, sound, documents, records, and so on. It can be in digital format or in hard copy. Digital content may take the form of text such as documents, multimedia files such as audio or video, or any other file type that follows a content lifecycle requiring management.
We all use content management to some degree. In the early stages of a company's life, information is stored in a folder system on a network drive. This hierarchical folder system is set up by one person, and as the company grows, the location of content within the folders is passed on via written procedures or, more likely, through word of mouth.
This is cheap and easy to use when the amount of content is small. But as the company starts taking on more projects, developing more products, and hiring more employees, the amount of content increases, and so does the number of people needing access to it.
Content Management is the set of processes, strategies, methods, tools, and technologies used to capture, manage, store, publish, preserve, and deliver content.
These processes, strategies, methods, tools, and technologies allow content to be managed through its lifecycle: from creation, review, storage, and dissemination to destruction.
The main goals are accessibility, findability, and re-use of content.
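As a minimal sketch, the lifecycle described above can be modeled as a linear sequence of stages (the stage names come from this post; the strictly linear ordering is an assumption, since real content often loops between review and storage):

```python
from typing import Optional

# Stages named in the post, assumed here to occur in this order.
LIFECYCLE = ["creation", "review", "storage", "dissemination", "destruction"]

def next_stage(current: str) -> Optional[str]:
    """Return the stage that follows `current`, or None at end of life."""
    i = LIFECYCLE.index(current)
    return LIFECYCLE[i + 1] if i + 1 < len(LIFECYCLE) else None

print(next_stage("review"))  # storage
```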
How do we achieve this? That is the topic for tomorrow.
Wednesday, November 9, 2011
“The information in the world doubles every day. What they don’t tell us is that our wisdom is cut in half at the same time.” -- Joey Novick
Management of information is the solution to information overload. The true value of information is not its immediate use; to use information effectively, it must be readily available for analysis and synthesis with other information.
In recent times, information is also referred to as content, and content usually follows a life cycle. Information or content management covers the entire scope of content, whether it is a paper document, an electronic file, a database, audio, video, or an email.
Why do we need to manage content?
There are several reasons to manage content:
- Central document repository
- Enable collaboration
- Eliminate paper records
- Automate processes
- Protect sensitive information
- Improve control of information
- Increase efficiency and productivity
- Reduce cost
- Improve legal and regulatory compliance
Tomorrow, I will talk more about content management.
Tuesday, November 8, 2011
“Information is the oxygen of the modern age. It seeps through the walls topped by barbed wire, it wafts across electrified borders”. -- Ronald Reagan
In our society, recorded information and knowledge are growing in volume and complexity. Companies, universities, laboratories, governments, schools, etc. are acquiring and using information at greater rates than at any time in the past. Information appears in many diverse forms.
Information can save time, money, and sometimes lives. Therefore more and more people of all ages and occupations are becoming increasingly dependent upon efficient access to information.
In today's volatile marketplace, businesses are in a never-ending search for information. Businesses and individuals alike are being assaulted by a barrage of information that exceeds their ability and/or time to analyze, synthesize, and disseminate it. Everybody has information; most are being asphyxiated by it.
Tomorrow I will talk about the solution to this problem.