Tuesday, July 31, 2012
OpenText Portal (formerly Vignette Portal) is part of the OpenText ECM Web Content Management solution. It enables you to create web sites with rich content and applications that support customized user interactions, and it provides a highly scalable, efficient means of aggregating content and applications for use across a variety of initiatives inside and outside the firewall.
It enables users to combine web services, repository data, and user interfaces in meaningful ways to create valuable business applications without IT help. Users can create web pages by simply selecting portlets from OpenText’s library of over 200 portlets.
Portal layout management allows users to easily apply a variety of page layouts via visually intuitive tools. Portlets can interact with one another, and pages refresh only as needed. Because portlets load separately, the end user does not have to wait until the entire page loads; pages are dynamic and load quickly.
Users can create a template from an existing site and use it later to create similar sites. Portlets can be embedded in any web site. Users can draw on pre-defined portlets to rapidly create portals with common functionality that is integrated with existing applications and data, including portlets that enable teams to share and publish portal documents as part of any business process.
Social platforms such as blogs, wikis, rating, ranking, and tagging can be integrated. Out-of-the-box federated search and taxonomy management tools are also available, and third-party search engines can be connected through custom integrations.
All portals can be managed from a unified, permissions-based management console, while administration of individual portals can be delegated, allowing a diverse set of administrators to manage virtually all of their portal objectives.
Content can be delivered to a customer's PDA, cell phone, or other device of choice, and can be targeted from different repositories based on dynamic user segments.
User experience can be improved by enabling inter-portlet communication, web services integration, and the display of third-party portlets in an integrated, contextual way.
One can enhance security and auditing of the site activity via a native reporting interface that reports on site modifications.
Web presence is globalized with extensive internationalization for portal users and administrators to support diverse audiences.
Live sites can be updated faster via incremental item changes instead of importing and exporting the entire navigation tree. Components, versions of components, and categories of components can be imported and exported using batch processes.
Friday, July 27, 2012
The following products deliver the Web Content Management component of the OpenText ECM Suite:
OpenText Web Experience Management is a comprehensive solution for managing content in high-performance, scalable, transaction-oriented web applications.
OpenText Portal works in tandem with OpenText Web Experience Management to allow you to rapidly create mashups and composite applications built on Web services, repository data, and user interfaces.
OpenText Dynamic Portal for Third-Party Portals works in tandem with OpenText Web Experience Management to allow you to publish content directly into portals such as Liferay, IBM WebSphere, or Oracle WebCenter.
OpenText High Performance Web Delivery provides a unique, integrated combination of real-time caching and intelligent cache management capabilities. It improves Web site performance, makes Web sites more scalable, and in many cases reduces costs and manual overhead.
OpenText Semantic Navigation combines content analytics with information retrieval to automatically present Web site visitors with content that is relevant to what they’re looking for or viewing.
OpenText's Web Experience Optimization products give you the capabilities to optimize each phase of your on-line marketing campaign lifecycle and provide customers with a more relevant Web experience.
OpenText Campaign Management helps deliver highly personalized content to individual recipients through online and offline touch points. From simple campaigns to more sophisticated marketing programs, it enables the easy design, execution and measurement of multipart, results-driven communications across a variety of channels.
OpenText Business Integration Studio is a graphical development environment for rapidly integrating business applications, processes, and information. It facilitates the integration of OpenText's Web content management, social media, and portal management applications with disparate applications and content repositories inside and outside the enterprise.
Today's post is about OpenText Web Experience Management.
OpenText Web Experience Management (formerly Vignette Content Management) is a solution for creating and managing content for enterprise Internet, extranet, or intranet applications. It enables users to:
- create new sites from site templates derived from their existing sites or launch a sample site with out-of-the-box content types, workflows, and presentation assets;
- apply graphical themes, page and region layouts to pages, layouts or whole sites;
- browse content in contextual, multi-dimensional workspaces by site, content type, folder, category or explorer views.
It offers a user-friendly console, branded themes, and preferences that empower content owners to easily create and manage web content while automatically adjusting to day-to-day authoring actions. Users with no prior experience can edit pages and content via non-intrusive toolbars, improve productivity with contextual views that present the information they need when and where they need it, and publish in one click.
A powerful, easy-to-use site layout, theme, and content templating interface enables users to control how site content is presented, helps ensure consistent branding and communication to a variety of audiences, and reduces site development and maintenance costs.
Users can manage all content through an intuitive, configurable, role-based management console. The console includes a ribbon menu and properties toolbar for commonly used items, a content tracker, a task inbox, and content search with saved queries. Ergonomic controls for language, time zone, filters, and page and content settings make editing faster.
Content items can be reused across multiple sites. For example, one article can be published on 100+ sites with a single management workflow.
Vanity URLs can be completely automated or manually defined to help increase site rankings in major search engines and support marketing campaigns, promotions, and messaging that can help increase the number of visitors to your site.
It integrates well with social and collaboration sites.
Users can create content using their favorite tools and web forms. Content from other repositories can be dynamically integrated or migrated for full-cycle workflow and publishing management. Metadata management for images, podcasts, Adobe Flash files, and video allows editors to streamline the approval, metadata tagging, and publishing of these assets.
Support for roles allows organizations to customize access for content creators, approvers, developers, and other users. This allows individuals to participate in selected processes automatically while standardizing and enforcing business practices that are exposed to users through delegated administration.
The sample site includes well-designed workflows, content type models, and best-practice templates. The content type modeler provides an intuitive interface to create and modify content objects such as articles, products, and news items. Content type evolution allows you to make common modifications.
Content can be in any data format, including files, database records, XML documents, and rich media assets such as images, videos, and podcasts. Library services include check-in and check-out, version control, rollback, content history, security, content classification, metadata indexing, and search. Content can be in any language.
There are tools to optimize the staging and delivery of managed content through web sites, portals, and other applications. The application streamlines the retrieval of content items according to their multi-faceted taxonomies and then transforms the items to suit the intended delivery context, application or device.
Users can publish content through automated workflows that deliver content to multiple delivery applications (web servers, databases, application servers). The publishing engine manages content dependencies, so content retains its context throughout the content lifecycle.
Search capabilities allow parametric search across content, content attributes, and metadata, both within the management console and in site search features, along with a framework for third-party search engines, enabling a high degree of search accuracy.
User access can be managed centrally, including delegated administration based on LDAP standards.
I will continue describing the other products in the Web Content Management component of the OpenText ECM Suite in my next posts. Follow me and stay tuned!
Wednesday, July 25, 2012
In my previous posts, I mentioned that a taxonomy is necessary to create navigation to content. If users know what they are looking for, they are going to search. If they don't know what they are looking for, they will look for ways to navigate to content, in other words, browse through content. Taxonomies can also be used as a method of filtering search results, so that results are restricted to a selected node in the hierarchy.
Once documents have been classified, users can browse the document collection, using an expanding tree-view to represent the taxonomy structure.
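To make the tree-view idea concrete, here is a minimal sketch (the category names are invented for illustration) of a taxonomy stored as nested dictionaries and rendered as the kind of indented tree a browse widget would display:

```python
# A tiny taxonomy as nested dicts; the categories are hypothetical examples.
taxonomy = {
    "Medicine": {
        "Cardiology": {},
        "Oncology": {"Chemotherapy": {}, "Radiotherapy": {}},
    },
}

def tree_lines(node, depth=0):
    """Render the taxonomy as indented lines, as an expanding tree-view would."""
    lines = []
    for name, children in node.items():
        lines.append("  " * depth + name)
        lines.extend(tree_lines(children, depth + 1))
    return lines

print("\n".join(tree_lines(taxonomy)))
```

A real implementation would attach the classified documents to each node, so selecting a node shows its documents.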
When there are many documents involved, creating a taxonomy can be time-consuming. There are a few tools on the market that provide automatic classification. Another use of automatic classification is to automatically tag content with controlled metadata (also known as automatic metadata tagging) to increase the quality of search results.
The tools that provide automatic classification are: Autonomy, ClearForest, Documentum, Interwoven, Inxight, Moxomine, Open Text, Oracle, SmartLogic.
These tools can classify any type of text documents. Classification is either performed on a document repository or on a stream of incoming documents.
Here is how this software works. Consider the example: "International Business Machines today announced that it would acquire Widget, Inc. A spokesperson for IBM said: 'Big Blue will move quickly to ensure a speedy transition.'"
The software classifies concepts rather than words. Words are first stemmed, that is, reduced to their root form. Next, stop words are eliminated; these include words such as a, an, in, and the, which add little semantic information. Then, words with similar meanings are equated using a thesaurus. For example, the words IBM, International Business Machines, and Big Blue are treated as equivalent.
Next, the software uses statistical or language processing techniques to identify noun phrases or concepts such as "red bicycle". Then, using the thesaurus, these phrases are reduced to distinct concepts that are associated with the document. In this example, there are 3 instances of IBM, 2 instances of acquisition (acquire, speedy transition), and 1 instance of Widget, Inc.
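The steps above can be sketched in a few lines of Python. The thesaurus entries are assumptions made for this example, and stemming and stop-word removal are omitted for brevity; real products use far richer linguistic processing:

```python
import re
from collections import Counter

# Hypothetical thesaurus: surface phrases mapped to canonical concepts.
THESAURUS = {
    "international business machines": "IBM",
    "big blue": "IBM",
    "ibm": "IBM",
    "speedy transition": "acquisition",
    "acquire": "acquisition",
    "widget, inc": "Widget_Inc",
}

def extract_concepts(text):
    """Count canonical concepts in text, matching longest phrases first
    so that 'international business machines' wins over 'machines'."""
    text = text.lower()
    counts = Counter()
    for phrase in sorted(THESAURUS, key=len, reverse=True):
        matches = len(re.findall(re.escape(phrase), text))
        if matches:
            counts[THESAURUS[phrase]] += matches
            text = text.replace(phrase, " ")  # avoid double counting
    return counts

doc = ('International Business Machines today announced that it would acquire '
       'Widget, Inc. A spokesperson for IBM said: "Big Blue will move quickly '
       'to ensure a speedy transition".')
print(extract_concepts(doc))
# Counter({'IBM': 3, 'acquisition': 2, 'Widget_Inc': 1})
```

The output matches the counts in the example: 3 instances of IBM, 2 of acquisition, 1 of Widget, Inc.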
Approaches to Classification
Manual - requires individuals to assign each document to one or more categories. It can achieve a high degree of accuracy, but it is labor intensive and therefore more costly than automatic classification in the long run.
Rule-based - keywords or Boolean expressions are used to categorize a document. This is typically used when a few words can adequately describe a category. For example, if a collection of medical papers is to be classified by disease, each disease's scientific, common, and alternative names can be used to define the keywords for its category.
Supervised learning - most approaches to automatic classification require a human expert to initiate a learning process by manually assigning a number of "training documents" to each category. The classification system first analyzes the statistical occurrences of each concept in the example documents and then constructs a model, or "classifier", for each category that is used to classify subsequent documents automatically. The system refines its model, in a sense "learning" the categories as documents are processed.
Unsupervised learning - these systems identify both groups or clusters of related documents and the relationships between these clusters. Commonly referred to as clustering, this approach eliminates the need for training sets because it does not require a preexisting taxonomy or category structure. However, clustering algorithms are not always good at selecting categories that are intuitive to users. On the other hand, clustering will often expose useful relationships and themes implicit in the collection that might be missed by a manual process. For this reason, clustering generally works hand-in-hand with supervised learning techniques.
Each approach is optimal for a different situation. As a result, classification vendors are moving to support multiple methods.
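Of these approaches, the rule-based one is the easiest to illustrate: each category is defined by a keyword list, as in the medical example above. The diseases and keywords here are invented for the sketch:

```python
# A minimal rule-based classifier: each category is a set of keywords
# (scientific, common, and alternative names). Entries are hypothetical.
RULES = {
    "diabetes": {"diabetes", "insulin", "glycemic", "hyperglycemia"},
    "influenza": {"influenza", "flu", "grippe"},
}

def classify(text):
    """Return every category whose keyword set overlaps the document's words."""
    words = set(text.lower().split())
    return sorted(cat for cat, keywords in RULES.items() if words & keywords)

print(classify("New insulin therapy shows improved glycemic control"))
# ['diabetes']
```

A supervised-learning system would instead derive these keyword weights statistically from the training documents rather than having an expert write them by hand.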
Most real world implementations combine search, classification, and other techniques such as identifying similar documents to provide a complete information retrieval solution. Organizations having document repositories will generally benefit from a customized taxonomy.
Once documents are clustered, an administrator can first rearrange, expand, or collapse the auto-suggested clusters or categories, and then give them intuitive names. The documents in each cluster serve as initial training sets for the supervised-learning algorithms that will be used subsequently to refine the categories. The end result is a taxonomy and a set of topic models that are fully customized for an organization's needs.
Building an extensive custom taxonomy can be a large expense. However, automated classification tools can reduce the taxonomy development and maintenance cost.
Organizations with document collections that span complex areas such as medicine, biotechnology, or aerospace will have large taxonomies. However, there are ways to refine a taxonomy so that building it does not become an overwhelming task.
Together, enterprise search and classification provide an initial response to information overload.
Thursday, July 19, 2012
Joomla is a free and open source content management framework (CMF) for publishing content on the World Wide Web and intranets. It includes features such as page caching, RSS feeds, printable versions of pages, news flashes, blogs, polls, search, and support for language internationalization.
Over 9,200 free and commercial extensions are available from the official Joomla! Extension Directory, and more are available from other sources. It is estimated to be the second most used CMS on the Internet after WordPress. Joomla won the Packt Publishing Open Source Content Management System Award in 2006, 2007, and 2011.
You can think of a Joomla! website as bringing together three elements:
- your content, which is mainly stored in a database;
- your template, which controls the design and presentation of your content (such as fonts, colors and layout);
- Joomla!, which is the software that brings the content and the template together to produce webpages.
A Joomla template is a multifaceted Joomla extension which is responsible for the layout, design and structure of a Joomla powered web site. While the CMS itself manages the content, a template manages the look and feel of the content elements and the overall design of a Joomla driven web site.
The content and design of a Joomla site are separate and can be edited, changed, and deleted independently. The template is where the design of the main layout for a Joomla site is set. This includes where users place the different elements (components, modules, and plug-ins) that are responsible for the different types of content. If the template is designed to allow user customization, the user can change the content placement on the site, e.g. putting the main menu on the right or left side of the screen.
Using CSS within the template design, users can change the colors of backgrounds, text, links, or just about anything that they could by editing (X)HTML code.
Images and Effects
Users can also control the way images are displayed on the page and even create flash-like effects such as drop-down menus.
The same applies to fonts. They are set within the template's CSS file(s) to create a uniform look across the entire site, which makes it easy to change the whole look just by altering one or two files rather than every single page.
Joomla! is composed of a Platform and extensions.
Joomla! extensions extend the functionality of Joomla web sites. There are five types of extensions: components, modules, plugins, templates, and languages. Each of these extensions handles a specific function.
Components: they are the largest and most complex extensions. They can be seen as mini-applications. Most components have two parts: a site part and an administrator part. Every time a Joomla page loads, one component is called to render the main page body. Components are the major portion of a page because a component is driven by a menu item and every menu item runs a component.
Plugins: they are more advanced extensions and are, in essence, event handlers. In the execution of any part of Joomla, a module or a component, an event can be triggered. When an event is triggered, plugins that are registered with the application to handle that event execute. For example, a plugin could be used to block user-submitted articles and filter out bad words.
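Joomla plugins are written in PHP, but the underlying "plugins are event handlers" idea can be sketched generically. The event name and filtering behavior below are hypothetical illustrations, written in Python for brevity:

```python
# A toy event-dispatch system in the spirit of Joomla's plugin model.
handlers = {}

def register(event, fn):
    """Register a plugin function for a named event."""
    handlers.setdefault(event, []).append(fn)

def trigger(event, payload):
    """Run every plugin registered for the event, chaining the payload."""
    for fn in handlers.get(event, []):
        payload = fn(payload)
    return payload

# A "plugin" that filters a bad word out of a user-submitted article.
register("onContentBeforeSave", lambda text: text.replace("badword", "***"))

print(trigger("onContentBeforeSave", "some badword here"))
# some *** here
```

In Joomla itself, the CMS triggers named events at fixed points in its execution, and any plugin registered for that event runs, exactly as the dispatcher above does.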
Templates: this describes the main design of the Joomla web site and is the extension that allows users to change the look of the site. Users will see modules and components on a template. They are customizable and flexible. Templates determine the style of a website.
Modules: lightweight, flexible extensions used in page rendering. They are linked to Joomla components to display new content or images. Joomla modules appear as boxes on a page, such as the search or login module, and can be positioned without writing additional HTML.
Languages: very simple extensions that can be used either as part of the core or as an add-on. Language and font information can also be used for PDF or PSD to Joomla conversions.
Joomla also has built-in extensions, including components (Banner, Contacts, Joomla! Update, Messaging, Newsfeeds, Redirect, Search, Smart Search), Content, Menus, etc.
Tuesday, July 17, 2012
Exalead provides search platforms and search-based applications. Search-based applications (SBA) are software applications in which a search engine platform is used as the core infrastructure for information access and reporting. SBAs use semantic technologies to aggregate, normalize and classify unstructured, semi-structured and/or structured content across multiple repositories, and employ natural language technologies for accessing the aggregated information.
Exalead's platform uses advanced semantic technologies to bring structure, meaning, and accessibility to previously unused or under-utilized data scattered across disparate, heterogeneous enterprise sources.
The system collects data from virtually any source, in any format, and transforms it into structured, pervasive, contextualized building blocks of business information that can be directly searched and queried, or used as the foundation for a new breed of lean, innovative information access applications.
Exalead products include the CloudView platform and the ii Solutions Suite of packaged SBAs, all built on the same powerful CloudView platform.
Exalead CloudView enables organizations to meet demands for real-time, in-context, accurately delivered information, accessed from diverse web and enterprise big data sources, yet delivered faster and at lower cost than with traditional application architectures. The platform is used for both online and enterprise SBAs as well as enterprise search.
Available for on-premises or cloud delivery, Exalead CloudView is the infrastructure that powers all Exalead solutions, including Exalead’s public web search engine, the company’s custom SBAs, and the Exalead ii Solution Suite of packaged, vertical SBAs.
Exalead ii Solutions Suite
Exalead ii ("information intelligence") applications are packaged, workflow-specific SBAs that transform large volumes of heterogeneous, multi-source data into meaningful, real-time information intelligence, and deliver it in context to users to improve business processes.
On the data side, the Exalead information infrastructure uses semantic technologies to non-intrusively aggregate, align and enhance multi-source data to create a powerful base of actionable knowledge (i.e., information intelligence).
Exalead Advanced Search options appear as a drop-down menu below the search form, where users select the search criteria that will be entered directly into the search form. A different set of advanced search options is available for each search type.
Registered users can select the "Bookmark" option below any search result to add it to a list of saved sites, accessible on the Exalead homepage as a collection of image thumbnails.
Exalead's notable search features include:
- truncation, proximity, and many other advanced operators not available from other search engines;
- includes thumbnails of pages;
- provides excellent narrowing options on right side.
Exalead supports Boolean operators and nested searching with AND, OR, and NOT. Searching can be nested using parentheses, and operators must be in upper case. Exalead can also use a minus sign (-) for NOT, but only when it is not combined with the Boolean operators. In the Advanced Search, it also has drop-down choices for "containing," "not containing," and "preferably containing." There is also an OPT operator, which marks the word following it as optional.
Phrase searching is available by using "double quotes" around a phrase. Exalead also supports a NEXT operator for ordered proximity of one word (in other words, the same thing as a phrase search), so "double quotes" should get the same results as double NEXT quotes. It also supports the NEAR operator for proximity within 16 words; you can change it to NEAR/5 (or any other number) to specify a different proximity value.
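A rough sketch of how NEAR/k proximity matching works. This is an illustration of the general idea, assuming unordered proximity measured in words; Exalead's exact semantics may differ:

```python
def near(text, w1, w2, k=16):
    """True if w1 and w2 occur within k words of each other (unordered),
    mimicking a NEAR / NEAR/k proximity operator."""
    words = text.lower().split()
    pos1 = [i for i, w in enumerate(words) if w == w1]
    pos2 = [i for i, w in enumerate(words) if w == w2]
    return any(abs(i - j) <= k for i in pos1 for j in pos2)

s = "the quick brown fox jumps over the lazy dog"
print(near(s, "quick", "lazy"))        # True  (6 words apart, within the default 16)
print(near(s, "quick", "lazy", k=5))   # False (more than 5 words apart)
```

An ordered variant like NEXT would additionally require the first word's position to precede the second's by exactly one.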
Exalead's Advanced Search also offers some unusual types of special searches:
- phonetic spelling with the sounds like: operator;
- approximate spelling with the spells like: operator;
- regular expression using regex syntax.
On a search with two or more words, stemming is automatic; it can also be controlled on the preferences page. Exalead also supports truncation using an asterisk (*). Searching is not case sensitive: lower, upper, or mixed case returns the same results. Exalead supports a title search.
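Truncation with an asterisk behaves like a shell-style wildcard: the pattern matches any word sharing the given prefix. A quick stdlib illustration:

```python
from fnmatch import fnmatch

# 'comput*' matches any word beginning with 'comput'.
words = ["computer", "computing", "commute"]
print([w for w in words if fnmatch(w, "comput*")])
# ['computer', 'computing']
```

Stemming achieves a similar effect automatically by reducing words to a shared root, without the user having to write the wildcard.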
Exalead offers limits for language, country, file type, site, and date on the Advanced Search page. The file type limit includes text, PDF, Word, Excel, PowerPoint, Rich Text Format, Corel WordPerfect, and Macromedia Shockwave Flash, and you can also place the file type search command directly into the search box. The site limit can be used to restrict results to a specified domain, and the language limit is likewise available in the Advanced Search.
Some common words such as 'a,' 'the' and 'in' are ignored, but they can be searched with a + in front. Within a phrase search, all words are searched.
Results are sorted by a relevance algorithm. Pages are also clustered by site: only one page per site is displayed, with the others available via the yellow folder and domain name. The Advanced Search page used to include two date sort options, but those disappeared with the new interface in October 2006. They are still available via the sort: field prefix, with two options: sort:new and sort:old.
Saturday, July 14, 2012
Yesterday, I described information architecture design patterns for web sites and best practices for this design. Today, I am going to describe methods and techniques for information architecture design.
There are a few different approaches commonly used for information architecture design.
Card sorting is a low cost, simple way to figure out how best to group and organize your content based on user input. Card sorting works by writing each content set or page on an index card, and then letting users sort them into groups based on how they think the content should be categorized.
There are several types of card sorting methodologies. The basic method starts with cards in random order, which users sort into the groups they think make sense. In reverse card sorting, the cards are pre-sorted into groups, and users are given the task of rearranging them as they see fit. Open card sorting lets users name the groups they have created, while closed card sorting provides predefined group names into which participants place the cards.
Various methods can be used to analyze the data. The purpose of the analysis is to extract patterns from the population of test subjects, so that a common set of categories and relationships emerges. This common set is then incorporated into the design of the site, either for navigation or for other purposes.
There are a number of tools available to perform card sorting activities with survey participants via the internet. The perceived advantage of remote card sorting is that it allows a larger group of participants to be reached at a lower cost. The software can also assist in the process of analyzing card sort results. The advantages of a remote card sort must be traded off against the lack of personal interaction between card sort participants and the card sort administrator, which may produce valuable insights.
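One common way such software analyzes card sort results is a co-occurrence count: how often each pair of cards landed in the same group across participants, with frequently co-occurring pairs suggesting a shared category. A minimal sketch with invented card names:

```python
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a set of card names.
# The cards and sorts below are hypothetical sample data.
sorts = [
    [{"pricing", "plans"}, {"login", "signup"}],
    [{"pricing", "plans", "signup"}, {"login"}],
    [{"pricing", "plans"}, {"signup", "login"}],
]

def cooccurrence(sorts):
    """Count how often each pair of cards was placed in the same group."""
    pairs = Counter()
    for sort in sorts:
        for group in sort:
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

print(cooccurrence(sorts).most_common(2))
# [(('plans', 'pricing'), 3), (('login', 'signup'), 2)]
```

Here "pricing" and "plans" were grouped together by all three participants, so they are the strongest candidates for a shared category.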
Wireframes and Prototypes
Basic wireframes can do a lot more than just give an outline of a site's design layout. They also show how content will be arranged, at least at a basic level. Putting content into wireframes and prototypes gives us a good sense of how content is arranged in relation to other content and how well our information architecture achieves our goals.
When you are wireframing, and especially when you are prototyping, you should be working with content that at least resembles what the final content of the site will be.
Site Maps and Outlines
Site maps are quick and easy ways to visually denote how different pages and content relate to one another. Creating one is an essential step that "mocks up" how content will be arranged.
These content outlines show how all the pages on your site are grouped, what order they appear in, and the relationships between parent and child pages. This is often a simple document to prepare, and may be created after a round or two of card sorting.
For existing sites or content that must be placed in a web site, a content inventory is usually the prelude to this phase.
Information Architecture Design Styles
There are two basic styles of information architecture: top-down and bottom-up. What many designers must realize is that it is useful to look at a site from both angles to devise the most effective IA. Rather than looking at your projects from only a top-down or bottom-up approach, look at them from both ends to see if there are any gaps in how things are organized.
Top-down architecture starts with a broad overview and understanding of the website’s strategy and goals, and creates a basic structure first. From there, content relationships are refined as the site architecture grows deeper, but it is all viewed from the overall high-level purpose of the site.
The bottom-up architecture model looks at the detailed relationships between content first. With this kind of architecture, you might start out with user personas and how those users will be going through the site. From there, you figure out how to tie it all together, rather than looking at how it all relates first.
Different websites require different types of information architecture. What works best will vary based on things like how often content is updated, how much content there is, and how visitors use the site.
Friday, July 13, 2012
Without a clear understanding of how information architecture (IA) should be set up, we can end up creating web sites that are more confusing than they need to be or make web site content virtually inaccessible. Here are some popular IA design patterns, best practices, design techniques, and case examples.
Information Architecture Design Patterns
There are a number of different IA design patterns for effective organization of web site content. Understanding these IA models will help you pick the most appropriate starting point for a site’s information structure. Let us talk about five of the most common web site IA patterns.
Single Page Pattern
The first pattern is the single page model. Single page sites are best suited for projects that have a very narrow focus and a limited amount of information. These could be for a single product site, such as a website for an iPhone app, or a simple personal contact info site.
Flat Pattern
This information structure puts all the pages on the same level. Every page is just as important as every other page. This is commonly seen on brochure style sites, where there are only a handful of pages. For larger sites with many more pages, the navigation flow and content findability get unwieldy.
Index Page Pattern
A main page with subpages is probably the most commonly seen web site IA pattern. This consists of a main page (we know this more commonly as a "home page" or "front page"), which serves as a jump-off point for all the other pages. The sub-pages have equal importance within the hierarchy.
Strict Hierarchy Pattern
Some websites use a strict hierarchy of pages for their information design. On these sites, there will be an index page that links to sub-pages. Each sub-page (parent page) has its own subpages (child pages). In this pattern, child pages are only linked from its parent page.
Co-Existing Hierarchies Pattern
As an alternative to the strict hierarchy, there is also the option of co-existing hierarchies. There are still parent and child pages, but in this case, child pages may be accessible from multiple parent pages/higher-level pages. This works well if there’s a lot of overlapping information on your site.
Best Practices for Information Architecture Design
There are a number of things you need to remember when designing the information architecture of your site. Most importantly, you need to keep the user experience at the forefront when making choices about how best to present and organize the content on your site.
Don’t Design Based on Your Own Preferences
You are not your user. As a designer, you have to remember that site visitors won’t have the same preferences as you. Think about who a "site user" really is and what they would want from the site.
Research User Needs
Researching what your users need and want is one of the most important steps in creating an effective information architecture. There are a number of ways to research user needs: you can get feedback through interviews, surveys, user testing, and other usability methods prior to the site launch to see whether users are able to navigate your site efficiently.
Once you know what your users actually need, rather than just your perception of what they need, you will be able to tailor your information architecture to best meet those needs.
Have a Clear Purpose
Every site should have a clear purpose, whether that’s to sell a product, inform people about a subject, provide entertainment, etc. Without a clear purpose, it is virtually impossible to create any kind of effective IA.
The way the information on a site is organized should be directly tied to the site's purpose. On a site where the end goal is to get visitors to purchase something, the content should be set up so that it funnels visitors toward that goal. On a site that is meant to inform, the IA should lead people through the content so that each page builds on the last.
You may have sub-goals within a site, requiring you to have subsets of content with different goals. That is fine, as long as you understand how each piece of content fits in relation to the goals of a site.
Creating personas, hypothetical profiles of your various web site users, is another great way to figure out how best to structure the site's content.
In its very basic form, developing personas is simply figuring out the different types of visitors to your site and then creating "real" people that fit into each of those categories. Then throughout the design process, use the people you have profiled as your basis for designing and testing the site’s IA.
Keep Site Goals in Mind
It is important that you keep the site’s goals in mind while you’re structuring content. Pick the right IA pattern for those goals. Use goals to justify why the information structure should be the way you designed it.
Consistency is central to exemplary information architectures. If eight of your nine informational pages are listed in a section, why wouldn’t you also include the ninth page there? Users expect consistency.
The same goes for how information is structured on each page. Pick a pattern and stick to it. If you deviate from that pattern, make sure you have a very good reason to do so, and make sure the deviation is applied consistently in similar cases. Inconsistencies have a tendency to confuse visitors.
Tomorrow, I am going to describe methods and techniques for information architecture design.
Monday, July 9, 2012
OpenText Knowledge Management (formerly Livelink ECM - Knowledge Management) is a comprehensive knowledge management solution that enables organizations to search, classify, navigate, and collect all of their corporate knowledge in a single, secure, web-based repository.
OpenText Knowledge Management works with OpenText Document Management or OpenText Content Lifecycle Management, leveraging the power of these content repositories and adding functionality that manages all knowledge from a single interface, regardless of originating source. OpenText Knowledge Management is a completely integrated, web-based solution that delivers end-to-end, closed-loop management for all of your corporate knowledge assets.
Knowledge Management enables employees to perform their daily work more efficiently and accurately. A centralized knowledge repository and library services ensure that you are working with the most up-to-date information. Specialized tools enable you to identify topic experts and quickly find the best information resources from anywhere in your organization.
Powerful search, classification, and navigation tools help you find and manage an unlimited number of items: from files, documents, and objects to project logs, search queries, discussion items, tasks, workflow maps, and more, all in an organized, hierarchical structure.
You can identify subject matter experts and harvest their knowledge from the centralized knowledge repository. OpenText Knowledge Management extends the functionality of the OpenText document management foundations: OpenText Document Management and OpenText Content Lifecycle Management. These solutions fit your existing security framework, ensuring protection of content through permissions-based access rules. Authorized users benefit from full access to all functionality from a single, secure web browser, and the flexibility of the OpenText document management foundations allows permissions to be configured at the group or individual level.
Organize and share knowledge: Knowledge Management manages any type of electronic document in any file format. You can organize electronic documents into hierarchies of folders and compound documents within three types of workspaces that reflect the different ways in which people work: the Enterprise Workspace; Project Workspaces; and Personal Workspaces.
Capture knowledge automatically: Knowledge Management allows you to associate metadata with documents. Metadata is indexed and can be used to more easily find, retrieve, and generate reports on documents based on your custom criteria. Each piece of metadata information is an attribute, and sets of attributes can be grouped into categories that can be associated with any document.
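The attribute/category model described above can be sketched in a few lines. This is a hypothetical illustration (the "Contract" category and its attributes are invented for the example, not taken from the product): a category bundles named, typed attributes, and applying it to a document attaches those values as indexable metadata.

```python
# A category groups typed attributes that can be attached to any document.
contract_category = {
    "name": "Contract",
    "attributes": {"customer": str, "value": float, "expiry": str},
}

def apply_category(document, category, values):
    """Attach a category's attribute values to a document, checking types."""
    for attr, expected_type in category["attributes"].items():
        if not isinstance(values.get(attr), expected_type):
            raise TypeError(f"attribute {attr!r} must be {expected_type.__name__}")
    document.setdefault("categories", {})[category["name"]] = values
    return document

doc = apply_category(
    {"title": "Acme supply agreement"},
    contract_category,
    {"customer": "Acme", "value": 25000.0, "expiry": "2013-06-30"},
)
```

Because the attribute values live alongside the document rather than in its filename or folder path, they can be indexed and queried directly, which is what makes custom reporting and retrieval by metadata possible.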
Classify and categorize knowledge assets: multiple taxonomic classifications can be associated with documents in their original locations. This enables you to browse and search documents in the knowledge management repository according to taxonomies that differ from the one implied by the folder structure without having to create multiple copies of documents. You can organize information placed in Open Text Document Management or Open Text Content Lifecycle Management repositories via manual, assisted, or automatic means. Streamline browsing and improve search precision.
Automate knowledge management processes: Knowledge Management's graphical Workflow Designer tool enables you to automate document management processes, such as document change requests and document review and approval processes, to ensure that they are carried out accurately and consistently. You can design processes according to your own requirements or those imposed by regulatory agencies.
Discover knowledge with prospective queries: Knowledge Management provides prospective searching capabilities. You can create special queries to monitor various data sources, including the OpenText Document Management or OpenText Content Lifecycle Management repositories, shared network drives, external web sites and any integrated databases. When new information is discovered, you are immediately notified.
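The idea of a prospective query can be shown with a small sketch. This is an assumption-laden simplification (the saved query here is just a keyword matched against item titles, and the notification is a callback), but it captures the inversion the paragraph describes: instead of the user repeating a search, the query waits for new content.

```python
# Saved (prospective) queries: name -> keyword to watch for.
saved_queries = {"compliance-watch": "audit"}

def notify(query_name, item):
    """Placeholder notification channel (e.g. e-mail in a real system)."""
    print(f"[{query_name}] new match: {item['title']}")

def on_new_item(item, queries=saved_queries, alert=notify):
    """Evaluate every saved query against a newly arrived item."""
    matches = []
    for name, keyword in queries.items():
        if keyword in item["title"].lower():
            alert(name, item)
            matches.append(name)
    return matches

on_new_item({"title": "2012 audit checklist"})  # fires compliance-watch
```

A production system would evaluate queries against the full index of each monitored source rather than a title string, but the flow (ingest, match against stored queries, notify) is the same.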
Single point of information access: federated search enables you to query multiple repositories and brings disparate information sources together. Use powerful search tools to quickly locate the right information. View results on a single page in a sorted, clustered format. Show hit highlights, document summaries, relevance rankings, and result themes to improve result quality.
Dynamic, multi-dimensional navigation: create dynamic, virtual folder structures built on pre-defined information taxonomies and document attributes; no pre-defined hierarchy is required. Often, metadata is not visible or navigable when you browse for information. Using taxonomies for browsing and additional context, you can decide which dimension is best for finding the required documents. Drill down through the hierarchy using associated metadata to refine the values and the corresponding documents.
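This kind of navigation is essentially faceted filtering, and a minimal sketch makes it clear (the documents and metadata fields below are made up for illustration): a "virtual folder" is just a filter over one or more metadata dimensions, so the same repository can be browsed by region, by year, or by both at once.

```python
# Each document carries metadata instead of living in one fixed folder.
documents = [
    {"title": "Q1 report",  "region": "EMEA", "year": 2012, "type": "report"},
    {"title": "Q2 report",  "region": "APAC", "year": 2012, "type": "report"},
    {"title": "Sales deck", "region": "EMEA", "year": 2011, "type": "slides"},
]

def drill_down(docs, **facets):
    """Refine the document set by any combination of metadata values."""
    return [d for d in docs if all(d.get(k) == v for k, v in facets.items())]

# The same repository browsed along two different dimensions:
emea_docs = drill_down(documents, region="EMEA")             # two documents
emea_2012 = drill_down(documents, region="EMEA", year=2012)  # one document
```

Each successive facet narrows the result set, which is exactly the drill-down behavior described above, without ever committing the documents to a single physical hierarchy.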
Optimize taxonomy creation and maintenance: analyze and cluster related documents, and extract and generate key concepts. Create suggested taxonomy nodes based on analyses; import and export in many common formats.
Automatically collect and extract information: crawl multiple Web-based information sources including intranets, extranets, web sites, and more. Create personal entries for crawling, search specific sites and search from the Document Management or Content Lifecycle Management user interface.
Friday, July 6, 2012
Digital Asset Management (DAM) is a business process for organizing rich media assets such as pictures, images, video, and audio for storage, retrieval and distribution.
DAM is an increasingly important tool for organizations to protect and grow their brands, control the costs of creating and distributing their digital media content, and maximize the return from their digital assets. Many organizations rely on DAM for the centralization, workflow optimization, collaboration, digital media management, and distribution capabilities that have become pressing requirements.
Departments such as marketing, sales, advertising, and public relations benefit most from digital asset management. Using DAM enables organizations to address their most pressing rich media challenges, such as managing video, presentation slide decks, creative design media, marketing collateral, and related processes.
Organizations face these challenges regarding their digital assets:
- disconnected and inefficient production processes;
- lack of unified collaboration and sharing among all contributors (photographers, authors, editors, designers, marketers, and distributors), international offices, third parties, and business partners;
- wasted resources, both human and capital, due to needless searching for media assets, recreating or repurchasing lost images, using incorrect versions, workflow bottlenecks, inefficient file transformation and delivery processes, lack of usage tracking and lack of production automation;
- implementing disaster recovery and archival plans;
- embracing new technologies such as RSS feeds, user-generated content, and Web 2.0 applications;
- identifying efficient ways to move images, video, audio, PDF, InDesign, PPT, Excel, Word, EPS, GIF and SVG files from where they are created and managed to their ultimate destination in web production, printed magazines, mobile devices or a whole host of other delivery points.
The Benefits of DAM
Organizations deploying specialized DAM solutions typically realize the following benefits:
Cost Savings — organizations gain immediate ROI by eliminating redundant asset creation efforts; quickly retrieving, editing, and redistributing assets; and redeploying resources to other mission-critical projects.
Generation of new revenue streams — organizations derive new revenue by converting and repurposing existing content, such as book covers used to promote eBooks. Consider an image that costs thousands of dollars to create. With DAM, it can be reused; without DAM, its existence may not be known, and the re-creation costs become recurring.
Brand and Messaging Continuity — built-in revision control, asset repurposing, and approval processes ensure organizations maintain consistent use and re-expression of digital assets, from brochures and corporate videos to web content.
Digital Media Management and Distribution — DAM systems enable the efficient organization, indexing, and distribution of digital assets. Advanced DAM systems provide a distributed architecture and multi-site asset storage, as well as the ability to provide multiple repositories for self-synchronization of both assets and their associated metadata. This means professionals can quickly and easily find and create what they need and distribute it to their intended audience with the click of a few buttons.
Global Web-Based Access — organizations can distribute digital masters and other types of licensed assets via secure web access. Advanced DAM systems also provide asset ordering and fulfillment modules that can easily integrate with existing ecommerce and transaction servers. Not only can organizations quickly access their files from any web interface, they can create new revenue streams and sell their assets via virtual storefronts.
Full-featured DAM solutions include a variety of tools for organizing, accessing, editing, transforming and working with images, video, audio, presentations, etc. Important features include search, transformation of media into various formats, integration with creative authoring tools, workflow, IP rights functionality, usage tracking, back-up and role-based security.
With a robust DAM solution, organizations can achieve new levels of creative effectiveness and efficiency by enabling marketing teams to quickly find content, easily reuse digital assets, enhance creative productivity, gain visibility into digital asset usage and processes, improve brand consistency, and maintain security and policy compliance. Digital asset management solutions provide deep integration with creative authoring applications such as Adobe Creative Suite 3 or Quark.
By enabling secure access and management of all digital media content throughout an organization, DAM tools provide an efficient, centralized, and connected publishing workflow environment, from the creative design stage through production and distribution to all channels.
There are tools in the market which are specifically designed for DAM. I will describe them in my future posts.
Thursday, July 5, 2012
ECM has provided an essential service to enterprises, helping them to better capture, organize and track massive quantities of content within their organizations.
Today’s more strategic IT departments are driving businesses to rethink how they approach content management and collaboration in the enterprise. A variety of factors including persistently tight IT budgets, lower headcount, business uncertainty, and unrelenting pressure to grow through innovation have made the advantages of cloud based ECM even more compelling.
Current ECM systems can be delivered in a cloud. Delivered over the web, these new solutions offer the usability of consumer tools and recognize the need for external sharing, all at a cost amenable to today’s IT budgets.
Cloud content management (CCM) is an emerging category that combines many of the core elements and content focus of ECM with the usability and ease of sharing so prominent in collaboration software. As its name implies, CCM brings the benefits of the cloud - low maintenance, elastic and scalable, with access to content anytime, anywhere, across devices.
CCM can fulfill the content management and collaboration needs of small to medium-sized businesses, in many cases bringing content management to companies previously unable to afford it and also provide a layer of value on top of ECM solutions already deployed by large enterprises.
The best CCM solutions have open platforms that allow for easy integration across the systems a company has already deployed, as well as connections into other cloud services such as Salesforce.com and Google Apps. This is particularly useful for those businesses that are considering a full move into cloud-based software. Small businesses are leading the way toward operating fully in the cloud, and even larger enterprises are beginning to see their security concerns addressed by large cloud vendors.
CCM solutions are using the advantages of web delivery to offer additional functionality above and beyond what ECM solutions provide. For example, CCM can make it easy to view any type of content in a Web browser without even owning the software application that it was created in. Gone are the days of being unable to view content you have received because you don’t have the latest version of Microsoft Office, or haven’t invested in Adobe Illustrator. Furthermore, open platforms make it possible to also edit much of this content.
This is still an emerging category in ECM, but there are immediate opportunities to improve how businesses engage with content, and a number of CCM companies are aspiring to address them.
Whether or not businesses are ready to fully embrace cloud solutions or maintain a hybrid approach with existing infrastructure, providing dynamic, flexible collaboration tools with CCM will enhance productivity and ultimately give IT departments more insight into their organizations.
There are several reasons that the cloud’s value proposition for ECM is particularly attractive:
Consume what you need
ECM implementations in a cloud are typically a series of projects over time, each requiring different capabilities at a different scale. An on-premise ECM implementation requires you to implement all capabilities at the same time. The cloud model, on the other hand, gives you the flexibility to purchase only the capabilities you need, at the scale you need today, and then adjust your engagement over time as necessary.
Eliminating technical complexity
On-premise ECM implementations can be complex, requiring IT organizations to assemble software components, install and configure them, apply patches, write integration code, maintain operating system updates, continuously tune system parameters, maintain hardware, and manage performance. The cloud model relieves the service consumer of the burdens associated with this complexity. As Gartner noted, cloud ECM "brings with it fewer costs for infrastructure hardware, software and management and less complexity in the applications layer."
Cloud ECM projects are much easier to get approved by company management: with on-premise ECM, upper management has to commit significant money and human resources to a project up front, whereas a cloud ECM implementation does not require that up-front commitment.
Cloud ECM implementations typically take 24% of the time of similar on-premise projects. That rapid time-to-benefit translates directly into the higher ROI that business managers want.
With budgets tight, the comparatively low cost of cloud-based ECM is extremely appealing to the business. Plus, CFOs have better visibility into and control over costs when they are explicitly itemized on a vendor contract.
Cloud ECM projects don’t require large outlays for uncertain results. And a variety of protections can be written into vendor contracts. For these and other reasons, the cloud fits well into today’s corporate risk mitigation strategies.
Cloud ECM implementation gives the business this flexibility, both in terms of right-sizing capacity and in terms of aligning ECM capabilities with changing business needs.
The cloud offers undeniable business advantages, and the uptake we are seeing in the marketplace proves that ECM buyers agree.