Wednesday, May 29, 2013

Digital Assets Management System - Autonomy Virage MediaBin


Autonomy Virage MediaBin is an advanced, comprehensive solution for indexing, analyzing, categorizing, managing, retrieving, processing, and distributing all types of digital assets within an organization.


Autonomy Virage MediaBin helps organizations with globally distributed teams to effectively manage, distribute, and publish digital assets used to promote their messaging, products, and brands.

Companies benefit from higher-impact marketing and communications, greater agility, stronger brand equity, increased team productivity, and the security of knowing that valuable corporate assets will be fully leveraged and preserved for the future. By providing self-service access to digital assets, it frees marketing personnel from spending time fulfilling content requests.

Autonomy Virage MediaBin delivers rapid return on investment and can support implementations scaling up to the largest global enterprises.

Major Features:

Unified Management: a single environment that supports standardized, automated tagging to accelerate search and streamline the creation, management, delivery, and archival of all digital assets.

Intelligent Analytics: leverages Autonomy IDOL to automate manual processes such as metadata tagging, summarization, and categorization.

Next-Gen Rich Media Technology: leverages next generation video and speech analytics technology that extracts concepts to enable cross-referencing with other forms of information.

Effective and Agile Content Reuse: provides secure access to all content for all users. Internal and external teams can collaborate more effectively to improve coordination and productivity in all marketing programs.

Transform and Transcode on the Fly: a multi-threaded transformation task engine handles large numbers of simultaneous, complex transformations, including format conversions, color-space conversions, color adjustments, resolution changes, cropping, sizing, padding, watermarking, and a wide variety of advanced graphics adjustments that would normally require a user to open an editing application on their desktop.
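
As a rough illustration of how such a multi-threaded transformation engine can be structured, here is a Python sketch using Pillow for the image operations and a thread pool for concurrency. The file names, job format, and function names are assumptions for illustration, not MediaBin's actual API.

    # Sketch of a multi-threaded transformation engine. Pillow and
    # concurrent.futures stand in for MediaBin's internal task engine;
    # all names and file paths here are illustrative placeholders.
    from concurrent.futures import ThreadPoolExecutor
    from PIL import Image

    def transform(job):
        """Apply one job: optional crop, resize, and format conversion."""
        src, dest, size, box = job
        with Image.open(src) as im:
            if box:                     # crop region (left, top, right, bottom)
                im = im.crop(box)
            im = im.resize(size)        # target resolution
            im = im.convert("RGB")      # color-space conversion for JPEG output
            im.save(dest, "JPEG")       # format conversion on save
        return dest

    jobs = [
        ("logo.png",  "logo_small.jpg", (200, 200),  None),
        ("hero.tiff", "hero_web.jpg",   (1024, 512), (0, 0, 2048, 1024)),
    ]

    # Run many transformations simultaneously, as the task engine does.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for out in pool.map(transform, jobs):
            print("rendered", out)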

Other Features:
  • browser-based system;
  • permissions can be defined by user role or by folder; search results respect permissions;
  • content can be pulled from a CMS such as TeamSite and rendered on the fly;
  • each asset has a unique ID that is passed to TeamSite, so TeamSite "knows" when there is a different or a new revision; if an asset is updated in MediaBin, TeamSite is notified;
  • includes a set of workflows such as approval and review; rules can be defined so that once assets are approved, they move to the publishing area; also includes Process Studio, the workflow tool, and Template, a form builder;
  • assets can be uploaded by drag and drop, and can be dragged and dropped from MediaBin into TeamSite;
  • there is no limit on file size;
  • uploads can be automated so that assets go to specific folders;
  • downloaded assets are preserved for individual users;
  • asset usage is reported in TeamSite;
  • can pull content from SharePoint;
  • metadata is preserved, searchable, and indexable;
  • content is automatically categorized by asset type and resolution; the asset type is recognized on ingest, so no manual metadata entry is required (see the sketch after this list);
  • TeamSite pulls images from MediaBin;
  • supports 29 languages;
  • ability to link assets together (for example, associated assets) using existing metadata;
  • ability to create a taxonomy of assets;
  • search includes saved searches, recent searches, both preset and executed searches, and custom search;
  • ability to search for words spoken in a video and jump to that point in the video;
  • once a user finds content, an action can be taken such as downloading it, sending it by e-mail, sending a shortcut to the content, or adding it to a light box, as defined by permissions;
  • there is an Activity Manager that records all actions taken and provides access to users' tasks.
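
To make the ingest-time categorization concrete, here is a minimal Python sketch using the standard mimetypes module and the Pillow imaging library; the category rules and resolution threshold are illustrative assumptions, not MediaBin's actual logic.

    # Sketch of ingest-time categorization by asset type and resolution,
    # so no manual metadata entry is needed. The rules and thresholds
    # below are illustrative assumptions, not MediaBin's actual logic.
    import mimetypes
    from PIL import Image

    def categorize(path):
        """Derive asset type and, for images, a resolution class on ingest."""
        mime, _ = mimetypes.guess_type(path)
        asset_type = (mime or "application/octet-stream").split("/")[0]
        meta = {"asset_type": asset_type}
        if asset_type == "image":
            with Image.open(path) as im:    # read dimensions from the file
                w, h = im.size
            meta["resolution"] = f"{w}x{h}"
            meta["class"] = "hi-res" if w * h >= 4_000_000 else "web-res"
        return meta

    print(categorize("promo.mp4"))          # {'asset_type': 'video'}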

Benefits:
  • eliminates human error and ensures quicker access to content through automatic metadata extraction and accurate search results;
  • reduces costs by automating the production, review, and distribution of digital assets;
  • increases efficiency by providing users with self-service access at any time;
  • speeds time-to-market while maintaining accuracy and consistency;
  • facilitates quick reuse and re-purposing of images, as well as rapid content creation;
  • produces higher-impact marketing and communications, greater agility, and stronger brand consistency;
  • increases compliance through security-controlled access, a complete audit trail, and control of licensed content.

Thursday, May 9, 2013

Search Engine Technology

Modern web search engines are highly intricate software systems that employ technology refined over many years. There are a few categories of search engines, each applicable to specific searching needs.

These include web search engines (e.g. Google), database or structured data search engines (e.g. Dieselpoint), and mixed search engines or enterprise search.

The most prevalent search engines, such as Google and Yahoo!, utilize millions of computers to process trillions of web pages in order to return fairly well-targeted results. Due to this high volume of queries and text processing, the software is required to run in a highly distributed environment with a high degree of redundancy.

Search Engine Categories

Web search engines

These are search engines designed specifically for searching web pages, developed to make searching through the vast number of pages practical. They follow a multi-stage process: crawling pages to extract the salient terms from their contents, indexing those terms in a semi-structured form (for example, a database), and resolving user queries to return mostly relevant links to the indexed documents or pages.
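
To make the three stages concrete, here is a toy Python sketch; the in-memory dictionary stands in for pages a real crawler would fetch over HTTP, and all names are illustrative.

    # Toy sketch of the three-stage pipeline: crawl, index, search.
    # A real engine fetches pages over HTTP; here a dict stands in
    # for the crawled web so the example stays self-contained.
    from collections import defaultdict

    crawled = {                                  # "crawl": page -> extracted text
        "/home":  "digital asset management search",
        "/about": "search engine technology overview",
    }

    index = defaultdict(set)                     # "index": term -> pages
    for page, text in crawled.items():
        for term in text.split():
            index[term].add(page)

    def search(query):                           # "search": return matching links
        terms = query.lower().split()
        hits = [index[t] for t in terms if t in index]
        return set.intersection(*hits) if hits else set()

    print(search("search technology"))           # {'/about'}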

Crawl

In the case of a wholly textual search, the first step in classifying web pages is to find an "index item" that relates expressly to the "search term". Most search engines use sophisticated scheduling algorithms to "decide" when to revisit a particular page and check its relevance. These range from a constant visit interval with higher priority for more frequently changing pages, to an adaptive visit interval based on several criteria such as frequency of change, popularity, and overall quality of the site. The speed of the web server hosting the page, as well as resource constraints such as hardware capacity and bandwidth, also figure in.
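
A minimal sketch of one such adaptive visit-interval policy appears below; the bounds and multipliers are assumptions chosen for illustration, not any particular engine's tuning.

    # Sketch of an adaptive revisit policy: pages that change often are
    # visited more frequently. The bounds and multipliers are illustrative
    # assumptions, not any particular engine's tuning.
    MIN_HOURS, MAX_HOURS = 1, 24 * 30    # clamp between 1 hour and 30 days

    def next_interval(prev_hours, changed):
        """Halve the interval after a change, otherwise back off by 1.5x."""
        hours = prev_hours / 2 if changed else prev_hours * 1.5
        return max(MIN_HOURS, min(MAX_HOURS, hours))

    interval = 24.0                      # start by revisiting daily
    for changed in (True, True, False, False):
        interval = next_interval(interval, changed)
        print(f"revisit in {interval:.1f} hours")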

Link map

The pages discovered by web crawls are often distributed and fed into another computer that builds a map of the uncovered resources. This map resembles a graph, in which pages are represented as nodes connected by the links between them. The mass of data is stored in data structures that allow quick access by algorithms that compute a popularity score for each page based on how many other pages link to it.
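
Below is a minimal power-iteration sketch of such a link-based popularity score, in the spirit of PageRank; the three-page link graph is invented, and 0.85 is the conventional damping factor.

    # Minimal power-iteration sketch of a PageRank-style popularity score:
    # a page is popular when many (popular) pages link to it. The tiny
    # graph is made up; 0.85 is the conventional damping factor.
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

    rank = {p: 1 / len(links) for p in links}
    for _ in range(50):                          # iterate until scores settle
        new = {p: (1 - 0.85) / len(links) for p in links}
        for page, outs in links.items():
            share = 0.85 * rank[page] / len(outs)
            for target in outs:
                new[target] += share             # pass rank along each link
        rank = new

    print({p: round(r, 3) for p, r in rank.items()})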

Database Search Engines

Searching for text-based content in databases presents a few special challenges, from which a number of specialized search engines have developed. Databases are slow when solving complex queries (those with multiple logical or string-matching arguments). Databases support pseudo-logical queries, which full-text searches do not. No crawling is necessary for a database, since the data is already structured; however, it is often necessary to index the data in a more compact form designed to allow a more expeditious search.
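
The sketch below illustrates the idea: rows from a hypothetical product table are indexed into an inverted index so a multi-term text query can be answered without a full scan using multiple LIKE clauses.

    # Sketch of indexing database rows into an inverted index so a text
    # query avoids a full table scan. The product table and its fields
    # are hypothetical.
    from collections import defaultdict

    rows = [
        (1, "red running shoe"),
        (2, "blue running jacket"),
        (3, "red rain jacket"),
    ]

    index = defaultdict(set)              # term -> row ids
    for row_id, text in rows:
        for term in text.split():
            index[term].add(row_id)

    def lookup(*terms):                   # AND-query over indexed terms
        return set.intersection(*(index[t] for t in terms))

    print(lookup("red", "jacket"))        # {3}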

Mixed Search Engines

Sometimes, the searched data contains both database content and web pages or documents, and search engine technology has developed to serve both sets of requirements. Most mixed search engines are large web search engines, like Google, that search both structured and unstructured data sources. Pages and documents are crawled and indexed in one index, while databases from various sources are indexed as well. Search results are then generated for users by querying these multiple indices in parallel and merging the results according to "rules".
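
As a rough sketch of this federated approach, the following Python example queries two stand-in indices in parallel and merges the hits by score; both backends and the scoring rule are assumptions for illustration.

    # Sketch of a mixed (federated) search: query the web index and the
    # database index in parallel, then merge by score. Both backends and
    # the scoring rule are stand-ins for illustration.
    from concurrent.futures import ThreadPoolExecutor

    def web_index(q):                     # stand-in for the unstructured index
        return [("/docs/dam-overview", 0.9), ("/blog/search", 0.4)]

    def db_index(q):                      # stand-in for the structured index
        return [("product:1234", 0.7)]

    def mixed_search(q):
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(backend, q) for backend in (web_index, db_index)]
            results = [hit for f in futures for hit in f.result()]
        # the merging "rule" here: order the combined results by score
        return sorted(results, key=lambda hit: hit[1], reverse=True)

    print(mixed_search("asset management"))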