Sunday, March 13, 2016

What is Ontology?

An ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that exist in a particular domain of information. Ontologies are created to limit complexity and to organize information. Ontologies are considered one of the pillars of the Semantic Web.

The term ontology has its origin in philosophy and has been applied in many different ways. The word element onto- comes from the Greek for "being" or "that which is". Within information management, an ontology is a model for describing information that consists of a set of types, properties, and relationship types. Ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes, and relations.

The most common ontology visualization techniques are indented tree and graph.

Ontology Components

Common components of ontologies include:
  • Individuals: instances or objects (the basic or "ground level" objects).
  • Classes: sets, collections, concepts, classes in programming, types of objects, or kinds of things.
  • Attributes: aspects, properties, features, characteristics, or parameters that objects and classes can have.
  • Relations: ways in which classes and individuals can be related to one another.
  • Function terms: complex structures formed from certain relations that can be used in place of an individual term in a statement.
  • Restrictions: formally stated descriptions of what must be true in order for some assertion to be accepted as input.
  • Rules: statements in the form of an if-then (antecedent-consequent) sentence that describe the logical inferences that can be drawn from an assertion in a particular form.
  • Axioms: assertions (including rules) in a logical form that together comprise the overall theory that the ontology describes in its domain of application.
  • Events: the changing of attributes or relations.
Ontologies are commonly encoded using ontology languages.
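As a minimal sketch of the components listed above (all names are illustrative, not taken from any standard ontology), classes, individuals, attributes, relations, and a simple if-then rule can be modeled directly in Python:

```python
# Illustrative ontology fragment: classes, individuals, attributes,
# relations, and one inference rule. All names are hypothetical.

classes = {"Person", "Company"}                        # classes (concepts)
individuals = {"alice": "Person", "acme": "Company"}   # individual -> class
attributes = {"alice": {"age": 34}}                    # attributes of individuals
relations = [("alice", "works_for", "acme")]           # subject, predicate, object

def infer_employers(relations):
    """Rule: if X works_for Y, then Y employs X (an if-then inference)."""
    return [(o, "employs", s) for (s, p, o) in relations if p == "works_for"]

print(infer_employers(relations))  # [('acme', 'employs', 'alice')]
```

Real ontology languages add restrictions and axioms on top of this basic structure, but the individuals/classes/relations/rules skeleton is the same.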

Ontology Types

Domain Ontology

A domain ontology (or domain-specific ontology) represents concepts which belong to a particular domain. Particular meanings of terms applied to that domain are provided by the domain ontology. For example, the word "card" has many different meanings. An ontology about the domain of "poker" would model the "playing card" meaning of the word.
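The "card" example can be made concrete: a domain ontology pins an ambiguous term to the one sense that matters in its domain. A sketch with illustrative senses:

```python
# Illustrative word senses; a domain ontology selects the in-domain sense.
SENSES = {
    "card": {"poker": "playing card",
             "banking": "payment card",
             "computing": "expansion card"},
}

def resolve(term, domain):
    """Return the meaning of `term` inside the given domain ontology."""
    return SENSES.get(term, {}).get(domain, "unknown")

print(resolve("card", "poker"))  # playing card
```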

Since domain ontologies represent concepts in very specific and often eclectic ways, they are often incompatible. As systems that rely on domain ontologies expand, the need arises to merge domain ontologies into a more general representation. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.).

Upper Ontology

An upper ontology (or foundation ontology) is a model of the common objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that contains the terms and associated object descriptions as they are used in various relevant domain sets.

Several standardized upper ontologies are available for use, such as Dublin Core.

Hybrid Ontology

A hybrid ontology is a combination of an upper ontology and a domain ontology.

Ontology Languages

Ontology languages are formal languages used to construct ontologies. They allow the encoding of knowledge about specific domains and often include reasoning rules that support the processing of that knowledge. The most commonly used ontology languages are Web Ontology Language (OWL), Resource Description Framework (RDF), RDF Schema (RDFS), and Ontology Inference Layer (OIL).
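At its core, RDF represents knowledge as subject-predicate-object triples. A rough sketch of serializing triples into a simplified Turtle-like syntax (real RDF uses full IRIs; the ex: prefix here is a hypothetical namespace):

```python
# Simplified sketch of RDF triples; real RDF identifies resources with
# full IRIs, and "ex:" is an invented namespace prefix.
triples = [
    ("ex:Card", "rdf:type", "rdfs:Class"),
    ("ex:PlayingCard", "rdfs:subClassOf", "ex:Card"),
    ("ex:aceOfSpades", "rdf:type", "ex:PlayingCard"),
]

def to_turtle(triples):
    """Serialize triples into simplified Turtle-like statements."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

print(to_turtle(triples))
```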

Ontology Editors

Ontology editors are applications designed to assist in the creation or manipulation of ontologies. They often express ontologies in one of many ontology languages. Some provide export to other ontology languages.

Among the most relevant criteria for choosing an ontology editor are the degree to which the editor abstracts from the actual ontology representation language used for persistence and the visual navigation possibilities within the knowledge model. Also important features are built-in inference engines and information extraction facilities, and the support of meta-ontologies such as OWL-S, Dublin Core, etc. Another important feature is the ability to import & export foreign knowledge representation languages for ontology matching. Ontologies are developed for a specific purpose and application.

Ontology Learning

Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. Because building ontologies manually is a labor-intensive and time-consuming process, there is a strong incentive to automate it. Information extraction and text mining methods have been explored to automatically link ontologies to documents.

Galaxy Consulting has 16 years of experience working with ontologies. Please contact us for a free consultation.

Monday, February 29, 2016

Data Security

Data security should be a priority in your organization.

For hackers, large-scale data breaches such as Home Depot, Neiman Marcus, and Staples are gold mines. For businesses, keeping valuable customer data out of the hands of cyber-thieves is a constant battle. Companies need to safeguard against every possible vulnerability across their entire infrastructure.

In 2014, the total number of reported data breaches in the United States hit a record high of 783, averaging about 15 per week, based on information compiled by the Identity Theft Resource Center (ITRC).

Companies, on average, can expect to encounter 17 malicious codes, 12 sustained probes, and 10 unauthorized access incidents each month, according to research from the Ponemon Institute, a provider of independent research on privacy, data protection, and information security policy.

Despite the growing number of attacks, many companies are still not doing nearly enough to secure their customers' personal and financial information. For many companies, the wake-up call only comes after they have fallen victim to a large-scale, high-profile breach.

Forrester Research noted that outside of banking and national defense, many industries are "woefully immature" when it comes to making the necessary investments in data breach protection, detection, and response.

This prompted Forrester to conclude that most enterprises will not be able to respond to a data breach without undermining their customers' trust or dragging their own corporate reputations through the mud.

Companies need to prevent data breaches from happening. They need to have an incident response and crisis management plan in place. Efficient response to the breach and containment of the damage has been shown to reduce the cost of breaches significantly and goes a long way toward reassuring customers who might have been thrown into a panic.

The first step toward that goal is having a high-level company executive who is responsible for data security. The key to addressing information security is first understanding what customer information is stored in company databases. Create a data inventory and determine what data is sensitive. Then segment out the sensitive and nonsensitive data.

Systematically purge the data that your organization no longer needs.

Take an inventory of all of your IT assets and business processes and analyze them for vulnerabilities that could expose sensitive data, for example, cardholder data. The next step would be to fix those vulnerabilities. This assessment should be performed at least once a year. Make sure that the company's data security program meets industry best practices, government regulations, and the company's business objectives.
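A data inventory can start as something as simple as a script that flags which stored fields are sensitive, so they can be segmented and purged when no longer needed. A sketch with hypothetical table and field names:

```python
# Hypothetical data inventory: flag sensitive fields so they can be
# segmented from nonsensitive data and purged when no longer needed.
SENSITIVE = {"ssn", "card_number", "card_cvv", "password", "dob"}

inventory = {
    "customers": ["name", "email", "dob", "ssn"],
    "orders":    ["order_id", "sku", "card_number"],
    "catalog":   ["sku", "title", "price"],
}

def sensitive_fields(inventory):
    """Return {table: [sensitive columns]} for tables that hold any."""
    found = {}
    for table, columns in inventory.items():
        hits = [c for c in columns if c in SENSITIVE]
        if hits:
            found[table] = hits
    return found

print(sensitive_fields(inventory))
# {'customers': ['dob', 'ssn'], 'orders': ['card_number']}
```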

Make sure your web site uses encryption for processing customers' data. Once your company no longer needs customer data, such as payment card numbers or any other personal information, it should be securely deleted.

It is crucial for companies to segment data so that a breach in one file does not open other data repositories.

Companies should use Internet firewalls at all times, keep their operating systems and other business software up to date, and install and maintain antivirus and anti-spyware programs. Because many companies allow employees to use their own mobile devices, including smartphones, tablets, and laptops for business, these devices should be protected in the same way. Limit some company applications and data so that employees can't access them from unsecured mobile devices.

It is extremely important that companies limit data access to those employees who need it, by setting up appropriate security permissions in your data systems. You can put data logging in place, with alarms for when something out of the ordinary happens. This way you will know when someone is doing something with the data that does not coincide with their job description.
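That kind of logging with out-of-the-ordinary alarms can be sketched as a simple check of each access event against the employee's permitted role (role and resource names below are hypothetical):

```python
# Hypothetical role-based audit: raise an alarm when an access event
# falls outside what the employee's job description permits.
ROLE_PERMISSIONS = {
    "support_agent": {"customer_profile"},
    "billing":       {"customer_profile", "payment_records"},
}

def audit(access_log):
    """Return events that do not match the actor's permissions."""
    alarms = []
    for user, role, resource in access_log:
        if resource not in ROLE_PERMISSIONS.get(role, set()):
            alarms.append((user, role, resource))
    return alarms

log = [
    ("dana", "support_agent", "customer_profile"),   # normal
    ("dana", "support_agent", "payment_records"),    # out of the ordinary
]
print(audit(log))  # [('dana', 'support_agent', 'payment_records')]
```

In a production system the same check would run against real access logs and feed an alerting pipeline rather than returning a list.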

Contact centers are vulnerable to hackers, who use interactive voice response (IVR) systems for surveillance and data gathering as a precursor to phishing schemes aimed at agents, who are unwittingly coaxed into giving out sensitive information to unauthorized callers. In most cases, the call center agents are tricked by skilled fraudsters who use a variety of social engineering techniques to get them to break normal security procedures. The only real defense is proper training and protocols.

As many as 35% of data breaches have started with basic human error, such as sending an email with personal information to the wrong person or storing company files on laptops or tablets that were lost or stolen.

Even worse than careless employees or outside hackers, though, are the contact center agents who knowingly engage in illegal activities, using their jobs to gain access to information that they can sell or use on their own.

To help contact centers deal with this threat, call center technology can completely prevent skimming by agents. At the point in the transaction where the agent needs to collect the credit card information, systems can automatically pause recordings. With other solutions, the call can be transferred to an IVR system. Agent-assisted solutions can allow agents to collect credit card information without ever seeing or hearing it. The agent remains on the phone and customers enter their credit card information directly into the system using their phones' keypads. The standard dual-tone multi-frequency tones are converted to monotones so the agent cannot recognize them and they cannot be recorded.

In this environment, contact center managers and other employees need to be trained to spot at-risk employee behaviors. Training alone, though, is not enough. Employees need to know that there will be serious repercussions for violations of company practices and security protocols. Companies need to have a clearly defined formal policy so that employees know if they violate it, there are consequences that they will have to face.

Data security, therefore, has to be a business-wide endeavor. IT professionals, company executives, and employees at every level must work together to protect critical data assets from internal and external threats. Companies need to foster a security-aware culture in which protecting data is a normal and natural part of everyone's job.

Data security is also a constant game of what-ifs. The only certainty is that cyber-criminals will never stop learning and sharing information that will help them to get into high-profile targets. They will never stop trying to break into corporate databases. The information is just too valuable on the black market. The key is to make sure that you are not leaving the front door open for hackers to get in.

Galaxy Consulting has 16 years of experience protecting organizations' data. We have done it for many companies. We can do the same for you! Contact us today for a free consultation!

Saturday, February 13, 2016

Successful Self-Service Strategy

When it comes to customer service, simplicity is critical. Companies can improve customer experiences primarily by limiting the amount of effort it takes for customers to find answers to their questions and accomplish their tasks. Here lies the appeal of Web self-service, which for many consumers has become the preferred communication channel.

Instantly available, 24/7 online customer self-service portals are gaining ground over conventional agent-assisted support, marking a significant shift in consumer attitudes toward the technology. And, contrary to popular belief, interest in Web self-service technologies is not just coming from younger consumers. The technology is changing the behavior of consumers of all generations. In fact, a recent study by Forrester Research found that 72% of consumers, regardless of age, prefer self-service to picking up the phone or sending an email when it comes to resolving support issues. This certainly is welcome news for organizations looking to cut customer service costs and maximize revenue.

There are several elements to consider for a successful self-service strategy.

The success of Web self-service depends on the quality and quantity of the information available and the ease with which it can be accessed. Online customers are extremely impatient and information-hungry, so the material available to customers through self-service needs to be succinct and direct, even in response to queries that are not.

The self-service option has to be easy to find on the Web site. To call more attention to the portal, organizations can prominently place a link to the self-service portal on the homepage and other common support pages that feature company, product, and services information. And, since a self-service portal is an extension of a company's Web site, it should have the same look and feel as the rest of the site.

Once on the portal, the 80/20 rule applies: assume that 80% of site visitors are looking for about 20% of the content, so that 20% should be easy to find.
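One way to find that 20% is to rank content by demand. A sketch that identifies the smallest set of pages covering 80% of views (page names and counts below are invented):

```python
# Invented page-view counts; find the fewest pages covering 80% of demand.
views = {"reset-password": 500, "billing-faq": 300, "api-docs": 120,
         "returns": 50, "careers": 30}

def top_content(views, coverage=0.8):
    """Return the most-viewed pages that together cover `coverage` of views."""
    total = sum(views.values())
    covered, chosen = 0, []
    for page, count in sorted(views.items(), key=lambda kv: -kv[1]):
        if covered >= coverage * total:
            break
        chosen.append(page)
        covered += count
    return chosen

print(top_content(views))  # ['reset-password', 'billing-faq']
```

Here just two of five pages account for 80% of demand, so those two should be the easiest to find on the portal.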

As for the content itself, it should be clear, to the point, and easy to understand. This can be achieved by including graphic elements, such as diagrams, charts, and bullet points. When doing so, make sure the graphics are optimized for the Web. If they're not, the Web site could take too long to load, which might cause some customers to abandon it for a more costly agent-assisted channel. Consider keeping content to an eighth-grade reading level, so the average 13- or 14-year-old can make sense of it.

Ensuring accessibility also means that the site should support a variety of Internet browsers, operating systems, assistive technologies for the blind, and, of course, mobile platforms. The latter is becoming more important, especially when one considers that almost a third of all Web traffic today comes from mobile devices.

To make a self-service section even more effective, it can be combined with an automated guidance system that enables site visitors to enter questions and then takes them to specific responses without forcing them to scan an entire database for the answer they need.

One such system is marketed by WalkMe, a San Francisco start-up that enables Web site owners to enhance their online self-service options with interactive on-screen step-by-step instructions displayed as pop-up balloons. The balloons can be programmed to appear automatically when the site visitor rolls his cursor over certain items or when he clicks on a help button.

Customers who can't find answers on their own in a self-help knowledge base might be inclined to call a customer service line, but they are more likely to type their question into a Google search bar, and companies have no control over the results that the Google search returns. This presents a number of problems for a company. Not only has the visitor left your site, but he can find information that you may not want him to see.

Virtual agents are another option companies can use to help customers find what they're looking for. IntelliResponse's Virtual Agent technology, for example, simplifies Web self-service. The software helps site visitors find the single right answer to their questions. To keep information current and relevant, it strips outdated FAQ entries, learns over time how to group and respond to questions, and captures data about customer service queries to find precisely what customers need, so your organization can fine-tune how it presents information on its Web site.

Companies can also use Web chat to help customers through the self-service maze. It's a tool that's already widely accepted by consumers and businesses alike. LiveWebAssist chat enables agents to push prepared content, such as photos, graphics, or Web links, to customers on the site with a single click.

Along with chat and virtual agents, companies can use assisted browsing, or cobrowsing, to move self-service interactions along. This functionality lets the agent—or possibly the virtual agent—temporarily take control of a customer's computer screen. Not only does this improve the self-service experience, but, when interactions move to the contact center through either phone or chat, co-browsing can reduce the average handling time.

It is important to measure results. Perhaps the most effective measure is the number of customer questions that are submitted and get a response. This can apply to those questions where the customer finds the answer on her own as well as those that are answered through a social community or by a representative of the company. Consider these elements:
  • the number of issues resolved per month through social communities. This includes the number of new questions posed to and answered by the community, the percentage of issues resolved by members of the community rather than company employees, and the number of "this article helped me" votes received.
  • the number of issues resolved every month through FAQs and company knowledge bases. This includes the number of page views that both receive per month.
  • the average cost to resolve issues through channels that involve a company employee. These include phone, email, and chat.
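A minimal sketch of how these elements can be rolled up into a monthly view (all figures below are invented):

```python
# Sketch of a monthly self-service dashboard; all numbers are illustrative.
community_resolved = 420      # issues resolved via social communities
kb_resolved = 1300            # issues resolved via FAQs / knowledge base
assisted = {"phone": (600, 6.50), "email": (250, 2.80), "chat": (300, 3.20)}
#            channel: (issues resolved, average cost per resolution)

self_service_total = community_resolved + kb_resolved
assisted_total = sum(n for n, _ in assisted.values())
deflection_rate = self_service_total / (self_service_total + assisted_total)
assisted_cost = sum(n * c for n, c in assisted.values())

print(f"deflection rate: {deflection_rate:.0%}")    # share handled by self-service
print(f"assisted-channel spend: ${assisted_cost:,.2f}")
```

Tracking the deflection rate and assisted-channel spend month over month shows whether self-service is actually absorbing demand from the costlier channels.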
And then, as with any customer service channel, it's important to collect user feedback about the self-help experience. This can be done through customer surveys, Web analytics and search logs, customer interviews and focus groups, usability testing, and collaborative design processes.

For self-service to be done right, it should be in the interest of the customer. You do not want customers to use self-service because they are forced to. You want them to use it because it serves their needs.

Galaxy Consulting has 16 years of experience in optimizing self-service on companies' web sites. We can do the same for you. Contact us today for a free consultation!

Saturday, January 30, 2016

The Power of Knowledge

Your contact center agents must be available and equipped with the knowledge they need to handle customer issues quickly and efficiently.

However, with the explosion of new channels such as the Internet, social media, and mobile computing, many companies lack the tools and processes required to empower their employees to deliver a great customer experience.

Organizations struggle with static, siloed knowledge systems that not only provide redundant, often inaccurate information, but are costly to maintain.

Companies that have invested in creating a powerful state of knowledge are delivering great customer experiences, which translate into sustainable growth and profitability.

To achieve a powerful state of knowledge, companies must be able to:

1. Establish a single knowledge base. Consolidate your knowledge into one single source of truth and make it available to agents and customers across your web site, mobile, and social channels. Tie knowledge to analytics and key performance indicators (KPIs) to present valuable content and address information gaps. This new level of visibility makes it easy for agents to:
  • Update knowledge
  • Identify potential customer issues
  • Provide fast, accurate resolution
Driven by market demand for enhanced self-help services and internal demand for productivity improvements, you can transform your customer and employee support systems by consolidating your existing separate knowledge repositories into one central cross-channel knowledge base. This will raise efficiency and reduce your agents' cost-per-call, and it will also improve the quality of the support you provide to your customers.

2. Monitor social conversations. Social media has evolved knowledge management from static data residing in a structured database to dynamic, unstructured data created in every social interaction. As a result, you must monitor customers' social conversations on Facebook, Twitter, and other sites to analyze sentiment and to prioritize and respond to service issues.

3. Embrace unified indexing. Rather than relying on traditional knowledge base technology alone, many organizations are attempting to embrace the chaos that Big Data, social media, and the move to the cloud create, yet they still face challenges bringing it all together to make the most of the information.

Unified indexing and insight technology enables just that: tapping into full knowledge ecosystems and providing support agents, employees, and customers with contextually relevant information. This access to actionable insight has helped companies achieve dramatic results, such as a 30%+ reduction in case resolution time and a 10%+ increase in customer self-service satisfaction.

The need to make the most of organizational knowledge, to get as much value from it as possible, is greater now than ever before. Organizations of all sizes are finding themselves with overwhelming amounts of information, often locked away in silos: different systems, different departments, different geographies, and different data types. This makes it impossible to connect the dots and make sense of critical business information.

Traditional KM initiatives have considered knowledge a transferable commodity that can be stored in a system of record and used mechanically. Yet, in reality, knowledge goes beyond data and information, and is personal and contextual.

Data is factual raw material: measurements, statistics, or facts. In and of itself, data provides limited value; it must be organized into information before it can be interpreted. Information is data in context: organized, categorized, or condensed. Knowledge is the human capability to process information to make decisions and take action.

The building blocks of knowledge are everywhere: fragmented, complex, unstructured, and often outside the systems of record (in the cloud, in social media, etc.). The key is to bring it all together and present it in context to users.

Unified indexing and insight technology is the way that forward-thinking companies access knowledge and experts. The technology brings content into context: assembling fragments of structured and unstructured information on demand and presenting them, in context, to users.

Designed for the enterprise, unified indexing and insight technology is built to bring together data from heterogeneous systems (e.g., email, databases, CRM, ERP, social media), locations (cloud and on-premise), and the varied data formats of business today. It securely crawls those sources, unifies the information in a central index, normalizes it, and performs mash-ups on demand.

The technology can be context-aware, relying on the situation of the user to anticipate and proactively offer enriched, usable content directly related to the situation at hand such as solutions, articles, experts, etc. from across the vast and growing ecosystem.
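As a rough sketch under invented data, a unified index can be thought of as normalizing records from heterogeneous sources into one structure and then ranking results by the user's context:

```python
# Invented records from two heterogeneous sources (a CRM and email),
# normalized into one index with a context-aware lookup.
crm = [{"id": 1, "title": "Acme renewal case", "text": "contract renewal issue"}]
email = [{"subject": "Renewal pricing question", "body": "pricing for renewal"}]

def normalize(source, records, title_key, text_key):
    """Map source-specific fields onto one common schema."""
    return [{"source": source, "title": r[title_key], "text": r[text_key]}
            for r in records]

index = normalize("crm", crm, "title", "text") + \
        normalize("email", email, "subject", "body")

def search(index, context_terms):
    """Context-aware lookup: rank documents by overlap with the user's context."""
    def score(doc):
        words = (doc["title"] + " " + doc["text"]).lower().split()
        return sum(w in words for w in context_terms)
    return sorted((d for d in index if score(d) > 0), key=score, reverse=True)

hits = search(index, ["renewal", "pricing"])
print([h["source"] for h in hits])  # ['email', 'crm']
```

Real implementations add secure crawling, incremental indexing, and relevance models, but the normalize-then-rank-in-context shape is the same.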

Best Practices for a Higher Return on Knowledge

Bringing relevant content to your agents and customers will increase productivity, create happier employees and drive higher customer satisfaction. Follow these best practices to achieve a higher return on knowledge:

1. Consolidate the knowledge ecosystem. Bring together information from enterprise systems, data sources, employee and customer social networks, social media, etc. Connect the overwhelming amount of enterprise and social information.

2. Connect people to knowledge in context. Connect users to the information they need, no matter where it resides, within their context and in real-time.

3. Connect people to experts in context. Connect the people associated with the contextually relevant content to assist in solving a case, answer a key challenge or provide additional insight to a particular situation.

4. Personalize information access. Present employees and customers with information and people connections that are relevant, no matter where they are, and no matter what they are working on.

Investing in the creation of a powerful state of knowledge builds a defensible advantage in delivering great customer experiences. Those experiences lead to sustainable growth and profitability by driving customer acquisition, customer retention, and operational efficiency.

Service and support agents can solve cases faster. No longer do agents need to search across multiple systems or waste time trying to find the right answer or someone who knows the answer. They will have relevant information about the customer or case at hand right at their fingertips: suggested solutions, recommended knowledge base articles, similar cases, experts who can help, virtual communication timelines, etc.

Customers can solve complex challenges on their own. Logging in to customer self-service, customers will see a personalized and relevant view of information from the entire knowledge ecosystem (from inside or outside your company), intuitively presented so that they can solve their own challenges.

Employees can stop reinventing the wheel. When every employee can access relevant information, locate experts across the enterprise, and know what does and does not exist, they can finally stop reinventing the wheel.

Galaxy Consulting has 16 years of experience in this area. We have done this for several companies, and we can do the same for you.

Saturday, January 9, 2016

Personalization in Content Management

Content personalization makes your users' experience with a content management system more rewarding by targeting specific content to specific people. One simple example is showing code samples to developers and whitepapers to business users.

Segment Your Users

The first step to delivering a personalized customer experience is to segment your visitors so you can present them with what’s most relevant to them.

Any good personalization strategy starts with a fundamental understanding of your customers’ behavior, needs, and goals. Upfront research goes a long way toward building out the personas and gaining the insight from which to develop an approach to personalization. This may already be gathered through ongoing customer insight or voice-of-the-customer programs, or be more ad hoc and project-based. Regardless of the approach, be sure that your personalization is grounded in a solid understanding of your users.

The next step in the process is to define the audience goals and objectives so you can tell whether the personalization efforts are successful. These may include top-line key performance indicators such as conversion rate or online sales, or be more specific to the personalization scenarios (e.g., landing page bounce rate). Be as specific as possible and ensure that your measures of success directly relate to the areas your personalization efforts are meant to impact.

Personalize Your Content

In order to provide personalized content, it is necessary to determine which content is most effective for each audience segment. This content mapping process can be done alongside the audience segmentation model to ensure you have the right content for the right user at the right stage. If we use the business users and developers example from above, we can personalize the home page for the developers segment to talk about things related to the technology and how it can be extended while we serve business users with information related to how they can achieve their goals using this solution.

The biggest mistake organizations make with personalization is thinking too big and getting overwhelmed before they even start. It is exhausting to even start thinking about how to deliver the right message to the right person at every single interaction. Starting with a few specific personalization scenarios can help you more rapidly adopt the processes and technology and see what works on a small scale before expanding.

Here are a few example rules-based scenarios for an insurance company:
  • If a user in a specific region of the United States visits the site, show them regionally specific rates and agent information.
  • If a user has shown a specific interest in a vehicle, show images and offers that include that vehicle.
  • If a user is an existing customer (as identified through specific site actions or e-mail campaigns) feature tools and content that help them maintain their relationship with you.
  • If a user has already subscribed to the newsletter, replace the subscribe to newsletter call-out with a different offer or high value piece of content.
As you begin to think about the overall customer journey and digital experience, this list of scenarios is going to be far more detailed. However, it should not be more complicated than is necessary to accomplish the organizational goal of making it easier for audience segments to achieve their objectives while having the best possible user experience.
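Rules like these reduce to simple condition-action pairs. A sketch of a rules-based engine for the insurance scenarios above (visitor attribute names and content identifiers are hypothetical):

```python
# Hypothetical visitor-profile attributes drive which content variant shows.
RULES = [
    (lambda v: v.get("region") == "midwest", "midwest-rates-and-agents"),
    (lambda v: "suv" in v.get("interests", []), "suv-images-and-offers"),
    (lambda v: v.get("is_customer"), "account-maintenance-tools"),
    (lambda v: v.get("subscribed"), "premium-content-offer"),
]
DEFAULT = "generic-homepage"

def pick_content(visitor):
    """First matching rule wins; fall back to the default experience."""
    for condition, content in RULES:
        if condition(visitor):
            return content
    return DEFAULT

print(pick_content({"region": "midwest"}))   # midwest-rates-and-agents
print(pick_content({"subscribed": True}))    # premium-content-offer
print(pick_content({}))                      # generic-homepage
```

Note the design choice: ordering the rules encodes their priority, so a regional visitor who is also a subscriber sees the regional variant first.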

The process of content mapping and scenario planning will inevitably surface holes in the inventory of your existing content. Obviously, they will need to be filled. This will require some combination of recreating existing content for different audiences in addition to generating some which is completely new. Not to mention the ongoing process of updating and managing these content variations based on what’s working and what’s not.

Personalization in CMS

It would help to develop a content model and taxonomy for your CMS that is aligned to your audience segmentation approach. By tagging content appropriately, you can often automate many areas of personalization; for example, display all white papers from a specific vertical industry.
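Tagging against a taxonomy makes that kind of automation a simple filter. A sketch using invented content items and tags:

```python
# Invented content items tagged against a simple taxonomy.
content = [
    {"title": "Claims Handling in Insurance", "type": "whitepaper",
     "industry": "insurance"},
    {"title": "Retail Checkout Trends", "type": "whitepaper",
     "industry": "retail"},
    {"title": "Insurance API Guide", "type": "code-sample",
     "industry": "insurance"},
]

def by_tags(content, **tags):
    """Return items matching every given tag, e.g. all insurance whitepapers."""
    return [c for c in content
            if all(c.get(k) == v for k, v in tags.items())]

for item in by_tags(content, type="whitepaper", industry="insurance"):
    print(item["title"])  # Claims Handling in Insurance
```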

Regardless of what tool is used to manage all of this complexity, it will require custom configuration. Some systems are naturally more user-friendly than others, but none of them come out of the box knowing your audience segments, content mapping, and scenarios. All of this information, once determined and defined, will need to be entered into the system.

Rules-based configuration is the most common type of work you’ll do with a CMS: going through a series of "If, Then" statements to tell the CMS what content to show to which users. It’s important to have someone inside your organization or an agency partner who owns the product strategy for personalization and can ensure it is applied consistently and within the best practices for that specific platform.

The Sitefinity content management system has a simple interface for defining segments through various criteria, such as where the visitor came from, what they searched for, their location, the duration of their visit, etc. You can define custom criteria and use any combination of AND/OR criteria to define your segments.

Testing Your Personalization

Once your audience and content plans are sorted out and the technology is configured, it is time to test the experience from the perspective of each segment and scenarios within segments. You should test each variation on multiple browsers and mobile devices.

Some CMSs allow you to impersonate a segment to test your results. For example, Sitefinity allows you to impersonate any segment and preview the customer experience on any device with the help of mobile device emulators. This way you can be sure how your website looks for every audience on any device.

Measure the Results

After you’ve segmented your audiences, personalized their experience, and checked how your website, portal, or CMS is presented to different audiences on different devices, you should measure the results of your work. They can be measured by conversions and other website KPIs for the different segments, compared either with the default presentation for non-segmented visitors or with the KPIs prior to personalization. Measuring will help you iterate and improve the results further.

Going forward, you will be able to revise previous assumptions with new, substantially more reliable information. Using the built-in analytics within your CMS or a third-party analytics tool, you’ll be able to watch how each segment interacts with the personalized content and whether it was effective.
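Comparing a segment's conversion rate against the non-segmented baseline can be sketched as follows, with made-up numbers:

```python
# Sketch of measuring personalization results: compare a segment's
# conversion rate with the non-segmented baseline. Figures are invented.

def conversion_rate(conversions, visits):
    return conversions / visits if visits else 0.0

def lift(segment_rate, baseline_rate):
    """Relative improvement of the personalized experience."""
    return (segment_rate - baseline_rate) / baseline_rate

baseline = conversion_rate(conversions=40, visits=2000)   # 2.0%
segment  = conversion_rate(conversions=90, visits=3000)   # 3.0%

print(f"lift: {lift(segment, baseline):.0%}")  # lift: 50%
```

The same comparison can be run against pre-personalization KPIs instead of a concurrent non-segmented baseline, whichever control you have available.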

Galaxy Consulting has successfully implemented content personalization for several clients. We can do the same for you. Contact us today for a free consultation.

Tuesday, December 29, 2015

Is Your Web Site Optimized for Mobile Devices?

Many people are highly dependent on their mobile devices for everyday interactions, including mobile commerce. Our society is becoming highly mobile and connected. In the latest Shop.org and Forrester Research Mobile Commerce Survey, it's estimated that U.S. smartphone commerce will grow to $31 billion by 2016.

Those organizations that can best serve mobile customers will have a competitive advantage. With a surge in mobile traffic comes the added potential to connect with and sell to customers through mobile commerce. Having a concrete mobile infrastructure plan and strategy is no longer optional, as it was in recent years, but a must in any customer-facing situation.

But despite this upward trajectory, retailers and other consumer-oriented companies still express some hesitancy about investing in multi-device environments, and some apprehension remains when it comes to moving forward with mobile planning. Companies struggle to maintain uniformity across device experiences when there are various screen sizes, operating systems, hardware specifications, and loading speeds to consider. One fear is of the unknown, but security, data management, and simply proving a use case and a subsequent return on investment are concerns as well.

The key issue in smartphone shopping continues to be the form factor, which can make navigation more difficult for customers. In addition to slower page load times on smartphones, some customers are concerned about the security of the transaction or simply complain that the experience just is not the same.

A successful mobile experience, like many other customer experiences, is about fulfilling customers' needs. According to surveys of mobile users, first-time users of a mobile site or app tend to be less satisfied with their mobile experiences than frequent users because of their unfamiliarity with layouts, navigation, and functionality. Knowing the different kinds of mobile devices customers use is critical, so it is important to develop a strategy that encompasses all types of customer scenarios.

Before embarking on any one mobile strategy, it is important to learn how your company's customers most likely would use their mobile devices. In addition to enabling customers to interact how they wish, any company looking to optimize its mobile presence must naturally consider the effects on the business as well, and how mobile usage will impact other lines of business and cross-channel marketing efforts.

In addition to justifying a use case and ROI for mobile, companies that wish to get into the mobile side of business must be aware of its limitations. Under ideal circumstances, companies want to engage with their customers and cultivate a one-to-one relationship while taking into consideration CAN-SPAM and privacy regulations. It is very important to adjust taxonomy and information architecture for the mobile experience. Many searches are made on mobile devices, so search also has to be optimized.

Optimizing your mobile site or developing a native application is no simple task. There are security considerations, as well as device-specific functions, to consider. Don't take a cookie-cutter approach. Some companies make the mistake of simply cloning online information without considering that consumer behavior on the mobile phone is dramatically different. Justify mobile ROI with consumer insight.

Consider security: create a military-grade security infrastructure while maintaining a user-friendly design, and hire a strong user interaction designer to design the security setup interaction.

Utilize mobile wisely. Once someone has discovered your brand through search, referral, or a marketing message, and they download the app, this may indicate a loyal customer. The app can be a great way to maximize and monetize that loyal relationship because it's in a controlled environment.

Galaxy Consulting has experience optimizing information architecture and search for mobile devices. Contact us today for a free consultation.

Monday, December 7, 2015

Data Lake

A data lake is a large storage repository and processing engine. Data lakes focus on storing disparate data and ignore how or why data is used, governed, defined and secured.

Benefits

The data lake concept hopes to solve information silos. Rather than having dozens of independently managed collections of data, you can combine these sources in the unmanaged data lake. The consolidation theoretically results in increased information use and sharing, while cutting costs through server and license reduction.

Data lakes can help resolve the nagging problem of accessibility and data integration. Using big data infrastructures, enterprises are starting to pull together increasing data volumes for analytics or simply to store for undetermined future use. Enterprises that must use enormous volumes and myriad varieties of data to respond to regulatory and competitive pressures are adopting data lakes. Data lakes are an emerging and powerful approach to the challenges of data integration as enterprises increase their exposure to mobile and cloud-based applications, the sensor-driven Internet of Things, and other aspects.

Currently the leading example of a data lake implementation is Apache Hadoop. Many companies also use cloud storage services such as Amazon S3, along with other open source tools such as Docker, as a data lake. There is growing academic interest in the concept of data lakes.

Previous approaches to broad-based data integration have forced all users into a common predetermined schema, or data model. Unlike this monolithic view of a single enterprise-wide data model, the data lake relaxes standardization and defers modeling, resulting in a nearly unlimited potential for operational insight and data discovery. As data volumes, data variety, and metadata richness grow, so does the benefit.

The data lake helps companies collaboratively create models or views of the data and then manage incremental improvements to the metadata. Data scientists and business analysts use the newest lineage-tracking tools, such as Revelytix Loom or Apache Falcon, to follow each other’s purpose-built data schemas. The lineage-tracking metadata is also placed in the Hadoop Distributed File System (HDFS), which stores pieces of files across a distributed cluster of servers in the cloud, where the metadata is accessible and can be collaboratively refined. Analytics drawn from the data lake become increasingly valuable as the metadata describing different views of the data accumulates.

Every industry has a potential data lake use case. A data lake can be a way to gain more visibility or to put an end to data silos. Many companies see data lakes as an opportunity to capture a 360-degree view of their customers or to analyze social media trends.

Some companies have built big data sandboxes for analysis by data scientists. Such sandboxes are somewhat similar to data lakes, albeit narrower in scope and purpose.

Relational data warehouses and their big price tags have long dominated complex analytics, reporting, and operations. However, their slow-changing data models and rigid field-to-field integration mappings are too brittle to support big data volume and variety. The vast majority of these systems also leave business users dependent on IT for even the smallest enhancements, due mostly to inelastic design, unmanageable system complexity, and low system tolerance for human error. The data lake approach helps to solve these problems.

Approach

The first step in a data lake project is to pull all data together into one repository while giving minimal attention to creating schemas that define integration points between disparate data sets. This approach facilitates access, but the work required to turn the data into actionable insights remains a substantial challenge. While integration of the data takes place at the Hadoop layer, contextualization of the metadata takes place at schema creation time.

Integrating data involves fewer steps because data lakes don’t enforce a rigid metadata schema as do relational data warehouses. Instead, data lakes support a concept known as late binding, or schema on read, in which users build custom schema into their queries. Data is bound to a dynamic schema created upon query execution. The late-binding principle shifts the data modeling from centralized data warehousing teams and database administrators, who are often remote from data sources, to localized teams of business analysts and data scientists, who can help create flexible, domain-specific context. For those accustomed to SQL, this shift opens a whole new world.
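As a rough illustration of late binding (the record layout and field names here are made up), raw records can land in the lake untouched, with a normalizing schema applied only when a query reads them:

```python
# Sketch of "late binding" / schema-on-read: heterogeneous records are
# stored as-is; a schema is bound only at query execution time.
import json

raw_lake = [                       # raw records, stored untouched
    '{"cust": "A12", "amt": "19.99", "ts": "2015-11-01"}',
    '{"customer_id": "B07", "amount": 5.5}',
]

def read_with_schema(raw_records):
    """Apply a query-time schema: normalize field names and types."""
    for line in raw_records:
        rec = json.loads(line)
        yield {
            "customer": rec.get("cust") or rec.get("customer_id"),
            "amount": float(rec.get("amt") or rec.get("amount")),
        }

for row in read_with_schema(raw_lake):
    print(row)
```

A different analyst could bind a different schema to the same raw records, which is the flexibility the late-binding principle is meant to provide.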

In this approach, the more that is known about the metadata, the easier the data is to query. Pre-tagged data, such as Extensible Markup Language (XML), JavaScript Object Notation (JSON), or Resource Description Framework (RDF), offers a starting point and is highly useful in implementations with limited data variety. In most cases, however, pre-tagged data is a small portion of incoming data formats.

Lessons Learned

Some data lake initiatives have not succeeded, producing instead more silos or empty sandboxes. Given the risk, many organizations are proceeding cautiously. Some companies have created big data graveyards, dumping everything into them and hoping to do something with the data down the road.

Companies can avoid creating big data graveyards by developing and executing a solid strategic plan that applies the right technology and methods to the problem. Hadoop and the NoSQL (Not only SQL) category of databases have potential, especially when they enable a single enterprise-wide repository and provide access to data previously trapped in silos. The main challenge is not creating a data lake, but taking advantage of the opportunities it presents. A means of creating, enriching, and managing semantic metadata incrementally is essential.

Data Flow in the Data Lake

The data lake loads extracts, irrespective of their format, into a big data store. Metadata is decoupled from its underlying data and stored independently. This enables flexibility for multiple end-user perspectives and maturing semantics.
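A minimal sketch of this decoupling, with an invented in-memory layout standing in for the big data store:

```python
# Sketch of storing metadata separately from the underlying data, so
# descriptions can be refined without touching the data itself.
# The stores and field names are invented for illustration.

data_store = {}      # raw content, keyed by id, stored as-is
metadata_store = {}  # descriptive metadata, maintained independently

def load(doc_id, raw_bytes, **metadata):
    data_store[doc_id] = raw_bytes            # any format accepted
    metadata_store[doc_id] = dict(metadata)   # decoupled description

def annotate(doc_id, **more_metadata):
    """Refine metadata later; the underlying data is untouched."""
    metadata_store[doc_id].update(more_metadata)

load("sensor-42", b"\x00\x01\x02", source="IoT feed", format="binary")
annotate("sensor-42", domain="manufacturing", quality="unverified")

print(metadata_store["sensor-42"])
```

Because the metadata lives apart from the data, different user groups can layer their own, maturing descriptions over the same stored extracts.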

How a Data Lake Matures

Sourcing new data into the lake can occur gradually and will not impact existing models. The lake starts with raw data, and it matures as more data flows in, as users and machines build up metadata, and as user adoption broadens. Ambiguous and competing terms eventually converge into a shared understanding (that is, semantics) within and across business domains. Data maturity results as a natural outgrowth of the ongoing user interaction and feedback at the metadata management layer, interaction that continually refines the lake and enhances discovery.

With the data lake, users can take what is relevant and leave the rest. Individual business domains can mature independently and gradually. Perfect data classification is not required. Users throughout the enterprise can see across all disciplines, not limited by organizational silos or rigid schema.

Data Lake Maturity

The data lake foundation includes a big data repository, metadata management, and an application framework to capture and contextualize end-user feedback. The increasing value of analytics is then directly correlated with the increase in user adoption across the enterprise.

Risks

Data lakes also carry risks. The most important is the inability to determine data quality or to trace the lineage of findings by other analysts or users who have previously found value in the same data in the lake. By definition, a data lake accepts any data, without oversight or governance. Without descriptive metadata and a mechanism to maintain it, the data lake risks turning into a data swamp. And without metadata, every subsequent use of the data means analysts start from scratch.

Another risk is security and access control. Data can be placed into the data lake with no oversight of the contents. Many data lakes are being used for data whose privacy and regulatory requirements are likely to represent risk exposure. The security capabilities of central data lake technologies are still in their early stages.

Finally, performance aspects should not be overlooked. Tools and data interfaces simply cannot perform at the same level against a general-purpose store as they can against optimized and purpose-built infrastructure.

Careful planning and organization of a data lake strategy are required to make such a project a success.

Monday, November 23, 2015

What is New in SharePoint 2016?

Microsoft releases a new version of SharePoint every three years. SharePoint 2016 public Beta version is available. The full version is expected in Spring 2016. Here is what is new in SharePoint 2016 version.

SharePoint 2016’s main goal is to bring the best of Office 365 Cloud technology to on-premises solutions. In this truly effective Hybrid model, organizations will be able to have the best of the Cloud, whilst keeping all their important information and data stored on-premises.

SharePoint Server 2016 has been designed to reduce the emphasis on IT and streamline administrative tasks, so that IT professionals can concentrate on core competencies and mitigate costs. Tasks that may have taken hours to complete in the past have become simple and efficient processes that allow IT to focus less on day-to-day management and more on innovation.

Main Focus

User Experiences
  • Mobile experiences
  • Personalized insights
  • People-centric file storage and collaboration
Infrastructure
  • Improved performance and reliability
  • Hybrid cloud with global reach
  • Support and monitoring tools
Compliance
  • New data protection and monitoring tools
  • Improved reporting and analytics
  • Trusted platform
MinRoles

You can now install just the role that you want on particular SharePoint 2016 servers. This installs only what is required for that role, and it ensures that all servers belonging to each role are compliant. You will also be able to convert servers to run new roles if needed, and you can look at the services running on a SharePoint 2016 server to see whether they are compliant.

Downtime for Updates

Downtime previously required to update SharePoint servers has been removed.

Mobile and Touch

Making decisions faster and keeping in contact are critical capabilities for increasing effectiveness in any organization. The ability for end users to access information while on the go is now a workplace necessity. In addition to a consistent cross-screen experience, SharePoint Server 2016 provides the latest technologies and standards for mobile push and information synchronization. With deep investment in HTML5, SharePoint 2016 provides capabilities that enable device-specific targeting of content. This helps to ensure that users have access to the information they need, regardless of the screen they choose to access it on.

SharePoint 2016 further empowers users by delivering a consistent experience across screens, whether using a browser on the desktop or a mobile device. Through this rich experience, users can easily transition from one client to another without having to sacrifice features.

App Launcher

The App Launcher provides a new navigation experience where all your apps are easily available from the top navigation bar. You can quickly launch your application, browse sites and access your personal files.

Improved Controls

Based on SharePoint Online and OneDrive for Business, SharePoint 2016 document libraries inherit the improved control surface for working with content, simplifying the user experience for content creation, sharing and management.

Content Sharing

SharePoint 2016 improves the sharing experience by making it more natural for users to share sites and files. You can just click the "Share" button at the top right corner of every page, enter the names of people you want to share with, and press Enter. The people you just shared with will get an email invitation with a link to the site.

SharePoint still uses powerful concepts like permission levels, groups and inheritance to provide this experience. Part of sharing is also understanding who can see something. If you want to find out who already has access to a particular site, you can go to the "Settings" menu in top right corner, click "Shared with", and you will see the names and pictures of people who have access to the site.

Large File Support

SharePoint 2016 provides support for uploading files up to 10GB.

Compliance Tools

Preventing data loss is non-negotiable, and over-exposure to information can have legal and compliance implications. SharePoint 2016 provides a broad array of features and capabilities designed to make certain that sensitive information remains that way, and to ensure that the right people have access to the right information at the right time.

New In-Place Hold Policy and Document Deletion Centers will allow you to manage time-based, organization-wide in-place hold policies to preserve items in SharePoint and OneDrive for Business for a fixed period of time, in addition to managing policies that can delete documents after a specified period of time.

Cloud Hybrid Search

Cloud hybrid search offers users the ability to seamlessly discover relevant information across on-premises and Office 365 content. With the cloud hybrid search solution, you index all your crawled content, including on-premises content, in your search index in Office 365. When users query your search index in Office 365, they get unified search results from both on-premises and Office 365 cloud services with combined search relevancy ranking.

Cloud hybrid search provides some key benefits to customers of both SharePoint 2013 and early adopters of SharePoint 2016 IT preview, such as:
  • the ability to reduce your on-premises search footprint;
  • the option to crawl in-market and legacy versions of SharePoint, such as 2007, 2010 and 2013, without requiring upgrade of those versions;
  • avoiding the cost of sustaining large indexes, as the index is hosted in Office 365.
With this new hybrid configuration, this same experience will also allow users to leverage the power of Office Graph to discover relevant information in Delve, regardless of where information is stored. You will not only be able to get back to all the content you need via Delve, but also discover new information in the new Delve profile experiences and even have the ability to organize content in Boards for easy sharing and access.

You will have to use the Office 365 Search for this to work. If SharePoint 2016 On-Premises users query against their On-Premises Search service, it will continue to give them local results only.

However, once available, this will allow users to fully embrace experiences like Delve in Office 365 and more to come in the future.

OneDrive Redirection

With SharePoint 2016, you can redirect your My Sites to your Office 365 subscription’s OneDrive for Business host. In other words, if users click on OneDrive, they will be redirected to their Office 365 My Site and no longer to the on-premises one. Although you can use document libraries in on-premises SharePoint, Microsoft's larger strategy pushes users to use OneDrive to manage files across all devices. This creates the ability to integrate that OneDrive cloud storage into your on-premises SharePoint.

Follow Sites

Now users can click on “Follow” both On-Premises and on their Office 365 and see them all in one place under the “Sites” app in the App Launcher.

Site Folders

The OneDrive for Business area aims to bring users to one place to help them work with their files regardless of where they are. You will also be able to navigate your Sites and their libraries from there.

Saturday, November 7, 2015

Content Management Systems Review - Vasont

Vasont is a component content management system. It has powerful capabilities to store, update, search, and retrieve content. It offers version control, integrated workflows, project management, collaborative review, translation management, and reporting to manage content and business processes.

Vasont provides opportunities for multi-channel publishing and editing in your favorite applications. In addition, it provides an advanced editorial environment built to maximize, manage, and measure content reuse. Unicode support enables multi-language implementations. It also integrates the ability to process content with reusable, event driven business logic as an integral part of the system.

Content is stored in an underlying Oracle database and can be imported, exported, and stored in a variety of formats, including XML, SGML, HTML, as well as other formats that are required as input documents or deliverable formats. This is possible because Vasont can store content separately from any specific tagging structure.

Vasont can be used to store and manage embedded multimedia in structured content. It can also be used to provide a consistent organization and hierarchy to unstructured business documents and other digital assets to provide an overall document management solution. Vasont stores both component-level graphics and unstructured business documents as multimedia components.

Content can be stored at a document or sub-document level and with any content assets such as graphics and references. Vasont has great power at the component level with content organized using XML as input and output. Content can be manipulated and reused at any level of granularity. It is easy to add metadata to existing content and take advantage of the richness that metadata can provide.

Vasont also excels at integrating XML and non-XML traditional document content to provide powerful content applications that can cross departmental or functional boundaries. It is effective in a variety of content scenarios or in combined scenarios, including:
  • highly structured XML or SGML content;
  • structuring unstructured information assets such as in regulatory environments;
  • documents, especially linked to workflow and business logic;
  • digital assets such as graphics.

Vasont allows the building of content relationships within and among these content scenarios. It provides the power to model information in an organization and share it across different divisions. It stores all types of content in one repository, for example, structured content (XML, HTML, SGML, text, and pointers), multimedia files, and unstructured documents (Word, Excel, and PDF files, and graphics).

In Vasont Administrator, an administrator can set up the rules of structure and apply any processing options needed to transform, validate, or redirect data. The administrator can store settings for loading, extracting, editing and viewing data; user permissions; and workflow. Administrative responsibility can be assigned to specific Associate Administrators so that multiple groups or departments can share the system and yet control their own setups.

The system includes Vasont Universal Integrator (VUI) for Arbortext Editor, Adobe FrameMaker, JustSystems XMetaL, Quark XML Author, or Microsoft Word. The VUI allows authors to work in a familiar environment and provides a frequently used subset of functionality available in Vasont to simplify the editing process.

Vasont High-Level Application Architecture

The main parts are the User Navigator and the Content Navigator. Users, their roles, and their permissions are set up using the User Navigator. The Content Navigator includes content definitions, content instances, workflow definitions, load and extract views, and business logic in the form of processing options.

There is a Vasont Application Programming Interface (API) for advanced customization and integration. The Vasont API allows for the development of:
  • custom user interfaces;
  • web access to Vasont;
  • processing options;
  • daemons.

Vasont Daemon Programs provide background processing routines that automate repetitive tasks such as extracting and loading content. Some customization is required to implement them.

Content Model

The content model and the corresponding rules of structure are defined by the administrator in the Vasont Administrator. These rules usually correspond closely to the structure rules defined in a Document Type Definition (DTD) or schema, but they may differ somewhat or may support multiple DTDs for different outputs. Structures may also be defined in Vasont, independent of a DTD, which is useful when storing documents and other digital assets that may need to be organized in a specific way but are not structured XML or SGML content. The rules of structure help guide you through the editing process by allowing you to place components in only the appropriate locations in a collection.

The Vasont Administrator is also used to define the big picture of how collections will be organized in Vasont, through the creation of content types and collection groups. These categories are represented in a tree or list view in Vasont and have symbols that represent them. A tree view shows the sequencing and grouping of collections.

The detailed items in a collection are called components. The top component in each tree view is called the primary. Normally a collection will contain many primaries.

Vasont has several classes of components, and components can be broken down into smaller chunks depending on the needs of the organization. The level of chunking is called granularity. It is essential to understand how your Vasont system has been configured so that you can find and edit the relevant material and maximize reuse. Granularity describes the smallest chunk of content stored in Vasont. A coarse level of granularity means that content is stored in large chunks. For example, you may have Book, Chapter, and Section components with no components defined at a level lower than Section. On the other hand, a very granular setup stores content in very small chunks, typically broken down into paragraph-level components or the equivalent.
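As an illustration of granularity (the sample markup here is invented, not a Vasont schema), the same content can be chunked at the Section level or at the paragraph level:

```python
# Sketch of granularity: the same XML broken into Section-level
# components versus paragraph-level components.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<Book><Chapter>
  <Section><Para>First point.</Para><Para>Second point.</Para></Section>
  <Section><Para>Third point.</Para></Section>
</Chapter></Book>""")

# Coarse granularity: whole sections are the smallest stored chunks.
coarse = [ET.tostring(s, encoding="unicode") for s in doc.iter("Section")]
# Fine granularity: individual paragraphs are the smallest chunks.
fine = [p.text for p in doc.iter("Para")]

print(len(coarse))  # 2 components at Section granularity
print(len(fine))    # 3 components at paragraph granularity
```

Finer granularity yields more opportunities for reuse, at the cost of managing many more components.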

Content types are the highest level of organization in Vasont and often serve as major divisions in content. Typically, different content types store content with very different content models, such as content used in different divisions or groups within a corporation. Content types are set up in the Vasont Administrator.

Content in each content type is organized into collections and optional collections groups. Inside of a content type called Publication, a collection such as Manuals is a grouping of similar content that follows the same structure. Depending on how similar the content model is, collections and collection groups within a single content type may share content. Collections in the same content type have similar content models so that content can be reused, moved, and referenced. Content in collections from different content types may be reused if the content types share similar raw components. Pointers are allowed from components in one collection to components in another collection and the collections can be in different content types.

Components are reusable chunks of content defined in the rules of structure for each collection. Although not required to, components usually correspond to elements in a document type definition (DTD). The three types of components are: text, multimedia, and pointer.

Metadata, or information about your content, helps you automate business logic and categorize, locate, filter, and extract content. Traditional types of metadata for topics include index entries that describe content or identifiers that can be used for cross-referencing or mapping context-sensitive help in software applications. Other examples of metadata include labeling content that applies to a particular customer or vendor, whether content should be published to an online help system or a printed manual, or other types of classifications. Metadata can be information that helps perform automated business logic through the use of Vasont Processing Options.

The Vasont Navigator provides an intuitive way to view, edit, reuse, and search content within a collection. Its hierarchical structure represents the organization of content in the system and icons indicate the state of items, including whether they have been included in a log. Components may be opened and closed individually or in groups. Open multiple Navigator windows to drag and drop content easily from one location to another, either within or across collections, rather than scrolling up and down the tree view.

Vasont provides powerful search capabilities to find and reuse content across the entire organization. The search function allows you to search for content across collection boundaries. When performing a cross-collection search, you are prompted to select the collections to search and then specify query criteria for the content desired.

The Vasont Content Ownership feature gives a designated user the right to assign ownership of specified content to an individual user or a group of users, which provides the exclusive right to alter that content. The designated user will have the right to assign ownership to a Primary component. Once ownership is assigned, the Vasont CMS recognizes users who have permission to perform add, delete, and change actions on the content, and prevents those who do not have ownership permissions from making changes.

Each and every piece of unique content is stored in the raw material only once. Vasont compares content in the same raw component or in aliased raw components to determine if the content has been used in more than one instance. If the text of the components is the same, it is stored in the raw material as a single component. Vasont's ability to automatically reuse content where it can, without any specific setup, is called implicit reuse.
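Implicit reuse can be sketched with content hashing; the hash-based storage here is an assumption for illustration, not Vasont's actual internal mechanism:

```python
# Sketch of implicit reuse: identical component text is stored in the
# raw material once, and each instance points at the shared copy.
import hashlib

raw_material = {}   # content hash -> component text (stored once)

def store(text):
    """Store a component; reuse the existing copy if the text matches."""
    key = hashlib.sha256(text.encode()).hexdigest()
    raw_material.setdefault(key, text)   # no-op when already present
    return key                           # instances keep only the key

a = store("Press the power button to start the device.")
b = store("Press the power button to start the device.")  # reused
c = store("Unplug the device before servicing.")

print(a == b, len(raw_material))  # True 2
```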

Depending on your setup, you may explicitly reuse content by referencing or “pointing to” relevant content from different contexts. For example, you may have a collection of shared procedure components that you can point to rather than storing the entire procedure in multiple locations.

Vasont can be used to store and manage embedded multimedia in structured content. It can also be used to provide a consistent organization and hierarchy to unstructured documents and other digital assets to provide an overall document management solution. Vasont stores both component-level graphics and unstructured documents as multimedia components.

Vasont offers a Translation Package that enables users to lower their overall translation costs by minimizing the amount of content that needs to be translated. This is possible because it keeps track of content that has already been translated and ensures it is not re-translated. It also measures the amount of savings for each translation project by identifying the percentage of words that have already been translated.

The package also offers Translation Management, which helps users manage projects and sub-projects by tracking dates, vendors, languages, and status information. A translation project is a module of content being translated into multiple languages (e.g., a topic being translated into French, German, and Chinese). Each sub-project covers one of those target languages (e.g., the French translation is a sub-project of the topic's translation project). You can submit your projects for quotes or send them for translation directly from Vasont's translation window, which also provides word counts for each translation project.
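The project/sub-project relationship can be pictured as a small data model with one record per target language. A hypothetical sketch (class names and fields are invented for illustration, not Vasont's schema):

```python
from dataclasses import dataclass, field

@dataclass
class SubProject:
    """One target language within a translation project."""
    language: str
    status: str = "pending"

@dataclass
class TranslationProject:
    """A module of content being translated into multiple languages."""
    topic: str
    vendor: str
    due_date: str
    sub_projects: list = field(default_factory=list)

    def add_language(self, language):
        self.sub_projects.append(SubProject(language))

    def status_summary(self):
        # One status per target language, the view a manager tracks.
        return {sp.language: sp.status for sp in self.sub_projects}

proj = TranslationProject("Installation Guide", "AcmeLingua", "2016-04-01")
for lang in ("French", "German", "Chinese"):
    proj.add_language(lang)
print(proj.status_summary())
```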

The package can also integrate with translation vendors to automate content delivery to and from Vasont. It consolidates the status information for all your translation projects in one place so you can keep your projects on schedule and lower your costs.

Saturday, October 24, 2015

Humanizing Big Data with Alteryx

In my last post, I described the Teradata Unified Data Architecture™ product for big data. In today's post, I will describe Teradata partner Alteryx, which provides innovative technology that can help you get the maximum business value from your analytics using the Teradata Unified Data Architecture™.

Companies can extract the highest value from big data by combining all relevant data sources in their analysis. Alteryx makes it easy to create workflows that combine and blend data from relevant sources, bringing new and ad hoc sources of data into the Teradata Unified Data Architecture™ for rapid analysis. Analysts can collect data within this environment using connectors and SQL-H interfaces for optimal processing.

Create Business Analytics in an Easy-to-Use Workflow Environment

Using the design canvas and step-by-step, workflow-based environment of Alteryx, you can create analytics and analytic applications. With a single click, you can put those applications and answers to critical business questions in the hands of those who need them most. And when business conditions and underlying data change, Alteryx helps you iterate your analytic applications quickly and easily, without waiting for an IT organization or expensive statistical specialists.

Base Your Decisions on the Foresight of Accessible Predictive Analytics

Alteryx helps you make critical business decisions based on forward-looking, predictive analytics rather than past performance or simple guesswork. By embedding predictive analytics tools based on the R open source statistical language or any of the in-database analytic capabilities, Alteryx makes powerful statistical techniques accessible to everyone in your organization through a simple drag-and-drop interface.

Understand Where and Why Things Happen: Location Matters

Whether you are building a hyper-local marketing and merchandizing strategy or trying to understand the value of social media investments, location matters. Traditionally, this type of insight has been in the hands of a few geo specialists focused on mapping and trade areas. With Alteryx, you can put location-specific intelligence in the hands of every decision maker.

With the rise of location-enabled devices such as smart phones and tablets, consumer and business interactions increasingly include a location data-point. This makes spatial analysis more critical than ever before. Alteryx provides powerful geospatial and location intelligence tools as part of any analytic workflow. You can visualize where events are taking place and make location-specific decisions.

Alteryx can push custom spatial queries into the Teradata Database to leverage its processing power and eliminate data movement. You can enrich your spatial data within the Teradata system using any or all of these functions provided by Alteryx:
  • geocoding of data;
  • drive-time analytics;
  • trade area creation;
  • spatial and demographic analysis;
  • spatial and predictive analysis;
  • mapping.
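Several of these functions reduce to distance math over coordinates. For instance, a circular trade area can be approximated with the haversine great-circle distance; a standalone Python sketch (the radius-based model is a deliberate simplification of real trade-area creation, which typically uses drive times and road networks):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

def trade_area(store, customers, radius_km):
    """Return the customers inside a simple circular trade area."""
    return [c for c in customers
            if haversine_km(store[0], store[1], c[0], c[1]) <= radius_km]

store_location = (40.7128, -74.0060)              # New York
customers = [(40.73, -74.00), (34.05, -118.24)]   # nearby vs. Los Angeles
print(trade_area(store_location, customers, 25))  # only the nearby customer
```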
Alteryx simplifies the previously complex tasks of predictive and spatial analytics, so every employee in your organization can make critical business decisions based on real, verifiable facts.

Deliver the Right Data for the Right Question to the Right Person

To answer today’s complex business questions, you need to access your sources of insight in a single environment. That is why Alteryx allows you to bring together data from virtually any data source, whether structured, unstructured, or cloud data, into an analytic application. Using Alteryx, you can extend the reach of business insight by publishing applications that let your business users run in-database analytics and get fast answers to their pressing business questions.

Teradata and Alteryx: Powerful Insights for Business Users

To exploit the opportunities of all their data, organizations need flexible data architectures as well as sophisticated analytic tools. Analysts need to rapidly gather, make sense of, and derive insights from all the relevant data to make faster, more accurate strategic decisions. But given the variety of potential data sources, it is difficult for any single tool to be most effective at capturing, storing, and exploring data. Using the Teradata Unified Data Architecture™ with Alteryx enables you to explore data from multiple sources and to deploy the insights derived from that data.

You can create sophisticated analytics, taking advantage of new, multi-structured data sources to deliver the most ROI. The combined solution:
  • integrates and addresses both structured and emerging multi-structured data;
  • leverages the Teradata Integrated Data Warehouse, Teradata Aster Discovery platform, and Hadoop to optimal advantage;
  • creates both in-database and cross-platform analytics quickly without requiring specialized SQL, MapReduce or R programming skills;
  • lets you combine the capabilities of the Alteryx environment with routines developed in other analytical tools within a single analytical workflow;
  • easily deploys analytics to the appropriate users beyond the analyst community.
The combined solution of Alteryx and the Teradata Unified Data Architecture™ provides an IT-friendly environment that supports the need to analyze data found inside and outside the data warehouse. Analysts and business users can leverage powerful engines to create and execute integrated applications. This kind of analysis is only possible with an environment that can bring together routines created by separate tools and running on different platforms.

Enhancing the Teradata Unified Data Architecture™ with the speed and agility of Alteryx creates a powerful environment for traditional and self-service analytics using integrated data and massively parallel processing platforms. It delivers:
  • a complete solution for the full life-cycle of strategic and big data analytics, from transforming, enriching and loading data to designing analytic workflows and putting easy-to-use analytic applications in the hands of business users;
  • improved ability to manage and extract value from structured and multi-structured data;
  • ability for business analysts to create data labs and perform predictive and spatial analytics on the Teradata data warehouse and Teradata Aster discovery platforms;
  • faster analytical processing within applications using in-database analytics in Teradata and SQL MapReduce functions in Teradata Aster.
The Alteryx solution helps customers with the Teradata Unified Data Architecture™ achieve these benefits by providing the following:
  • robust set of analytical functions;
  • access to a rich catalog of horizontal and industry-specific analytic applications in the Alteryx Analytics Gallery;
  • syndicated household, demographic, firmographic, map and Census data to enrich existing sources;
  • native data integration and in-database analytical support for Teradata data warehouse and Teradata Aster capabilities;
  • ability to leverage Teradata SQL-H™ for accessing Hadoop data from Aster or Teradata Database platforms.
Use Case: Predicting and Preventing Customer Churn

Problem

A global communication service provider wants to prevent customer churn by identifying at-risk customers and making special offers that profitably reduce the likelihood of their leaving. To do this, they need predictive analytics.

Solution

Teradata and Alteryx deliver an end-to-end analytic workflow, from data consumption and analysis to application deployment. Alteryx integrates and loads call detail records from diverse sources, along with customer data from the Teradata warehouse, into the Aster database to create a complete, rich data set for iterative analysis. You can run iterative discovery analysis to determine the key indicators behind customer churn and loyalty. These key indicators are captured as repeatable applications that enrich the data warehouse with churn and loyalty scores. In addition, the discovery analysis is captured and deployed to business users as a parameterized application for further iterative analysis.
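The scoring step can be pictured as a simple logistic model over customer behavior features. A toy Python sketch (the weights and feature names are invented for illustration; a real workflow would fit them from call detail records using Alteryx's R-based predictive tools):

```python
from math import exp

# Illustrative weights only, not fitted from real data.
WEIGHTS = {"dropped_calls": 0.8, "support_tickets": 0.5, "tenure_years": -0.6}
BIAS = -1.0

def churn_score(customer):
    """Logistic score in [0, 1]; higher means greater churn risk."""
    z = BIAS + sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)
    return 1.0 / (1.0 + exp(-z))

def at_risk(customers, threshold=0.5):
    """Flag customers whose churn score exceeds the threshold, so
    retention offers can be targeted before they leave."""
    return [c["id"] for c in customers if churn_score(c) > threshold]

customers = [
    {"id": "C1", "dropped_calls": 5, "support_tickets": 3, "tenure_years": 1},
    {"id": "C2", "dropped_calls": 0, "support_tickets": 0, "tenure_years": 6},
]
print(at_risk(customers))  # ['C1']
```

Writing the resulting scores back to the warehouse is what lets downstream applications act on them, which is the "enrich the data warehouse" step described above.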

Key Solution Components
  • Aster Discovery platform for deep analytics and segmentation;
  • Teradata data warehouse to operate and deploy insights and enriched data across the enterprise;
  • Alteryx for the user workflow engine to orchestrate data blending and analytics.
Benefits
  • identify key customers that are likely churn candidates;
  • determine problem spots on the network (cell sites, network elements) that are driving churn;
  • discover other key reasons for churn (performance, competitive offers);
  • discover which offers have prevented churn for similar customers in the past;
  • identify which offers will work and evaluate a least-cost offer to prevent churn;
  • make retention offers that keep customers from leaving;
  • gain a deeper understanding of customer behavior.