Saturday, December 31, 2016

Search in the Land of Information Silos

Information access and retrieval within most organizations is a work in progress. There might be a general search system for marketing information, and probably one or more database search systems.

The larger the organization, the greater the number of information retrieval systems. Each laptop and mobile device has a search system. Mobile phone apps sport their own search systems. The lawyers in an organization may have different search systems for specific types of legal matters. The enterprise resource planning (ERP) users have a search system. When it comes to enterprise search, there are many silos.

A “silo” is a content collection available only to certain users. Given the reality of silos, providing access to “all” information may be an impractical idea. “All” may not mean all, or even most, of the available information. Big data is easy to talk about but difficult to make accessible. The same challenge exists for images, audio recordings, and engineering drawings whose details are hidden in a proprietary system’s database.

Search that is variously called universal, unified, or federated search is one solution to the challenge of information silos. The term meta-search is often used to describe an integrating function that passes the user’s query across discrete content indexes and returns a single results list to the user. Endeca, Inxight Software, Northern Light, Sagemaker, and Vivisimo are search applications that can be used for universal, unified, or federated search in an organization.

The initial query might not unlock the information stored in the system’s index. Facets, topics, and suggestions make it easy for the user to click through the links without having to craft additional queries.

Behind the curtains, federated search requires some maintenance. A user does not want to know the file format in which the information he or she needs is stored. The user wants answers. Early federating systems such as WAIS relied on standards for content representation. Today, however, there are many “standards,” and content processing systems must be able to handle the hundreds of formats found in organizations.
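
To make the federating function concrete, here is a minimal Python sketch of the fan-out-and-merge idea. The silo names, connector functions, and relevance scores are entirely hypothetical; a real federating layer would also handle security trimming, format conversion, and de-duplication.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-silo connectors; each returns (title, score) tuples.
def search_crm(query):
    return [("CRM: contract renewal for " + query, 0.8)]

def search_file_share(query):
    return [("File share: " + query + " specification.docx", 0.6)]

def search_wiki(query):
    return [("Wiki: onboarding notes on " + query, 0.7)]

SILOS = [search_crm, search_file_share, search_wiki]

def federated_search(query):
    """Fan the query out to every silo index, then merge into one list."""
    with ThreadPoolExecutor() as pool:
        result_sets = list(pool.map(lambda silo: silo(query), SILOS))
    merged = [hit for hits in result_sets for hit in hits]
    # Present a single results list to the user, best matches first.
    return sorted(merged, key=lambda hit: hit[1], reverse=True)

results = federated_search("Acme")
for title, score in results:
    print(score, title)
```

The user sees one ranked list and never needs to know which silo, or which file format, produced each hit.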

It is important to deliver a system that makes an organization’s disparate types of digital content available.

There are barriers to unified, federated or integrated search.

Some digital content cannot be included in a general purpose search system for security, business or legal reasons. Technical content such as chemical structure information at a pharmaceutical company requires special purpose systems. The same need applies to product manufacturing data, legal information and engineering drawings.

Most search applications exclude video streams from the index. If video is indexed, the system processes the text included in the digital file or indexing provided by the video owner.

The cost of creating connectors to connect with certain content types could be too high, or license fees could be required to gain access to the file formats.

The computational burden required to process certain types of content might exceed the organization’s ability to fund the content processing. Big data, for example, requires a computing capability able to handle the Twitter stream, RSS feeds and telemetry data from tracking devices. Cost could be prohibitive for processing all content types.

The most important challenge is the need for confidentiality. The legal department does not want unauthorized access to information related to a legal matter out of its control.

Some government contracts require that information related to certain types of government work be kept separate. Common sense dictates that plans for a new product and its pricing remain protected. If someone needs access to that information, a separate search system may be used to ensure confidentiality.

Even in the absence of business or legal requirements, some professionals do not want to share content. That may be a management problem. When a manager locks up information in a no-access silo, a software script will skip the flagged server.

To summarize, silos of information are challenging for organizations to process and use effectively. In the enterprise, integration should take place across silos of content.

Galaxy Consulting has 17 years of experience integrating information silos using universal, unified, or federated search. We have experience with many search applications. Contact us for a free, no-obligation consultation!

Wednesday, November 30, 2016

Three Values of Big Data

Big Data is everywhere. But to harness its potential, organizations should understand the challenges that come with collecting and analyzing Big Data. 

The three values that are important in managing big data are volume, velocity, and variety. These three factors serve as guidance for Big Data management, highlighting what businesses should look for in solutions.

But even as organizations have started to get a handle on these three V’s, two other V’s, veracity and value, are just as important, if not more so.

Volume is the ability to ingest, process, and store very large data sets. The definition of "very large" varies by business and depends on the particular circumstances of the business problem, as well as the volumes that business has handled before.

Volume can also be defined as the number of rows, or the number of events that are happening in the real world that are getting captured in some way, a row at a time. Accordingly, the more rows that you have, the bigger the data set is going to be.

Bigger Volumes, Higher Velocities

In today’s digital age, having huge volumes of data is hardly rare. The proliferation of mobile devices ensures that companies can gather more data on consumers than ever before, and the rise of the Internet of Things will only increase this plethora of data. Moreover, businesses will have even more information on customers as they begin to use one-on-one messaging channels to interact directly with them.

The sheer volume of data available to us is greater than ever before. In fact, in many ways, nearly every human action can be quantified and logged in a bank of data that’s growing at an incredibly fast rate. All of this data can be turned into actionable insights that drive business decisions and can help transform every customer interaction, create operational efficiency, and more.

This increase in data volume is paired with a simultaneous increase in speed: both the volume and the rate at which it grows are rising. These increases have forced IT staff to spend more time figuring out how to process and analyze that data.

Velocity is the key V of the three V’s. For example, a customer will visit a company’s site or use its mobile application but only for a short amount of time. The business may have just seconds to gather customer information and deliver a relevant response based on that information, usually just one message or offer.

This quick turnaround requires you to process all of that real-time behavioral data as fast as possible. If you only learn the day after that your customer was on your Web site, you can no longer reach them. One aspect of a successful customer journey is being able to send the right message at the right time to the right customer. Timeliness and relevancy are the foundation of delivering personalized customer experiences in real time.

A Variety of Formats

Data sets come in a variety of formats, and the number of data types continues to grow. Radio-frequency identification (the use of electromagnetic fields to gather information from tags attached to objects), smart metering (devices that monitor information on energy consumption for billing purposes), and the ubiquity of mobile devices with geo-location capabilities are only a few examples of diverse sources of consumer information.

All of these technologies have their own methods of capturing and publishing data, which adds to the complexity of the information environment.

But overcoming these data complexities could be well worth it. Having a large variety of data is crucial for creating a holistic customer view. Access to data such as a customer’s purchasing history, personal preferences based on social media postings, exercising habits, caloric intake, and time spent in the car can help companies understand that customer on a deeper level, and thus build experiences that are tailored to that customer.

But this diversity of data sources can be a blessing and a curse. A blessing because businesses have an increasingly large range of channels from which to pull customer information, but a curse because it can be difficult to filter through that information to find the most valuable content.

Variety is a little overstated in discussions of Big Data. Audio and video are examples of data formats that can be particularly difficult to analyze. Usually, companies try to come up with an intermediate representation of that data, and then apply old or new algorithms to that representation to extract signals, whatever the definition of signal is for the business problem they are trying to solve.
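
As a toy illustration of that intermediate-representation step, the Python sketch below reduces a raw stream of samples to a few summary features and then applies a simple rule over those features. The features, threshold, and notion of "signal" are all hypothetical stand-ins for whatever a given business problem requires.

```python
# Toy "intermediate representation": reduce a raw sample stream to a few
# summary features, then run a simple rule over those features.
def to_features(samples):
    return {
        "mean": sum(samples) / len(samples),
        "peak": max(samples),
        "n": len(samples),
    }

def has_signal(features, peak_threshold=0.9):
    # The "signal" definition is whatever matters to the business problem;
    # here it is simply an unusually high peak.
    return features["peak"] >= peak_threshold

quiet = to_features([0.1, 0.2, 0.15, 0.1])
spike = to_features([0.1, 0.95, 0.2, 0.1])
print(has_signal(quiet), has_signal(spike))
```

The same pattern scales up: the algorithms change, but the work is always raw data in, compact representation out, rules or models on top.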

Volume, velocity, and variety are undoubtedly important to managing customer information. Companies should keep in mind other important aspects of big data if they want to make the most of it.

Data tools such as Apache Hadoop and Apache Spark have enabled new methods of data processing that were previously out of reach for most organizations. While the growing volume of data, the time needed to process it, and the sheer number of input sources pose challenges for businesses, all three can largely be addressed through technology.

New V's Emerge

Investment in Big Data has begun to stabilize and enter a maturity phase over the past year. It will take time for infrastructure and architectures to mature, and best practices should be developed and refined against these architectures.

Organizations should consider how to use Big Data to bring about specific outcomes, in other words, organizations should examine the challenges of Big Data from a business perspective as opposed to a technical one. A framework that incorporates the business-oriented characteristics of veracity and value can help enterprises harness Big Data to achieve specific goals.

Not all data is the same, but organizations may not be paying enough attention to changes within individual data sets. Contextualizing the structure of the data stream is essential. This includes determining whether it is regular and dependable or subject to change from record to record, or even with each individual transaction. Organizations need to determine how the nature and context of data content in all its forms, text, audio, or video, can be interpreted in a way that makes it useful for analytics.

This is where the veracity, or trustworthiness, of data comes in. Determining trustworthiness is particularly important for third-party data, which should pass through a set of edits and validation rules.

Veracity entails verifying that data is suitable for its intended purpose, and usable within a given analytic model. Organizations should use several measurements to determine the trustworthiness and usefulness of a given data set. Establishing the degree of confidence in data is crucial so that analytic outputs based on that data can be a stimulus for business change.

Important metrics for evaluating and cleaning up data records are:
  • completeness measurements, or the percentage of instances of recorded data versus all available data within a business ecosystem or market (or the percentage of missing fields within a data record);
  • uniqueness measurements, or the percentage of alternate or duplicate data records;
  • accessibility measurements, or the number of business processes and personnel that can benefit from access to specific data, or that can actually access that data;
  • relevancy measurements, or the number of business processes that utilize or could benefit from specific data;
  • scarcity measurements, or the probability that other organizations, including competitors and partners, have access to the same data (the scarcer the data, the more impact it has).
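
The first two metrics above are straightforward to compute. Here is a minimal Python sketch scoring a batch of records for completeness and uniqueness; the record fields are hypothetical.

```python
# Hypothetical customer records; None marks a missing field.
records = [
    {"id": 1, "email": "a@example.com", "phone": "555-0100"},
    {"id": 2, "email": None,            "phone": "555-0101"},
    {"id": 3, "email": "a@example.com", "phone": "555-0100"},  # duplicate of id 1
]

def completeness(records, fields):
    """Percentage of non-missing values across the given fields."""
    total = len(records) * len(fields)
    filled = sum(1 for r in records for f in fields if r.get(f) is not None)
    return 100.0 * filled / total

def uniqueness(records, fields):
    """Percentage of records whose key fields are not duplicated."""
    keys = [tuple(r.get(f) for f in fields) for r in records]
    unique = sum(1 for k in keys if keys.count(k) == 1)
    return 100.0 * unique / len(records)

print(completeness(records, ["email", "phone"]))  # 5 of 6 fields are filled
print(uniqueness(records, ["email", "phone"]))    # 1 of 3 records is unique
```

Tracking scores like these over time gives a concrete basis for the degree of confidence placed in a data set.
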

Value is Paramount

While veracity can’t be overlooked, value is the most important factor. The first three V’s really concern architecture, infrastructure, and the representation of data: things that are important to IT organizations and far less interesting to business stakeholders.

The business stakeholders really don’t care about the first three; they only care about the value they can extract from the data. Executives often expect the analytical teams at their organizations to hide the first three V’s (volume, velocity, and variety) and only deliver the last V, the value that is fundamental to the success of the business.

The concept of value is essential for organizations to succeed in monetizing their data assets. Value is a property that helps identify the purpose, scenario, or business outcomes that analytic solutions seek to address. It helps to confirm what questions are to be answered and what actions will be taken as a result, and defines what benefits are anticipated from collecting and analyzing the data.

Value is a motivating force when it comes to developing new and innovative ideas that can be tested by exploring data in different ways.

The ability to pull valuable information from Big Data and use that information to build a holistic view of the customer is absolutely critical. It’s no longer just an option to develop one-to-one relationships with customers; it’s a requirement. And to build that relationship, companies have to leverage all the customer information they can to personalize every interaction with them.

By using such information to lead customers on a personal journey, companies can help ensure that customers will stay with them long term, and even become brand advocates. Value is derived from making the data actionable. Organizations can have all the information about a customer, but it’s what they can do with it that drives value for the business.

The Three V’s model of volume, velocity, and variety is useful for organizations that are just beginning to take control of their data, and certainly should not be forgotten by enterprises that have advanced further in their management of customer information.

The first three V’s are equally important. In the digital age, companies have accumulated more data than ever before, are pulling data from a variety of sources, and are increasing the rate at which that data flows. A combination of these three factors can help organizations create relevant, personal, one-on-one customer interactions.

Deriving value is the ultimate business goal for any enterprise. The standard Three V’s model does not satisfactorily identify any data properties from a business usage perspective. Even though Big Data, and data in general, provides organizations with a lot of capabilities, the challenge for businesses is to make sure that they adapt how they think about the business processes, how they report on them, and how they define key performance indicators.

Organizations should try to get to the value. They need to turn data into value by figuring out how to use it to optimize business processes. In the end, the Three V’s model for Big Data is a useful starting point. But then it becomes about the ultimate goal, the one organizations must not lose sight of: driving value.

Galaxy Consulting has 17 years of experience in big data management. We are at the forefront of driving value from big data.

Friday, November 11, 2016

Voice Search

Roughly 56% of teenagers and 41% of adults use voice search on their mobile phones every day, according to Northstar Research. 

For example, modern consumers in Boston are much more likely to ask Google Now, Siri, Cortana, or Amazon’s Alexa to find the nearest coffee shop than they are to type "coffee shops near Boylston Street in Boston" into a search bar on Google's homepage.

This truly creates a challenge for search engine providers and for the providers of those personal assistants. But as consumers increasingly turn to voice search on their mobile phones, those Boston coffee shops will now have to rethink their search engine optimization (SEO) strategy if they hope to show up in voice search results.

Search is a science, and the rules are different for text and voice search.

It will only become more vital that business data on a company web site such as store locations, hours, and contact information is accurate and up to date. Businesses will also need to make sure that they are portrayed accurately on local review sites like Yelp.

Companies also need to consider how and where consumers are conducting their voice-based searches. When using computer-based search, it's assumed that a user is at a computer, so there is more screen space and more time to search. When using a mobile device, it's assumed a user is out, time is short, and the user needs access to quick bits of information on a small screen.

Therefore web sites need to be designed so that they dynamically adjust to fit any screen the consumer is using.

Google and Bing use mobile-friendliness as a ranking factor in their SEO algorithms. Both Microsoft and Google offer tools to help companies determine whether their sites are mobile-friendly. The tools look at factors such as loading speed, the width of page content, the readability of text on the page, the spacing of links and other elements on the page, and the use of plug-ins.

When it comes to voice search, web content that delivers the answers consumers want, in the quickest way possible, will ultimately win. The information should be concise and to the point, with more of an emphasis on usefulness than visual appeal.

Web content should be presented in more of a natural, conversational style and structured more like FAQs, answering the questions consumers might pose in voice search queries without requiring them to click on additional links or take other actions. Voice searches might be initiated in the car while someone is driving.

Companies also need to consider how consumers ask for information through voice search. More often than not, voice search queries are phrased using the same types of who, what, when, where, how, and why questions that are part of natural conversations. During these conversational, natural language search queries, consumers do not typically use the same keywords or metadata that are the hallmarks of text-based searches. Using basic keywords to set SEO parameters alone is no longer enough.

Use long-tail keywords. Rather than relying on a single word or phrase, long-tail keywords involve multiple keyword phrases that are very specific to whatever the company is selling.
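
To see why long-tail phrases map better onto conversational queries, here is a small Python sketch that scores keyword phrases against a voice query by token overlap. The phrases and the scoring are purely illustrative; they are not how any particular search engine ranks results.

```python
def tokenize(text):
    """Lowercase, strip question marks, split into a set of words."""
    return set(text.lower().replace("?", "").split())

# Hypothetical keyword phrases a coffee shop might target.
short_keyword = "coffee"
long_tail = "coffee shop near Boylston Street Boston open now"

def overlap_score(query, phrase):
    """Fraction of query tokens covered by the keyword phrase."""
    q, p = tokenize(query), tokenize(phrase)
    return len(q & p) / len(q)

voice_query = "where is a coffee shop near Boylston Street in Boston"
print(overlap_score(voice_query, short_keyword))
print(overlap_score(voice_query, long_tail))  # far better coverage
```

The long-tail phrase covers most of the natural-language question, while the single keyword barely touches it.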

Companies need to teach their systems and the search engines a very specialized lexicon that corresponds to their product and service names.

SEO rules keep changing and so SEO strategy needs to change accordingly.

Galaxy Consulting has many years of experience with search. Contact us for a free consultation.

Sunday, October 2, 2016

Content Localization

Content is the fuel of your organization. Content spills into all areas of an organization and fuels absolutely everything with which customers interact, whether it's a commerce web site, marketing or sales materials, or any other channels.

Preferences for content vary by continent. Companies looking to expand operations overseas will inevitably face cultural and language barriers. They will have to think about how content is resonating with global audiences and they will face the challenge of determining which content to adjust to meet the needs of local markets.

To help ensure a quality customer experience, the top global brands employ a localization strategy to adapt their online content for regional specificity. Content should clearly resonate with companies' international audiences.

Content localization is the process of modifying content to make it usable for a new locale. Often, this includes translating the content from the source language into the language used in the locale. It is an integral part of adapting a product for a particular market. For example, if a company is currently selling products in the US and is expanding into France, the company would need to translate its content into French.

Content translation and localization are not the same. Translation is the process of changing an original language of content into a different language. Content localization is a more specialized process of adapting content for regional or local consumption. It goes beyond translation to modify the source language and other site elements to appeal to the customer’s cultural preferences in their own target language. Localization is about refining your message and curating your brand to meet the cultural, functional, and language expectations of your global markets.

Why localize content? There are a few reasons:
  • cater to local customers - show your local customers that you care about them and bring more value to your products;
  • reduce risk - avoid liability by not using words that might be offensive in a different country;
  • enhance marketing - a great campaign or advertisement is useless if your target audience doesn’t speak English well enough to understand your marketing message;
  • increase sales - localized content sells more because consumers can fully understand what they are buying.
Content localization includes other processes in addition to translation. For example, U.S. and French currencies are different, so you might need to change references from dollars to euros. It is possible to localize information without translating it such as changing examples, currency, geographically specific information, and perhaps color schemes.
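
As a small illustration of these non-translation steps, the Python sketch below formats the same price and date for a US and a French locale using a hand-rolled mapping. The conventions in the table are simplified examples; real projects should rely on a localization library and full locale data rather than tables like these.

```python
from datetime import date

# Simplified, hand-rolled locale conventions (illustrative only).
LOCALES = {
    "en_US": {"currency": "$", "decimal": ".", "date": "{m}/{d}/{y}"},
    "fr_FR": {"currency": "€", "decimal": ",", "date": "{d}/{m}/{y}"},
}

def localize_price(amount, locale):
    conv = LOCALES[locale]
    text = "{:.2f}".format(amount).replace(".", conv["decimal"])
    # US puts the symbol first; French convention trails the amount.
    return conv["currency"] + text if locale == "en_US" else text + " " + conv["currency"]

def localize_date(d, locale):
    return LOCALES[locale]["date"].format(d=d.day, m=d.month, y=d.year)

print(localize_price(1299.5, "en_US"))            # $1299.50
print(localize_price(1299.5, "fr_FR"))            # 1299,50 €
print(localize_date(date(2016, 10, 2), "en_US"))  # 10/2/2016
print(localize_date(date(2016, 10, 2), "fr_FR"))  # 2/10/2016
```

The same numbers and dates read naturally in each locale without a word of text being translated.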

These are standard best practices that help to minimize the cost of localization:

Content Development:
  • source text should use clear, simple, concise language;
  • do not use jargon, idioms, metaphors, or other creative language;
  • use simple words;
  • use simple language structure;
  • use consistent terminology;
  • limit text in graphics, and put text into an editable layer;
  • use legends rather than embedded text for graphics;
  • use predictable structure;
  • use styles or structured authoring;
  • do not use embedded formatting.

Include a language flag on content so the system can deliver content to readers in the language they want.

Translation:
  • use professional translators;
  • use translation memory systems;
  • work with localization professionals to identify potential problems;
  • look for opportunities to change source content instead of requiring changes in each target language.
When creating graphics, make sure that they are culturally neutral and acceptable for a global audience. These are the best practices for graphics:
  • use color carefully because color meanings vary across cultures;
  • images of people are not appropriate in some cultures, and in other cultures, images of women are not acceptable;
  • images of hands or feet are not appropriate in some cultures;
  • when designing icons, make sure that they are universally understood.
Some examples of cultural content include:
  • colors, shapes, sizes, styles;
  • images, icons, graphics;
  • societal codes, e.g., humor, etiquette, rituals, myths, symbols;
  • societal values, power, relationships, beliefs.
Some examples of functional content include:
  • date and time formats, telephone numbers, contact information;
  • weights, measurements, geographical references;
  • language and linguistic content; product descriptions, reviews.
Content translation and localization also differ on a tactical level. While simple translation may be appropriate for some content types in certain markets, localization is most often required for adapting highly emotive, creative marketing content so that it clearly resonates across locales.

There are several content types, from marketing collateral to legal and technical information and user-generated forum content. For reasons of efficiency and cost, it is best practice to map these content types to the most appropriate translation or localization methods.

Not all of a company's content needs to be translated. Companies should make it a priority to determine which content should be tailored for various regions. For example, blogs and tweets are not as significant as marketing and training materials, online help sheets, business forms, support email, and FAQs.

It is generally easier to select the best fit when you consider your audience and the content’s source and intent. Other parameters include volume, update frequency, content lifecycle, and budgetary considerations. Depending on your language service provider’s (LSP) capabilities, there are several methods from which to choose. When making these decisions, it’s best to consult an experienced LSP that offers a wide range of services.

Following these best practices will help to reduce the overall cost of localization. Translators very often use translation memory tools, and following these practices helps those tools recognize repeated patterns.

Companies can try to discover which locales and languages provide the greatest return on investment. Identify languages that make up 90% of global online economic opportunities.

Be sure to follow industry best practices to reduce complexity, speed time-to-market, control costs and ensure quality localized content for all of your global markets. Companies will need to adopt stronger content strategies, keep closer tabs on the ways in which customers are consuming content, and take into consideration consumer preferences based on geographical region and digital trends.

Achieving global excellence means improving local experiences and getting better insights into those markets.

Wednesday, July 27, 2016

Navigating Big Data

Big Data is an ever-evolving term used to describe vast amounts of unstructured data. Published reports have indicated that 90% of the world’s data was created during the past two years alone.

Whether it’s coming from social media sites such as Twitter, Instagram, or Facebook, or from countless other Web sites, mobile devices, laptops, or desktops, data is being generated at an astonishing rate. Making use of Big Data has gone from a desire to a necessity. The business demands require its use.

Big Data can serve organizations in many ways. Ironically, though, with such a wealth of information at a company's disposal, the possibilities border on the limitless, and that can be a problem. Data is not going to automatically bend to a company's will. On the contrary, it has the potential to stir up organizations from within if not used correctly. If a company doesn't set some ground rules and figure out how to choose the appropriate data to work with, as well as how to make it align with the organization's goals, it's unlikely to get anything worthy out of it.

There are three layers of Big Data analytics, two of which lead to insights. The first and most basic is descriptive analytics, which simply summarizes the state of a situation. Descriptive analytics can be presented in the form of dashboards; they tell a person what's going on, but they don't predict what will happen as a result. Predictive analytics forecast what will likely happen, while prescriptive analytics guide users to action. Predictive and prescriptive analytics provide insights.

Presenting the analytics on a clean, readable user interface is vital but is sometimes ignored. Users get frustrated when they see content that they can't decipher. A canned dashboard does not work for users. They need to know what action they have to take. Users demand a sophisticated alert engine that tells them, in context, what actions to take.

Using such analytics, ZestFinance was able to glean this insight: those who failed to properly use uppercase and lowercase letters while filling out loan applications were more likely to default on them later on. Knowing this helped them identify a way to improve on traditional underwriting methods, pushing them to incorporate updated models that took this correlation into consideration. As a result, the company was able to reduce the loan default rate by 40% and increase market share by 25%.

Unfortunately, insights have a shelf life. They must be interpretable, relevant, and novel. Once an insight has been incorporated into a strategy, it's no longer an insight, and the benefits it generates will cease to make a noticeable difference over time.

Getting the Right Data

To get the right data leading to truly beneficial insights, a company must employ a sophisticated plan for its collection. Having a business case around the usage of data is the first important step. A company should figure out what goals it would like to meet, how and why data is crucial to reaching them, and how this effort can help increase revenue and decrease costs.

Data relevance is the key, and what is important to a company is determined by the problems it is trying to solve. There is data that is useful and data that is not. It is important to distinguish between the two and weed out the data that is not useful. Collecting more than what is useful and needed is impractical.

Often, data accumulates before a set of goals has been outlined by stakeholders. It is collected irrespective of any specific problem, question, or purpose. Data warehouses and processing tools such as Hadoop, NoSQL, InfoGrid, Impala, and Storm make it especially easy for companies to quickly amass large amounts of data. Companies are also at liberty to add third-party data sources to enrich the profiles they already have, from companies such as Dun & Bradstreet. Unfortunately, most of the data is inevitably irrelevant. The key is to find data that pertains to the problem.

Big Data is nothing if not available, and it takes minimal effort to collect it. But it will not be of use to anyone if it’s not molded to meet the particular demands of those using it. Some people are under the impression that they are going to get a lot of information simply from having data. But businesses don’t really need Big Data - information and insight are what they need. While a vast amount of data might be floating around in the physical and digital universes, the information it contains may be considerably less substantial.

While it might seem advisable to collect as much information as possible, some of that information just might not be relevant. Relevant insights, on the other hand, allow companies to act on information and create beneficial changes.

It is a good idea to set parameters for data collection by identifying the right sources early on. It could be a combination of internal and external data sources. Determine some metrics that you monitor on an ongoing basis. Having the key performance indicators (KPIs) in place will help companies identify the right data sources, the types of data sources that can help solve their problems.

Technology plays a key role in harnessing Big Data. Companies should figure out what kinds of technology make sense for them. The choice of technology should be based on the company's requirements.

Data collection is an ongoing process that can be adjusted over time. As the business needs change, newer data sources are integrated, and newer business groups or lines of businesses are brought in as stakeholders, the dynamics and qualities of data collection will change. So this needs to be treated not as a one-time initiative, but as an ongoing program in which you continually enrich and enhance your data quality.

Companies should continually monitor the success of their data usage and implementation to ensure they're getting what they need out of it. There should be a constant feedback stream so that a company knows where it stands in relation to certain key metrics it has outlined.


Companies must always be aware of the risks involved in using data. Companies shouldn't use prescriptive analytics when there is significant room for error. It takes good judgment, of course, to determine when the payoffs outweigh the potential risks. Unfortunately, it's not always possible to get a prescriptive read on a situation. There are certain limitations. For one thing, collecting hard data from the future is impossible.

People and Processes

Big Data adoption often becomes a change management issue, and companies often steer clear of it for that reason. When a company implements something that's more data-driven, there's a lot of resistance to it.

Like most initiatives that propose technology as a central asset, Big Data adoption can create conflicts among the various departments of an organization. People struggle to accept ownership of data, yet they also aren't willing to give it up. To avoid such clashes, companies should make it clear from the outset which department owns the data. Putting that owner in charge of the data, and having this person or department outline the business rules and how they should be applied to customers, helps to overcome this issue.

These are two good tips to follow: Give credit where credit is due and don't dehumanize the job. Don’t attribute the success to the data, but to the person who does something with the data. Remember that change can't just come from the top down. Big Data adoption requires more than executive support. It needs buy-in from everyone.

Saturday, July 23, 2016

Successful Change Management in Content and Knowledge Management

It is becoming less and less common to find paper documents in filing cabinets or electronic documents on shared network drives. In the majority of organizations, filing cabinets and shared network drives have been replaced by content management systems, knowledge base applications, and collaboration tools.

At a certain point, it's inevitable that organizations have to make adjustments to keep up with the times. Users must constantly adapt to the tools of an evolving world. After all, if customers are using advanced technology, it makes sense that companies should be interacting with them using tools that are up to date as well.

If technology adoption is to have an effect on an organization, users' commitment becomes a required element. But getting that kind of cooperation is not always a simple task. Users might not immediately take to the new processes without some resistance.

Though it's counter-intuitive that anyone would resist technology designed to make their job easier, resistance is an unavoidable element of content and knowledge management initiatives. Organizations should anticipate a number of challenges and do their best to ease their users' resistance through transition and change management.

Drawing from our 16 years of experience successfully managing user adoption and change management in content and knowledge management initiatives, these are our guidelines for organizations to overcome challenges in these areas.

1. Communicate the Goals

There may be myriad practical reasons for why the change in how your organization manages its content needs to be put in place. Before proposing any major change, establish clear reasons for why the change is being proposed, and how it is going to enhance users' experience.

For users to understand how technology is going to help them, they need to understand what their future will look like with this technology in place. Put simply, if you can't articulate the benefits of making the change, you have no business making it.

It is very important to create a consistent narrative that instills confidence in users, and to choose carefully the language you use to deliver that narrative. Avoid using the term "change management". Employees hear "change management" as "Whatever you have done until now is wrong, and now we are going to put you on the right track." That is not a good message.

You may want to use the term "cause management" which attributes any need for adjustment within a company to a cause. Under this approach, organizations would make an effort to craft a story that communicates the idea that this is the outcome that will best benefit the company.

Highlighting what is not going to change can be a source of encouragement for users. This way you are introducing consistency while asking users to evolve.

2. Fear of Change is not Necessarily Fear of Technology

Technology itself is not usually the reason that employees are resistant to change. People are becoming less resistant to using technology. Problems begin to surface when employees are not given enough notice about what technology they are expected to use.

Even before organizations have settled on a particular technology, they should give their employees an outline of the problems they are trying to fix. This gives employees the opportunity to provide input and make suggestions about what types of processes they would like to see streamlined and how they envision their ideal work environment. Though organizations might not always have the budget for what employees have in mind, they will at least be involving them and making them feel as though they are part of the equation from the outset.

Also important is that workers are given the time to develop the kinds of skills necessary to make full use of technology. It takes employees some conditioning to see how new technology and procedures can be of aid to them. If you can be proactive about teaching people these new skills and how to use the technology in small segments, this definitely can accelerate the change.

3. New Technology Can Bruise the Ego

All employees are proud of their work. They like to feel as though they possess an innate talent, and that there's a reason they're doing what they've chosen to dedicate so much of their time to. Regardless of age or experience level, certain natural emotions might come into play when companies propose changes. If employees are led to believe that so much of what they spent a great deal of time mastering can be transferred to anyone with ease, they might resent it on an emotional level that they might not even share.

Thus, it would be a good idea to communicate how the technology is going to help them work together and be more connected.

4. Technology is not Only for Managers

It goes without saying that technology should never force people to do more work than they are already doing. If you force people to use a system that makes their jobs worse, they are going to do everything they can to avoid it.

Employees should never feel as though technology is being deployed solely for the benefit of the managers. Granted, a content management system provides managers with more visibility into work processes, but the central message managers should be sending is that the technology is there to help employees do their jobs better.

It is helpful to illustrate that higher management is using the technology as well, for the sake of driving home the idea that the technology is being universally adopted by the organization.

5. Deploy Gradually

When it comes to deploying the systems that employees are going to be using regularly over an extended period of time, it is a good idea to steer clear of an abrupt implementation in favor of a more gradual one.

Use pilot periods. During these periods, a small subset of the company is selected to test the technology and share their experiences with the others. Keeping employees updated via email, meetings, or through other internal communication channels can be helpful, as it also lets people know what to expect. Likewise, getting user testimonials and videos in which those who have piloted the product attest to its benefits could prove useful.

However, it's important to be all-inclusive when deciding who is going to be participating in such trial periods. While it might be tempting to recruit the most enthusiastic and vocal representatives of a company to test the materials, it might be a better idea to go for a mix to act as testers. Use a subset of users that will represent those who are ultimately going to be expected to use the new technology. Of course, asking volunteers to step forward is advised, but testers should also be drawn from a segment of those who are not as keen on trying it.

Including people who are not technology experts is a good idea, because it helps drive home the point that anyone can use the solution effectively. It also reinforces the idea that there will be support and training opportunities available.

If the right group of people is selected for the pilot program, they can generate excitement about the system and show how the program has helped them do their jobs.

One small factor to keep in mind about the pilot period, however, is the capacity of the system. Since the system will eventually host far more users, the experience that the small subset reports might differ from the one that is waiting further down the line. For example, a system that works fine with ten users on it may not work as quickly when there are 200,000 users connected to it. You need to be able to account for things like that.

6. Maintain the Change

Change management is not as simple as preparing employees for the transition that is about to be introduced. It has just as much to do with ensuring that employees don't revert to outdated and inefficient methods as it does with ensuring that people begin to use it. Managing resistance is a process, not a series of events.

Because it's a process, managers should be very careful to communicate the fact that the improvements might not come all at once, but rather in small increments. Incentives can also act as fruitful aids in encouraging adoption. For this very reason, gamification applications have been gaining popularity because they allow employees to compete against one another and display to the rest of the company how well they have done by showing off their achievements.

It is important to build employees' confidence and a positive environment. Set specific event days to encourage the use of the new technology. Typically held once a month, these are known as blitz days. The idea is to set aside a time period during which everybody is expected to use the technology in a fun environment. At the end of the day, the users share their results. The goal is to say that if this can be done on one particular day, why can't it be done every day? Over time, the benefits of these events could be substantial.

Change is ongoing. As time goes on, the window for change for technology is becoming much narrower than it used to be, with updates occurring far more frequently. For some people, it might seem that just as they are getting used to one change, another one is on the way. Organizations need to create an infrastructure that better supports that.

Characteristics for Driving Change
  • Be Outwardly Focused - avoid being locked into one area of the company. Look for ways to make an impact across the organization.
  • Be Persuasive - be clever and persuasive enough to gain the support of users.
  • Be Persistent - do not give up. Constantly work through the channels of the organization to ensure that new systems and processes are factored into the organization's way of working.

Sunday, June 26, 2016

Better Business Operations with Better Data

Businesses today understand that data is an important enterprise asset, relied on by employees to deliver on customers' needs, make business decisions, and much more.

Yet too few organizations realize that addressing data quality is necessary to improve customer satisfaction. A recent Forrester survey shows that fewer than 20% of companies see data management as a factor in improving customer relationships. This is a very troubling number.

Not paying attention to data quality can have a big impact both on companies and the customers they serve. Following are just two examples:

Garbage in/garbage out erodes customer satisfaction. Customer service agents need to have the right data about their customers, their purchases, and prior service history presented to them at the right point in the service cycle to deliver answers. When their tool sets pull data from low-quality data sources, decision quality suffers, leading to significant rework and customer frustration.

Lack of trust in data has a negative impact on employee productivity. Employees begin to question the validity of underlying data when data inconsistencies and quality issues are left unchecked. This means employees will often ask a customer to validate product, service, and customer data during an interaction, which makes the interaction less personal, increases call times, and instills in the customer a lack of trust in the company.

The bottom line: high-quality customer data is required to support every point in the customer journey and ultimately deliver the best possible customer experience to increase loyalty and revenue. So how can organizations most effectively manage their data quality?

While content management systems (CMS) can play a role in this process, they can't solve the data-quality issue by themselves. A common challenge in organizations in their content management initiatives is the inability to obtain a complete trusted view of the content. To get started on the data-quality journey, consider this five-step process:

1. Don't view poor data quality as a disease. Instead, it is often a symptom of broken processes. Using data-quality solutions to fix data without addressing changes in a CMS will yield limited results. CMS users will find a work-around and create other data-quality issues. Balance new data-quality services with user experience testing to fix any business processes that are causing data-quality issues.

2. Be specific about bad data's impact on business effectiveness. Business stakeholders have plenty of data-quality frustrations. Often, they will describe poor data as "missing," "inaccurate," or "duplicate" data. Step beyond these adjectives to find out why these data-quality issues affect business processes and engagement with customers. These stories provide the foundation for business cases, highlight what data to focus on, and show how to prioritize data-quality efforts.

3. Scope the data-quality problem. Many data-quality programs begin with a broad profiling of data conditions. Get ahead of bottom-up approaches that are disconnected from CMS processes. Assess data conditions in the context of business processes to determine the size of the issue in terms of bad data and its impact at each decision point or step in a business process. This links data closely to business-process efficiency and effectiveness, often measured through key performance indicators in operations and at executive levels.
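A first profiling pass of this kind can be quite simple: count the "missing" and "duplicate" symptoms stakeholders report, per field, over a batch of records. The record layout and field names below are hypothetical:

```python
# Hypothetical sketch: profile a batch of customer records for the
# two data-quality symptoms stakeholders usually name first --
# missing values and duplicate keys. Field names are illustrative.

def profile(records, key_field):
    """Return counts of empty values per field and duplicate keys."""
    missing = {}                  # field -> count of empty values
    seen, duplicates = set(), 0
    for rec in records:
        for field, value in rec.items():
            if value in (None, ""):
                missing[field] = missing.get(field, 0) + 1
        key = rec.get(key_field)
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"missing": missing, "duplicate_keys": duplicates}

records = [
    {"id": "C1", "email": "a@x.com", "phone": ""},
    {"id": "C2", "email": None,      "phone": "555-0100"},
    {"id": "C1", "email": "a@x.com", "phone": ""},  # duplicate id
]
report = profile(records, key_field="id")
```

Numbers like these become meaningful once tied to a business process, for example, how many service calls touch a record with a missing phone number.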

4. Pick the business process to support. For every business process supported by CMS, different data and customer views can be created and used. Use the scoping analysis to educate CMS stakeholders on business processes most affected and the dependencies between processes on commonly used data. Include business executives in the discussion as a way to get commitment and a decision on where to start.

5. Define recognizable success by improving data quality. Data-quality efforts are a key component of data governance that should be treated as a sustainable program, not a technology project. The goal is always to achieve better business outcomes. Identify qualitative and quantitative factors that demonstrate business success and operational success. Take a snapshot of today's CMS and data-quality conditions and continuously monitor and assess them over time. This will validate efforts as effective and create a platform to expand data-quality programs and maintain ongoing support from business stakeholders and executives.

Galaxy Consulting has over 16 years experience helping organizations to make the best use of their data and improve it. Please contact us today for a free consultation!

Sunday, June 12, 2016

Hadoop Adoption

True to its iconic logo, Hadoop is still very much the elephant in the room. Many organizations have heard of it, yet relatively few can say they have a firm grasp of what the technology can do for their business, and even fewer have actually implemented it successfully.

Forrester Research predicted that Hadoop will become a cornerstone of the business technology agenda at most organizations.

Scalability, affordability, and flexibility make Hadoop uniquely suited to change the big data scene. An open-source software framework, Hadoop allows for the processing of big data sets across clusters on commodity hardware either on-premises or in the cloud.

At roughly one-thirtieth the cost of traditional data storage and processing, Hadoop makes it realistic and cost effective to analyze all data instead of just a data sample. Its open-source architecture enables data scientists and developers to build on top of it to form customized connectors or integrations.

Typically, data analysis requires some level of data preparation, such as data cleansing and eliminating errors, outside of traditional data warehouses. Once the data is prepared, it is transferred to a high-performance analytics tool, such as a Teradata data warehouse. With data stored in Hadoop, however, users can see "instant ROI" by moving the data workloads off of Teradata and running analytics right where the data resides.

Another use of Hadoop is live archiving. Instead of backing up data and storing it in a data recovery system, such as Iron Mountain, users can store everything in Hadoop and easily pull it up whenever necessary.

The greatest power of Hadoop lies in its ability to house and process data that couldn't be analyzed in the past due to its volume and unstructured form. Hadoop can parse emails and other unstructured feedback to reveal similar insight.

The sheer volume of data that businesses can store on Hadoop changes the level of analytics and insight that users can expect. Because it allows users to analyze all data and not just a segment or sample, the results can better anticipate customer engagement. Hadoop is surpassing model analytics that can describe certain patterns and is now delivering full data set analytics that can predict future behavior.
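Hadoop batch jobs follow the MapReduce pattern: a map phase emits key-value pairs and a reduce phase aggregates them. The sketch below imitates that contract in plain Python (in Hadoop Streaming these steps would read stdin and write stdout across a cluster); the feedback lines are made up for illustration:

```python
# Sketch of the MapReduce pattern behind Hadoop batch jobs, written
# as plain Python functions over in-memory lists. Counting terms in
# unstructured feedback stands in for a real parsing job.

def mapper(lines):
    """Map phase: emit (word, 1) for every word in every line."""
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def reducer(pairs):
    """Reduce phase: sum the counts for each key."""
    totals = {}
    for key, value in pairs:
        totals[key] = totals.get(key, 0) + value
    return totals

feedback = ["shipping was slow", "slow support response", "great support"]
counts = reducer(mapper(feedback))
```

Because both phases stream over the whole data set at once, this illustrates why Hadoop runs in batch mode rather than answering in real time.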

There are a few challenges.

Hadoop's ability to process massive amounts of data, for example, is both a blessing and a curse. Because it's designed to handle large data loads relatively quickly, the system runs in batch mode, meaning it processes massive amounts of data at once, rather than looking at smaller segments in real time. As a result, the system often forces users to choose between quantity and quality. At this point in Hadoop's life cycle, the focus is more on enormous data size than high-performance analytics.

Because of the large size of the data sets fed into Hadoop, the number-crunching doesn't take place in real time. This is problematic because as the time between when you input the data and the time at which you have to make a decision based on that data grows, the effectiveness of that data decreases.

The biggest problem of all is that Hadoop's seeming boundlessness instills a proclivity for data exploration in those who use it. Relying on Hadoop to deliver all the answers without asking the right questions is inefficient.

As companies begin to recognize Hadoop's potential, demand is increasing, and vendors are actively developing solutions that promise to painlessly transfer data onto Hadoop, improve its processing performance, and operationalize data to make it more business-ready.

Big data integration vendor Talend, for example, offers solutions that help organizations transition their data onto Hadoop in high volume. The company works with more than 800 connectors that link up to other data systems and warehouses to "pull data out, put it into Hadoop, and transform it into a shape that you can run analytics on."

While solutions such as those offered by Talend make the Hadoop migration more manageable for companies, vendors such as MapR tackle the batch-processing lag. MapR developed a solution that enhances the Hadoop data platform to make it behave like enterprise storage. It enables Hadoop to be accessed as easily as network-attached storage is accessed through the network file system; this means faster data management and system administration without having to move any data.

Veteran data solution vendors such as Oracle are innovating as well, developing platforms that make Hadoop easier to use and to incorporate into existing data infrastructures. Its latest updates revolved around allowing users to store and analyze structured and unstructured data together and giving users a set of tools to visualize data and find data patterns or problems.

RapidMiner's approach to Hadoop has been to simplify it, eliminate the need for end users to code, and do for Hadoop analytics what WordPress did for website building. Once usable insights are collected, RapidMiner can connect the data platform to a marketing automation system or other digital experience management system to deploy campaigns or make changes based on data predictions.

Moving forward, analysts predict that leveraging Hadoop's potential will become a more attainable goal for companies. Because it's open-source, the possibilities are vast. Hadoop's ability to connect openly to other systems and solutions will increase adoption in the coming months and years.

Sunday, May 29, 2016

Oracle Knowledge Management

As customer expectations rise, delivering personalized experiences that improve customer loyalty, increase customer acquisition and optimize efficiency is increasingly more challenging to achieve. It is very important to engage customers in their preferred channel and to minimize the overall effort of that engagement.

A key to minimizing the customer effort is to deploy a knowledge management platform that crosses all channels, presents accurate content from multiple sources, maintains relevance, and captures feedback for continuous improvement. Oracle Knowledge is a complete knowledge management solution which provides personalized cross-channel service and support.

Knowledge Platform

Oracle Knowledge Platform is an integrated set of knowledge management capabilities including advanced natural language processing, search, flexible authoring and publishing, rich analytics, customizable self-service, and agent facing knowledge applications. Oracle Knowledge is built on a highly scalable J2EE architecture and on Oracle technologies including WebLogic, Oracle Data Integrator, and Oracle Business Intelligence.

Semantic Search

The Oracle Knowledge semantic search capabilities are built on the fundamental understanding of language. Core language dictionaries are available in 20 languages understanding everyday terminology. In addition, multilingual industry dictionaries are available for major industries including high tech, telecommunications, insurance, finance and automotive.

This core understanding of a user’s language is key to finding precise answers from multiple external sources including the knowledge base, web sites, file systems and other internal knowledge repositories. The most recent release of Oracle Knowledge continues to build on this foundation with widely expanded language and geography coverage, significantly increased performance and reduced footprint with faster search response times, faster content processing performance, and a reduced semantic index size, as well as learning-based search ranking for reduced incident handling time. These improved features deliver increased productivity and lower operation costs.
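One small, illustrative piece of what such a semantic layer does is dictionary-based query expansion: mapping a user's everyday terms onto the synonyms the content may use. The synonym table below is invented for illustration and is not Oracle's dictionary:

```python
# Hypothetical sketch of dictionary-based query expansion, one small
# part of what a semantic search layer does. The synonym table is
# illustrative only.

SYNONYMS = {
    "phone": {"handset", "mobile"},
    "broken": {"faulty", "defective"},
}

def expand(query):
    """Expand each query term with its known synonyms,
    so a search can match content that uses different wording."""
    expanded = set()
    for term in query.lower().split():
        expanded.add(term)
        expanded |= SYNONYMS.get(term, set())
    return expanded

terms = expand("broken phone")
```

A production semantic engine goes much further (morphology, industry vocabularies, intent), but the principle of bridging user language and content language is the same.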

Authoring, Publishing, and Workflow

Oracle Knowledge is designed to help companies to develop a knowledge base as an integral part of a user’s job. Contact center agents and customers create content as a by-product of solving support issues using a powerful, web-based, WYSIWYG rich text editor. Product experts and contact center agents can collaborate with other users and customers to refine or expand the knowledge base.

Advanced editing capabilities such as global find-and-replace and replacement tokens improve article accuracy while lowering operation and knowledge administration costs. Oracle Knowledge comes with valuable tools to manage the life cycle of articles. Customers can create their own article templates and metadata. The software tracks all revisions of articles and provides a detailed history. Articles may be routed for approval through the use of a workflow. Providing users with the ability to attach files to forum posts allows them to provide additional information to explain their issues. These capabilities improve self-service rates while expanding the knowledge base and the user community.

The user interface of the authoring system is available in 24 languages, but content can be created in nearly any language. Oracle Knowledge allows customers to manage the relationship of an article across different locales and languages, while providing authors with the ability to develop locale-specific content and metadata allowing fine-tuning of the customer experience.


Analytics

Dashboards are tailored to functional roles across the service organization. They deliver optimal value to company stakeholders by providing relevant insights at a glance to reduce operational costs, increase employee productivity, and strengthen customer relationships. With the configurable custom KPI wizard for creating KPIs with thresholds and triggers, organizations can increase the efficiency of authoring content, increase answer relevancy, and improve overall insight into knowledge activity.

InfoCenter: Self-Service Knowledge

InfoCenter provides a knowledge portal for customers and employees with integrated browse and search functionality via a customizable user interface and knowledge widgets. InfoCenter surfaces the power of industry-based libraries, knowledge federation, and the natural language processing abilities of the platform to deliver the best possible, intent-based answer to customers. It transforms the self-service experience for customers by providing contextual and relevant answers to their questions.

iConnect: Agent Knowledge

iConnect provides a robust and scalable answer-delivery framework supporting the agent-facing service delivery model. The context-driven user interface simplifies and enhances the user experience and is tuned for increased performance. iConnect is available as an out-of-the-box integration into Oracle Service and Oracle Service Cloud. Open APIs allow for integration into most industry-standard CRM applications.

AnswerFlow: Guided Knowledge

AnswerFlow provides consistent service resolution for agents and customers through the prescriptive delivery of knowledge. AnswerFlow combines decision trees with external data, which leverages and increases the strategic value of the Knowledge Platform across self-service and assisted-service customer interaction channels. AnswerFlow enables organizations to create and deploy automated interactive processes that guide users toward appropriate answers or solutions in cases where:
  • answers are conditional, and can vary based on factors such as account status, location, or specific product or model;
  • diagnosis is complex, and identifying the best response among many possible answers involves asking detailed questions and eliminating alternatives.
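A guided flow of this kind is essentially a decision tree walked one question at a time until a leaf answer is reached. The tree below is a made-up example, not Oracle's actual data model:

```python
# Hypothetical decision tree for guided resolution. Internal nodes
# ask a question; leaves (strings) are answers. Not Oracle's model.

TREE = {
    "question": "Is the device under warranty?",
    "yes": {"question": "Does it power on?",
            "yes": "Run on-device diagnostics.",
            "no": "Arrange a warranty replacement."},
    "no": "Offer paid repair options.",
}

def resolve(tree, answers):
    """Walk the tree using a sequence of 'yes'/'no' answers
    until a leaf (a string) is reached."""
    node = tree
    answers = iter(answers)
    while isinstance(node, dict):
        node = node[next(answers)]
    return node

solution = resolve(TREE, ["yes", "no"])
```

Conditional factors such as account status or product model would simply become further questions, or lookups against external data, at the internal nodes.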

Galaxy Consulting has over 16 years experience in many knowledge base applications. We can help you to deploy Oracle Knowledge Management. Contact us today for a free consultation!

Wednesday, May 11, 2016

Use Knowledge Management to Increase CRM Value

Today's post focuses on how knowledge management adds an essential layer of value to customer facing systems, which ultimately drives improved customer experiences.

Two sweeping trends have emerged in recent years: the proliferation of customer channels and the resulting explosion in the amount of data produced. 

Customer relationship management has done a great job of providing a strong framework for these multi-channel interactions. Knowledge management (KM) has done an equally remarkable job by providing the brains behind the increasingly diverse network of customer contact points.

With the explosion of data from many different types of sources in the past several years, the tight integration of KM and CRM systems has become even more essential to offering customers and the agents who serve them the concise and timely information they need.

KM and CRM have a long history of ultimately serving the same goals of quickly and efficiently providing customers with information, whether it is through Web self-service, a call center agent, kiosk or mobile application.

CRM and KM Synergy

CRM applications are systems of record that manage customer data. Knowledge Management (KM) systems, in the context of customer engagement, enable businesses to systematically capture knowledge from subject matter experts within the enterprise, and social knowledge from online communities, social networks, partners, etc. for use by customer-facing organizations and end-customers.

When integrated, KM helps expand the business value of CRM, delivering transformational benefits in enhanced customer experiences, contact center productivity, and improved customer acquisition, among other things. KM systems are also able to leverage existing content management systems by adding a layer of findability and know-how for content-enabled process automation.

How it Works

There are many use cases of how CRM and KM work in tandem to deliver business value. A common one is the customer contact center, where a knowledge solution is often used in conjunction with CRM.

When customers call, agents use a CRM to open a case, enter the problem description, and click on a “solve” button. This, in turn, invokes a resolution path, for example, a set of search paths to find the right answer or next steps. Agents get to the resolution using the path of their choice, “accept” the resolution, communicate it to the customer, and close the case. The interaction, including the path to the answer and the knowledge base article that was used to solve the problem or sell a product, is recorded in both the CRM and KM systems.
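In outline, that interaction loop is: open a case in the CRM, search the knowledge base, accept a resolution, and record the interaction in both systems. The sketch below models that loop; the field and function names are invented for illustration:

```python
# Hypothetical sketch of the CRM/KM interaction loop. Field names,
# article IDs, and function names are invented for illustration.

knowledge_base = {"KB-101": "Reset the router, then re-pair the device."}
crm_cases, km_usage_log = {}, []

def open_case(case_id, problem):
    """Agent opens a case in the CRM with the problem description."""
    crm_cases[case_id] = {"problem": problem, "status": "open"}

def solve(case_id, article_id):
    """Agent accepts a KB article as the resolution; the interaction
    is recorded in both the CRM and the KM usage log."""
    resolution = knowledge_base[article_id]
    crm_cases[case_id].update(status="closed", resolution=resolution)
    km_usage_log.append({"case": case_id, "article": article_id})
    return resolution

open_case("CASE-1", "Device will not connect")
answer = solve("CASE-1", "KB-101")
```

Recording which article closed which case is what lets the KM side learn which content actually resolves problems.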

Business Value

Many companies worldwide leverage the combined power of knowledge and CRM to drive business value. Adopting best practices can help make the business case, implement knowledge, and manage it for sustained business value. Here are some examples:
  • Premier home appliance manufacturer: $50M in savings by eliminating unwarranted truck rolls through knowledge-powered resolution processes in the contact center and website.
  • Semiconductor giant: 59% increase in web self-service adoption, 30% increase in First Contact Resolution.
  • Global knowledge and legal services solutions provider: 70% deflection of calls and emails through knowledge-powered self-service, 30% reduction in content authoring time.
  • Leading telco provider: 42% reduction in unwarranted handset returns through knowledge-powered resolution process in the contact center.
  • Global bank: 88% reduction in agent training time and 70% increase in productivity through knowledge-powered account opening process in small business sector.
Quantify Value

Assessing expected and realized ROI before and after the deployment will help you justify the initial investment as well as the continuous improvement of the CRM-KM solution. Make sure the ROI metrics you use are aligned with business objectives. For instance, if your main business goal is to increase sales, reduction in average handle time will be a conflicting metric. As you assess ROI, keep in mind that KM delivers ROI across a broad range of business problems. Some examples are:
  • deflection of requests for agent-assisted service through effective self-service;
  • increase in first contact resolution and sales conversion;
  • reduction in escalations, transfers, repeat calls, and average handle times;
  • reduction in training time, unwarranted product returns, field visits, and staff wage premiums.
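As an illustration of how one of these metrics, self-service deflection, might be quantified: the figures below are made-up inputs for the example, not benchmarks.

```python
def self_service_deflection_savings(monthly_contacts, deflection_rate,
                                    cost_per_agent_contact, cost_per_self_service):
    """Annual savings from deflecting agent-assisted contacts to self-service."""
    deflected = monthly_contacts * deflection_rate
    monthly_savings = deflected * (cost_per_agent_contact - cost_per_self_service)
    return 12 * monthly_savings

# Example: 100,000 contacts/month, 30% deflected, $6.00 per agent-assisted
# contact vs. $0.50 per self-service session.
print(self_service_deflection_savings(100_000, 0.30, 6.00, 0.50))  # 1980000.0
```

Even a rough model like this, kept alongside the actual contact-center numbers, makes it much easier to tie the KM investment back to a business objective.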
Start with Depth

Unfocused deployments almost always result in a shallow knowledge base that is full of gaps. If agents and customers can’t find answers, or receive inadequate or wrong information, they simply stop using the system. Focus first on depth rather than breadth. Start with common questions on common products or lines of business and expand out over time.

Knowledge-Centered Support (KCS)

Best practice frameworks have emerged in knowledge management over time. For example, Knowledge-Centered Support (KCS) is a comprehensive methodology that helps improve speed of resolution, optimize resources, and foster organizational learning. Adopting a framework like KCS is a win for customers, contact center agents, and the organization alike.

Maximize Findability

Users prefer different ways of searching for information, just as drivers prefer different ways of reaching their destination. A GPS-style approach with multiple options to find information dramatically improves knowledge base adoption. For example, new agents may find it difficult to wade through hundreds of keyword search results, but might fare better if they are guided through a step-by-step dialog, powered by technologies like Case-Based Reasoning (CBR).

Multiple search options such as FAQ, keyword and natural language search, topic-tree browsing, and guided help enable a broad range of users to quickly and easily find information. Make sure you leverage a unified multi-channel knowledge platform for consistent answers across customer touchpoints.
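The idea of multiple routes to the same knowledge base can be sketched in a few lines. This toy example offers two of the options named above, keyword search and a guided, topic-based dialog (a highly simplified stand-in for Case-Based Reasoning); the articles, topics, and function names are all invented.

```python
# Toy knowledge base: each article has searchable text and a topic tag.
ARTICLES = {
    "KB-1": {"text": "reset your password from the login page", "topic": "account"},
    "KB-2": {"text": "router blinking red light troubleshooting", "topic": "hardware"},
}

def keyword_search(query: str) -> list:
    """Rank articles by how many query words appear in their text."""
    words = query.lower().split()
    hits = [(sum(w in a["text"] for w in words), aid)
            for aid, a in ARTICLES.items()]
    return [aid for score, aid in sorted(hits, reverse=True) if score > 0]

def guided_help(answers: dict) -> list:
    """Narrow to articles matching the topic chosen in a step-by-step dialog."""
    return [aid for aid, a in ARTICLES.items() if a["topic"] == answers["topic"]]

print(keyword_search("reset password"))    # ['KB-1']
print(guided_help({"topic": "hardware"}))  # ['KB-2']
```

Both paths land on the same underlying articles, which is the essence of the unified multi-channel platform: one knowledge base, many ways in.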

Implementing these best practices, while making sure that the KM and CRM solutions are tightly integrated, will help you deliver a transformational customer service experience while generating breakthrough value for the business!

Galaxy Consulting has 16 years of experience in this area. We can help you integrate your knowledge base with your CRM. Contact us today for a free consultation!

Saturday, April 30, 2016

Data and Knowledge

Our view of data often doesn't extend further than numbers. When we think about data, we picture a percentage, a total, or whatever those numbers are attached to, and we want to act on those numbers with familiar math.

We don't act on data; we act on information, and we only act on information when it creates knowledge in our minds that enables us to make informed decisions. We might call this customer insight, but in reality it is a chain: data, information, and knowledge.

Too often, we lose sight of the need to combine multiple data points into information that creates useful knowledge. This leads us to treat managing data as an end goal, when the primary objective should be asking how we can make something valuable out of it.

Knowledge is a property of the human mind, so you might consider it information in motion. Knowledge is what enables people to complete their work. A person's name is data; information might include additional data such as job title and company; knowledge is that information extended by an understanding of the person's objectives for the year ahead. We sometimes use these terms interchangeably.

The role of customer service in organizations has probably never been as important or as difficult as it is now. Competition has spread globally, product life cycles have been reduced, and customization has become more common. The end result is that products and services have become quite complex, and, in response, firms have generated mountains of documents outlining how their products operate as well as policies and procedures detailing how to support them. Companies are trying to consolidate that information and present relevant data to users on an as-needed basis.

The idea of managing knowledge is both vital and perplexing. It is perplexing because knowledge has always been something that we make out of information. How does one manage knowledge in any systematic way? Knowledge management is the grouping of tools, technologies, and processes that constantly and consistently make the right information available to decision-makers.

When it comes to data, organizations continue to struggle with two conflicting goals. On one hand, they want to collect and consolidate information to streamline their operations. On the other hand, data repositories often sprout up in an ad hoc fashion, so it becomes difficult, and in some cases impossible to make sense of an organization's millions and even billions of records.

To solve this problem, organizations first have to uncover the whereabouts of all of their data, which usually is scattered randomly throughout the organization. Next they need to determine how to integrate their various information sources. Finally, they have to find funding for the project. If the integration is achieved, which could be a multi-year process in large enterprises, the potential benefits are great: streamlined operations, lower service costs, and improved customer satisfaction.

Galaxy Consulting has 16 years of experience solving this problem. We have helped many organizations organize their knowledge and thus increase their efficiency and productivity, improve compliance, and reduce costs. We can do the same for you. Contact us today for a free, no-obligation consultation!

Saturday, April 23, 2016

Analytics for Big Data

Companies are just now beginning to harness the power of big data for the purposes of information security and fraud prevention.

Only 50% of companies currently use some form of analytics for fraud prevention, forensics, and network traffic analysis.

Less than 20% of companies use big data analytics to identify information, predict hardware failures, ensure data integrity, or check data classification, despite the fact that by doing so, companies are able to improve their balance of risk versus reward and be in a better position to predict potential risks and incidents.

Banks, insurance, and other financial institutions use big data analytics to support their core businesses. Large volumes of transactions are analyzed to detect fraudulent transactions and money laundering. These, in turn, are built into profiles that further enhance the analysis. Some insurance companies, for example, share and analyze insurance claims data to detect patterns that can point to the same fraudulent activities against multiple companies. Healthcare is another area in which data analysis can be used for information security.
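A crude version of the transaction analysis described above is outlier detection against an account's own history. This is only a toy sketch of the idea, not how any real fraud system works; the z-score threshold and sample amounts are illustrative.

```python
import statistics

def flag_outliers(amounts, z_threshold=3.0):
    """Flag transactions whose amount deviates strongly from the
    account's typical behavior (a profile-based check in miniature)."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if stdev and abs(a - mean) / stdev > z_threshold]

# Seven routine charges around $40, then one for $950.
history = [42.0, 38.5, 45.0, 40.0, 41.5, 39.0, 43.0, 950.0]
print(flag_outliers(history, z_threshold=2.0))  # [950.0]
```

Real systems layer many such signals (merchant, geography, velocity) into the per-account profiles the text mentions, so that each flagged transaction further refines the profile.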

Big data can arise from internal and external sources, spanning social media, blogs, video, GPS logs, mobile devices, email, voice, and network data. It's estimated that 90% of the data in the world today has been created in the past two years, and some 2.5 million terabytes of data are created every day.

Although many companies already use data warehousing, visualization, and other forms of analytics to tap into this high-volume data, using that data to prevent future attacks or breaches remains relatively uncharted territory. This is changing and will continue to do so as security increasingly moves from being a technical to a business issue.

To balance the business benefits of big data analytics with the cost of storage, organizations need to regularly review the data they are collecting, determine why and for how long they need it, and where and how they should store it.

The Human Element of the Big Data Equation

Because data volumes grow considerably every day, deciphering all the information requires both technology and people-driven processes. People often find patterns that a computer can pass over. Some other steps organizations can take to analyze big data for information security purposes include the following:
  • Identify the business issue;
  • construct a hypothesis to be tested;
  • select the relevant data sources and provide subject matter expertise about them;
  • determine the analyses to be performed;
  • interpret the results.
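The steps above can be sketched as a skeletal pipeline. Everything here is invented for illustration, including the business issue (after-hours logins as a misuse signal), the record format, and the function name; step five deliberately stays with the human analyst.

```python
# Skeletal pipeline following the steps above: business issue ->
# hypothesis -> data selection -> analysis -> interpretation.
def run_investigation(records):
    # 1-2. Issue and hypothesis: after-hours logins signal account misuse.
    hypothesis = lambda r: r["hour"] < 6 or r["hour"] > 22
    # 3. Select the relevant data source: authentication events only.
    logins = [r for r in records if r["event"] == "login"]
    # 4. Analysis: how often does the hypothesized pattern occur?
    suspicious = [r for r in logins if hypothesis(r)]
    rate = len(suspicious) / len(logins) if logins else 0.0
    # 5. Interpretation of the rate is left to the analyst.
    return {"suspicious": suspicious, "rate": rate}

logs = [{"event": "login", "hour": 9}, {"event": "login", "hour": 2},
        {"event": "logout", "hour": 17}]
print(run_investigation(logs)["rate"])  # 0.5
```

The structure matters more than the code: the technology narrows the data and computes the pattern, while the people-driven steps (framing the issue, forming the hypothesis, interpreting the result) bracket it on both ends.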
Most companies struggle to find value from their customer analytics efforts. Gaps in data management, integration, and quality are the biggest inhibitors to making better use of customer analytics: 54% of surveyed companies have difficulty managing and integrating data from many varied sources, while 50% are concerned about maintaining consistent data quality.

Companies also struggle with assembling the right type of analytics professionals, communicating the results of the analysis to relevant colleagues, performing real-time analytics and making insights available during customer interactions, protecting data and addressing privacy concerns, and keeping pace with the velocity of data generation.

While key drivers of adoption include increasing customer satisfaction, retention, and loyalty, analytics use skews largely toward acquisition of new customers. 90% of surveyed companies use analytics for this purpose.

Other factors driving the use of analytics include reacting to competitive pressures, reducing marketing budgets, and addressing regulatory issues.

The use of predictive analytics is a growing trend, with 40% of organizations using it, while 70% have been using descriptive analytics and business intelligence reporting for more than 10 years.

Organizations that have already mastered basic analytics methodologies and gained efficiencies in aggregate analysis are now looking to adopt advanced ways to do real-time, future-looking analysis.

Additionally, companies would like to start using social data as a viable source of customer analytics. This was cited as a long-term goal by 17% of the companies in the survey.

Organizations should look beyond social media for unstructured data. While many marketers have embraced social media as an effective way to engage customers, from an analytics standpoint, they have only scratched the surface in how other data sources, such as call center data and voice-of-the-customer data, can feed traditional customer analytics processes.

Analytics can also be used to improve customer engagement, yet engagement sits at the bottom of the list of metrics companies track. This is a missed opportunity for customer analytics practitioners to gain deeper insight into how individual customers interact with content, offers, and messaging across various touchpoints.

Customer analytics practitioners do a number of things right, including focusing on the right types of analytics and methodologies to achieve a basic understanding of who their customers are, their propensity to buy, how to target them effectively, and how best to experiment with content, features, and offers. Even so, companies still need to develop a holistic customer analytics framework.

Although individual customer analytics techniques answer specific business questions, they fail to deliver efficiency in generating insights at an aggregate level.

Organizations should look outside their own four walls and connect with partners who are knowledgeable in analytics technology, analytical services, and data mining to explore the next steps for customer analytics. It is not just about buying an analytics tool, it is also about employing the professional services to make sense of the data.

Galaxy Consulting has 16 years of experience in analytics. Contact us today for a free consultation and let's get started!