The Power of Data Analysis Using AI-driven Components

Holy cow, there’s a lot of data being produced these days! Every time someone uses a common generative AI tool (GPT, Midjourney, etc.), a new pile of data is born. Many think data is just data, so we spoke with Senior Data Scientist Jerry Thomas to learn more about data and how best to harness it to everyone’s advantage.

Welcome to Understanding Data 101

With the sudden fascination with artificial intelligence (AI) reaching what seems like a fever pitch, we need to remember that AI both feeds on and creates enormous amounts of data. Many organizations are now focusing on their data in order to turn it into something usable, monetizable, or marketable. Every organization collects it, so we had to ask: just how valuable is that data, really?

Before we get too far down the data trail, let’s start with some definitions and additional context so we’re all on the same page.

Three kinds of data

Let’s first define the three kinds of data involved.

Standard data

This is the information collected by website cookies, along with log-in credentials, preferences, and shopping data. Anyone who does anything on a smartphone, tablet, laptop, or desktop leaves behind enormous tranches of data when surfing the net, shopping online, or commenting on a social media post. This data is collected by the system in the back end, without most people even realizing it.

Advanced data

This is the data that machine learning uses to make a determination, provide an assessment, or deliver an analysis. It is generally collected when a user provides information in specific data fields, text boxes, or other online tools used to gather information on a person, place, or thing. Businesses use this data to discover and predict patterns and trends.

High-caliber data

At thinktum, we consider the behavioral data our system collects to be high-caliber data. Examples include how long it takes to answer a question or whether answers are changed. We have designed our system to log when people abandon the application process; if more than a few jump ship at the same spot, that usually indicates a problematic question.

How does data play with risk? Our article Of risk appetite and underwriting philosophy spells it out.

Once you’ve collected all that data, it’s time to analyze it. Data analytics converts raw data into something that is meaningful and actionable for the business. Using our example above of a problematic question, analysis of the user journey will flag that question, since that’s where people often quit the application process. This should prompt the firm to change, move, or remove that question in order to get more people through the application.

There are also different kinds of analysis: descriptive analysis scrutinizes data to describe what is going on, while predictive analysis forecasts future events based on current information.
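To make that concrete, here is a minimal, hypothetical sketch of the descriptive side: flagging the question where applicants most often abandon the journey. The column names and threshold are illustrative assumptions, not thinktum’s actual schema.

```python
import pandas as pd

# Hypothetical journey log: one row per applicant per question reached.
journey = pd.DataFrame({
    "applicant_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "question_id":  ["Q1", "Q2", "Q3", "Q1", "Q2", "Q1", "Q2", "Q3"],
    "answered":     [True, True, False, True, False, True, True, True],
})

# Descriptive analysis: abandonment rate per question (reached but never answered).
abandon_rate = 1 - journey.groupby("question_id")["answered"].mean()

# Flag any question whose drop-off exceeds an assumed review threshold.
THRESHOLD = 0.25
print("Questions to review:\n", abandon_rate[abandon_rate > THRESHOLD])
```

A predictive extension would train a model on those same journey records to forecast which applicants are likely to drop off before they actually do.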

Meet a Senior Data Scientist!

Jerry Thomas is all about data science. For those wondering, data scientists use algorithms, artificial intelligence, and machine learning to build, deploy, and monitor predictive models. In other words, they are responsible for turning data into usable information for the organization. They turn straw into gold.

The (real) truth about artificial intelligence can help you better understand how it works.

We asked Thomas to meet with us to find out more about AI, data, and how it’s analyzed into usable assets. Here’s what he had to say about standard data:

“The term standard data refers to the kind of data the applicant is told, via a disclosure statement, will be captured. So all the data is very straightforward: it’s contact information, personal information, and whatever answers the applicant gave us. It’s health information, their financial institution, or banking data.”

He then adds:

“Data provided by the applicant is just straightforwardly captured. That’s why it is called standard data: a lot of companies today collect it, and it’s a very common and uncomplicated procedure. It’s all the information the user or the applicant has willingly provided within the system. At the beginning of the application, they are given a disclosure statement to ensure they understand that the information will be gathered to assist in creating a more frictionless journey.”


AI components and data

It seems AI is injected a little bit everywhere nowadays, most of the time in a vague and undefined way. So what exactly does the pairing of AI components and data bring to the table here?

Artificial intelligence can analyze huge amounts of data from any source and deliver it back to the business as actions: recommending a product or service, determining a decision-recommendation, influencing product development, or providing insights into what is attractive or popular (and what isn’t) to clients or end-users. It helps guide the business toward better decisions.

Here’s Thomas:

“If you’re capturing health or healthcare information, you will be able to create some kind of a report or a graphical illustration where you see which age group has certain kinds of diseases, or income mapping by age group, or other financial details. Standard data is used to understand an audience better by creating subgroups of user profiles to better detect their wants and needs.”

Of course, as we’re all aware by now, there are real benefits to using AI components in data analysis, especially in the insurance industry, including:

  • Faster application process
  • More intuitive questioning
  • Harder to stretch the truth or not provide info
  • More accurate data for the business

AI is also responsible for converting more insurance business from applicant to policyholder because of that predictive analysis we just spoke about. AI can make a prediction regarding whether or not an applicant should be accepted, but that’s not all. AI will also minimize misrepresentation, non-disclosure, and fraud while optimizing the user journey, which leads to more accepted applications. Of course, that improves your revenue models and allows the business to meet or exceed their KPIs.

The algorithm doesn’t judge the quality of the data provided; it simply analyzes it, and this is an important distinction. Even as the machine-learning components continually analyze the data, and no matter how effective the algorithm is, the system can only provide a snapshot of what is currently occurring.

To put this in context, read How insurance carriers are modernizing their technology to meet KPI goals!

liz data to the rescue!  

thinktum’s liz data module has been developed to analyze data within the user journey. It pulls data from third-party sources when required and can auto-load previously submitted information into pertinent fields, cutting down on application time and increasing the level of personalization. But it can do so much more than that.

Here’s a look into what liz data does as part of this user journey:

  • Provides real-time data analysis by tracking the data and providing critical alerts.
  • Monitors each answer against any existing data, looking for anomalies (see the sketch after this list).
  • Tracks how long it takes to answer each question.
  • Alerts to any suspicious data or actions.
  • Organizes and packages the data for greater usability.
  • Cuts fraud and non-disclosure by half!
  • Provides an intuitive dashboard with new business trends (revenue), products, and risk trends.
  • Provides invaluable new product launch trend information.
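As referenced in the list above, here is a minimal, hypothetical sketch of what a per-answer anomaly check could look like. The field names, tolerance, and prior-data values are invented for illustration and are not a description of liz data’s internals.

```python
# Hypothetical data already on file (a prior application or a third-party source).
PRIOR_DATA = {"date_of_birth": "1985-03-14", "height_cm": 178, "smoker": False}

EXACT_FIELDS = {"date_of_birth", "smoker"}     # any mismatch is suspicious
NUMERIC_TOLERANCE = {"height_cm": 5}           # allowed drift, in centimetres

def check_answer(field, value):
    """Return a list of alerts for one freshly submitted answer."""
    alerts = []
    prior = PRIOR_DATA.get(field)
    if field in EXACT_FIELDS and prior is not None and prior != value:
        alerts.append(f"{field}: '{value}' contradicts the value on file")
    if field in NUMERIC_TOLERANCE and prior is not None:
        if abs(prior - value) > NUMERIC_TOLERANCE[field]:
            alerts.append(f"{field}: {value} differs from {prior} beyond tolerance")
    return alerts

print(check_answer("smoker", True))     # contradiction -> alert
print(check_answer("height_cm", 180))   # within tolerance -> no alert
```

In a real system, alerts like these would feed the dashboard and the fraud and non-disclosure checks mentioned above rather than being printed.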

liz data is the brains of the operation.

Here’s how Thomas explains it:

“Let’s say, for example, a user completes an application. The time they spend to complete it and to answer each question, or even whether they go back and change their responses: all of these small details are very relevant to their final outcome. The goal is to capture all the information that would help the organization. They would be able to backtrack their questions and see which part of the application is creating a lot of trouble for users or clients, or which flow on the application has technical issues. This can also help the development team. In that way, they can pinpoint certain designs or certain kinds of nodes that are not working as expected. So all these benefits come from the high-caliber data captured.”
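To ground what Thomas describes, here is a minimal, hypothetical sketch of turning those small details into per-applicant behavioral features: time spent on each question and how often answers are changed. The event names and fields are invented for illustration.

```python
from collections import defaultdict

# Hypothetical event stream emitted while an applicant works through the form:
# (applicant_id, question_id, event_type, timestamp_in_seconds)
events = [
    (1, "Q1", "shown", 0), (1, "Q1", "answered", 12),
    (1, "Q2", "shown", 12), (1, "Q2", "answered", 95), (1, "Q2", "changed", 130),
]

features = defaultdict(lambda: {"time_per_question": {}, "answer_changes": 0})
shown_at = {}

for applicant, question, event, ts in events:
    key = (applicant, question)
    if event == "shown":
        shown_at[key] = ts
    elif event == "answered" and key in shown_at:
        features[applicant]["time_per_question"][question] = ts - shown_at[key]
    elif event == "changed":
        features[applicant]["answer_changes"] += 1

# A long dwell time on Q2 plus a changed answer would flag that question for review.
print(dict(features))
```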

Most organizations, especially those in the direct-to-consumer (DTC) space, seem to believe that asking fewer questions is somehow better, and that a shorter application process is advantageous to their business. Sure, they can say their application can be completed in just a few minutes, but not asking for enough information can mean having to go back to the applicant for information or clarity, raising premium prices to cover the higher risk, or accepting as a business that an influx of early claims is coming. And that’s before considering that an applicant could be sold a policy or product that isn’t optimal for them or their situation, leading to denied claims and poor customer service (or worse) from both agents and organizations.

With liz data, we take a different approach. It’s not about the quantity of questions, but rather their quality. Our questions are not static but dynamic, meaning no two people will be given the exact same list of questions. Of course, many at the beginning of the application process may be identical, such as name, address, height, weight, occupation, or whether they use nicotine; but as each answer is entered, liz data is already responding to the answers provided and writing new and even more personalized questions. And with better-fitting questions, the likelihood of an applicant abandoning the process is considerably lower.
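As a rough illustration of what an answer-driven flow can look like, here is a minimal sketch in which follow-up questions are chosen from the answers given so far. The rules and question wording are invented; in practice a rule engine or machine-learning model would drive the branching.

```python
# Hypothetical follow-up rules: each inspects the answers so far and may
# queue an extra, more personalized question.
FOLLOW_UP_RULES = [
    (lambda a: a.get("uses_nicotine") is True,
     "How many years have you used nicotine products?"),
    (lambda a: a.get("occupation") == "pilot",
     "Roughly how many flight hours do you log per year?"),
    (lambda a: a.get("weight_kg", 0) > 100,
     "Has your weight changed by more than 10 kg in the last 12 months?"),
]

def next_questions(answers):
    """Return the follow-up questions triggered by the answers entered so far."""
    return [question for condition, question in FOLLOW_UP_RULES if condition(answers)]

answers = {"uses_nicotine": True, "occupation": "teacher", "weight_kg": 82}
print(next_questions(answers))  # only the nicotine follow-up fires for this applicant
```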

The data provided helps the business minimize uncertainty, and allows the organization more freedom to price products more accurately, which may result in lower premiums. More importantly, this data also allows the business to continue optimizing the user journey to remain well within corporate risk predictions. In fact, this can also be better reflected in many corporate profitability models.

With liz data, you avoid an unexpected wave of early claims because you stay within your range of assumptions about them. Asking more questions that are highly targeted and accurate provides the organization with more data in the end, which translates into better pricing.

How? Well, more targeted and better-written questions mean the firm takes on less risk. That can result in safer coverage and lower premiums for clients, as well as more capital for insurers to pay for staff, advisor commissions, and, well, technology.

With more targeted questions, the process becomes much more efficient for the applicant. It’s important to understand, however, that the level of personalization we provide isn’t just for the applicant. It’s mostly for you, the organization.

Hyper-personalization paves the road to success, and enables better organizations.

Let’s bring Thomas back in for his opinion.

“A disclosure statement is provided at the beginning of every client interaction and tells people what will be done with their data. So, thinktum always tries to limit themselves to the things which they have established they’re going to do. Because with data, the sky’s the limit. Generally, an underwriter looks at the information or the answers provided by the client and then maps it to various databases (healthcare and others). The liz suite then comes to a decision-recommendation. But this advanced or high-caliber data gives access to more information about the user.”

Meaning, if they’ve disclosed the information accurately, the system will be able to recommend a decision on their case, based on the relevant underwriting guidelines. This process leads to a faster decision, which does improve the experience for the applicant.

He adds:

“It’s the same scenario with insurance. Real-time data, combined with machine learning models in place, means you are able to do two things. First, you are able to update the flows based on what the applicants are answering in real time. The flow can get updated via machine learning rather than just being rule-based, if that’s how the flow was created. Machine learning can supplement and change the flows. And the second thing is that even before reaching the underwriting process or the final decision-recommendation models, the real-time data can give the business certain probabilities at each stage of the application. For instance, by about question 6 or so, you would either have a determination or be leaning toward a specific recommendation about the applicant. It gives the underwriter some indication of where their decision can or should go.”
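Here is a minimal, hypothetical sketch of that second idea: scoring the partial application after each answer so the business sees an evolving probability before underwriting. The model, features, and training data are invented for illustration; a production system would train on historical applications and decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history: three early answers (encoded numerically) and the final outcome.
# Columns: [uses_nicotine, age, hazardous_occupation]; 1 = application accepted.
X_history = np.array([[0, 25, 0], [1, 52, 1], [0, 40, 0], [1, 61, 1], [0, 33, 1], [1, 45, 0]])
y_history = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_history, y_history)

# Partial application so far, scored mid-journey.
partial_answers = np.array([[0, 38, 0]])
print("P(accepted) so far:", model.predict_proba(partial_answers)[0, 1])
```

In practice, a separate model could be trained for each stage of the flow, so the probability is re-estimated as every new answer arrives.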

It’s easy to see why data is deemed the new currency of business.

Behavioral data is just as important.

Up to now, we’ve really been discussing the ins and outs of how questions are written and answered, but haven’t really discussed how end-users complete the application, and that also tells a story.

Here’s why.

Structured data (offering predefined answer options rather than free text) allows end-users to express themselves better and maximizes honest disclosure. In return, the organization’s back office receives that structured data, and it is much easier for machine learning to provide accurate information instantly from that kind of data.

With behavioral data, the business can analyze the applicant’s behavior and correlate it with the input data for a multi-dimensional view: you can see both what they answered and how they answered. This helps create better segmentation and explains the reasoning behind specific phenomena and trends.
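As a rough sketch of that multi-dimensional idea, the example below joins hypothetical answer data with behavioral features and clusters applicants into segments. The feature names, the data, and the choice of k-means are assumptions made purely for illustration.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-applicant table: what they answered plus how they answered.
applicants = pd.DataFrame({
    "age":               [24, 55, 31, 47, 62, 29],
    "uses_nicotine":     [0, 1, 0, 1, 0, 0],
    "avg_seconds_per_q": [8, 40, 12, 55, 35, 9],   # behavioral
    "answer_changes":    [0, 3, 1, 4, 2, 0],       # behavioral
})

# Scale the features, then segment applicants into an assumed three groups.
scaled = StandardScaler().fit_transform(applicants)
applicants["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(applicants)
```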

Many other insurtechs say they focus on behavioral data, but that claim can be misleading. In reality, most organizations either want better analysis of behavioral data and can’t get there, or they try and don’t quite succeed, because they don’t understand that behavior the way thinktum does.

What’s next?

We can expect to start tracking additional data sources such as wearables, social media, and centralized medical and lifestyle data platforms. There are real advantages to incentivizing applicants to achieve and maintain better health and lifestyle habits. In fact, what sensor wearers receive is health guidance, not just advice. It also means that with the right behavior and lifestyle changes, it’s possible for a diabetic to manage their condition in such a way that it doesn’t impact their life expectancy. And that’s huge.

But we also need to discuss data security.

People’s health information and banking data are all part of the information collected. We must have the very highest standards of data security in place to ensure this private information never falls into the wrong hands.

Security is as important to us as the data we need to protect.

Here’s how Thomas explains it:

“So, in thinktum’s case, they make sure that all the data is private, and they take that seriously. The second thing is, the data collected or the machine learning models used are chosen in order to improve the client’s life in some way by using the liz suite. So with user data, it’s possible to provide end-users with better quality services or a faster journey, quicker approval times, and better products.

“For the insurance industry, it can be slightly different. But at the end of the day, for a data scientist, what’s most important is data privacy, keeping the data safe. In the end, the aim is to improve user experiences, and provide them with better quality services, all because of that data.”

The technology hasn’t yet advanced enough to allow machine learning to diagnose conditions and illnesses, but that day is probably coming. However, as computing power increases, additional security measures will be required. Machines will spit out recommendations instantly, which means no human is required to analyze the results, and with fewer (or zero) eyes on the actual data, that information is better protected.

So in the end…

There are plenty of common data misconceptions out there that we grapple with daily. ‘They’re just numbers’ is one we particularly like, or ‘you can analyze data just by using a spreadsheet’. Both are simply untrue.

Others believe that AI will somehow disappear. It’s wishful thinking, but alas, probably not happening. That box, now opened, will never close again. So what do we think is important to know about this new avalanche of data?

With data analysis, it’s never one-and-done. Systems must continuously analyze data to be of any use, and humans must create mechanisms for systems to self-analyze their components. Machines can’t work without humans, because we are responsible for auditing the information in a way that makes those systems better, and we understand where inaccuracies occur. Humans must keep participating in the emotional comprehension of data through specific audits to improve it, on an ongoing and indefinite basis.

For more information on data and how it will change everything about how we do business, read our related story, It’s not how with data, it’s why.

We’ll give Thomas the last word here.

“From the applicant’s point of view, as opposed to the organization’s, it was always thinktum’s belief that users care whether their data is safe and not being hacked, or worse. A lot of surveys have been done recently, and it has been established that people do care about their privacy. Data privacy is very, very important to them.

“At thinktum, they collect the data, but they also keep data privacy as a top priority and use the latest and most advanced protocols to ensure that information is safeguarded. They do not share the data externally or anywhere else. Additionally, all the data collected is disclosed to end-users, as are the security measures involved in taking proper care of this data. The models used don’t contain any personal or private information that can be backtracked to the end-user. Behavioral information is simply used to improve the user journey.”

Making the most of data is key. liz data has you covered (and your applicants too).

If you’d like to learn more about our liz suite, its liz data module, or how we can help revolutionize your insurance application, just reach out to us!

 

This article’s featured collaborator

Jerry Thomas, Senior Data Scientist