What we do

Conversational Commerce is a huge challenge but also a unique opportunity.

We focus on developing highly personalized, one-to-one shopping experiences by creating a unique virtual vendor powered by AI.

Our mission is to build the most effortless, customized, and curated shopping experience possible.

Depending on where the customer is in their journey, Levia's virtual vendor adapts to offer an efficient experience. Through an intuitive conversational user experience built on AI and dynamic quick replies, Levia's virtual vendor can guide the user through a fast conversion funnel.

Levia can handle any break in the user's conversational context. If the user switches from one context to another (i.e., the user changes their mind), our technology can detect the switch and respond accordingly, while retaining previous answers to provide a complete, "human-like" conversation.

 

The potential of conversational commerce in terms of experience & conversion is huge.

Our concerns as a team

Unintended bias

Just like any technology early in its evolution and application, there are a few things we want to watch out for at Levia.ai, and unintended bias is at the top of our list.

 

While unintended bias can come from many causes, two of the largest drivers are (i) bias in data and (ii) bias in training.

 

The most obvious cause of bias in data is lack of diversity in the data samples used to train the AI system.
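To make this concrete, here is a minimal, hypothetical sketch (toy data, not Levia's models) of how a lack of diversity in training data skews a naive popularity-based recommender: if one customer segment dominates the sample, that segment's preferences dominate every recommendation.

```python
from collections import Counter

def category_shares(purchases):
    # A naive popularity model: recommend each product category in
    # proportion to how often it appears in the training data.
    counts = Counter(purchases)
    total = sum(counts.values())
    return {cat: round(n / total, 2) for cat, n in counts.items()}

# Hypothetical training sample: 90% of records come from one customer
# segment, so its favorite category crowds out everything else.
skewed = ["sneakers"] * 90 + ["dresses"] * 5 + ["watches"] * 5
print(category_shares(skewed))
# → {'sneakers': 0.9, 'dresses': 0.05, 'watches': 0.05}
```

Sneakers dominate not because other products are unpopular overall, but because the sample under-represents the customers who buy them. Diversifying the data sample, not changing the algorithm, is what fixes this.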

 

Artificial intelligence is helping us enhance human decision-making.

 

Our partners at Levia.ai, e-commerce retailers, use AI to predict what consumers want and recommend new products to them.

 

The AI that is most used today is called narrow AI.

 

General AI, which is closer to human intelligence and can span a very broad range of decisions, emotions, and judgement, will not be here anytime soon.

Narrow AI is very good at specific tasks, but its narrow scope introduces limitations that make it prone to bias.


Another large driver, bias in training, can creep in through rushed and incomplete training of the algorithms.

 

For example, an AI chatbot designed to learn from conversations and become more intelligent can pick up politically incorrect language it is exposed to and start using it if it was not explicitly trained not to, as Microsoft learned with Tay.

 

Similarly, the potential use of AI in the criminal justice system is concerning because we do not yet know whether the training of those AI algorithms is being done correctly.

 

This is where what we call the Levia Bot coach becomes so important in the human-to-machine continuum.

 

Diversity in the teams working with AI also helps mitigate training bias. When only a small group works on a system's design and algorithms, the system becomes susceptible to the thinking of what could be like-minded individuals. Bringing in new team members with different skills, thinking, approaches, and backgrounds drives more holistic design.

 

One of our biggest lessons is that AI is best trained by diverse teams, which help identify the right questions for AI algorithms to solve.