HOW WE BUILT OUR AI SYSTEM MATCHING CONSULTANTS AGAINST ASSIGNMENTS


By: Maximillian Gustavsson, CTO

Posted on: 26-04-2023

Intro

We’re a tech and design consultancy, which means we’re continuously matching our specialists against job adverts, broker adverts and other requirement specifications. We go through a lot of these, sometimes up to 400 a week globally.


The manual process of matching candidates to assignments is not only laborious but also expensive in terms of cost per conversion. Although it is the primary task of recruitment and consultancy companies, it demands expertise to weigh factors beyond the tech stack, such as experience and personality. The success of a match relies heavily on specialised knowledge of the organisation, its goals, and the assignment at hand, which makes the work hard to scale.



Let's build an AI system!

Our head of sales, Stefanos, asked: “Why don’t we have ChatGPT do this for us?” The question was followed by a deep sigh from the development team. We had all been bombarded on Twitter by “ChatGPT experts” claiming you’re just a few “hacks” away from having ChatGPT manage your entire life.


Wait a minute...


Maybe there’s something to it. ChatGPT is powered by something called LLMs, or Large Language Models: machine learning models specialised in and trained for dealing with, well, language. Matching gigs is really a language processing task, even if until now it has been done with information stored in our heads.


There’s probably something out there


We had a look and found a few, all doing so-called keyword matching. That’s great for matching a tech stack or specific skills, but ineffective for finding the right consultant.
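To make the limitation concrete, here’s a minimal sketch of what keyword matching boils down to. The function and data are hypothetical illustrations, not any vendor’s actual implementation:

```python
# A minimal, hypothetical sketch of keyword matching: score a consultant by
# how many of the assignment's keywords appear verbatim in their profile.

def keyword_score(required: set[str], profile: set[str]) -> float:
    """Fraction of required keywords found verbatim in the consultant's profile."""
    if not required:
        return 0.0
    return len(required & profile) / len(required)

# A seasoned frontend developer who lists "vue" scores zero against a "react"
# assignment, even though the underlying experience transfers almost directly.
print(keyword_score({"react", "typescript"}, {"vue", "javascript"}))  # 0.0
```

A human instantly sees the overlap; the keyword matcher doesn’t. That gap is exactly where we wanted to do better.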


Trouble brewing

The EU has recently proposed a new regulation known as the “Artificial Intelligence Act.” This regulation introduces several new requirements and limitations on how AI systems can operate within the EU, with the aim of addressing potential risks and ensuring that AI technologies are used in a way that is safe, transparent, and fair.


The regulation defines four categories of AI systems:


  1. Unacceptable Risk – AI systems that pose an unacceptable risk and are prohibited in the EU. These systems include those that use subliminal techniques to manipulate human behaviour, exploit vulnerable groups, or create fake personas.

  2. High Risk – AI systems that are considered high risk and will be subject to strict requirements before they can be used in the EU. Examples of high-risk systems include AI used in critical infrastructure like energy or transportation, AI used in certain public services, and AI used in law enforcement.

  3. Limited Risk – AI systems with limited risk will not be subject to additional regulation beyond existing EU laws. This category includes most business AI applications that are not categorised as high-risk.

  4. Minimal Risk – AI systems with minimal risk are exempt from any additional regulations. These systems pose little or no danger to individuals or society as a whole.


The implications of the EU AI Act are significant for businesses operating in the EU, as it introduces new compliance requirements and limitations on how AI systems can be designed and used. For example, businesses that use high-risk AI systems will need to provide detailed documentation explaining how the system works, conduct regular testing, and ensure that human oversight is in place.


We want to match for more than specific skills, and here we run into trouble. Specifically, the issues we run into are:


  1. We would be classified as a high-risk system.

  2. We can’t have a black-boxed decision process.

Minimising risk

A question for you all: What gender makes the most successful hire?


If you ask Amazon’s (now scrapped) AI hiring system, it’s men.


So why is that? If the system is trained on biased, exclusionary data, it will make biased, exclusionary decisions.


Can we, with full confidence, make sure that the models we use do not have this bias?


No. So let’s exclude everything: name, picture, gender, ethnicity and so on. It’s irrelevant to the matching process, so it shouldn’t even be a parameter.
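In practice, that means stripping those attributes before a profile ever reaches the matching pipeline. A minimal sketch of the idea (the field names here are hypothetical):

```python
# A minimal, hypothetical sketch of how bias-carrying attributes are removed
# before a profile reaches the matching pipeline. Field names are illustrative.

SENSITIVE_FIELDS = {"name", "photo", "gender", "ethnicity", "age", "nationality"}

def anonymise_profile(profile: dict) -> dict:
    """Return a copy of the profile with sensitive attributes removed,
    keeping only what is relevant to the match (skills, experience, etc.)."""
    return {key: value for key, value in profile.items() if key not in SENSITIVE_FIELDS}

profile = {
    "name": "Jane Doe",
    "gender": "female",
    "skills": ["python", "react"],
    "experience_years": 7,
}
print(anonymise_profile(profile))
# {'skills': ['python', 'react'], 'experience_years': 7}
```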

Removing the black-box

The vast majority of organisations do not currently have the resources to build and train their own models. It’s a huge investment in hardware, it’s time consuming, and it requires recruiting highly educated specialists (AI researchers).


A black-boxed AI is an artificial intelligence system whose decision-making process is not transparent. In other words, the system’s output is not explainable, and it is difficult to determine how the system arrived at a particular decision or recommendation. The reality is that most AI systems users have been exposed to lately (ChatGPT, Google Bard) are closed, proprietary systems and can therefore be considered black-boxed.


Few of us, if presented with a decision that affects us, would accept a “don’t worry about it” explanation.

Working around the box

QueensLab is part of that majority: we need to use pre-trained models to achieve our matching goals, but we also need to get around the black box.


Enter embeddings


We use a pre-trained language model to create numerical representations (embeddings) of the candidates and the assignments, mapping them into a shared vector space; picture a cloud of points in 3D. We then use simple mathematical distance measures between the assignment and the candidates to find the closest matches. With a dynamic threshold, depending on the type of assignment, we can find one or more candidates that are close to perfect for it. Finally, we use another language model to explain (not decide) why it’s a good match. We use closed language models for preparation and for the human-readable explanation, but the decision process itself is transparent and traceable.
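For the curious, here’s a simplified sketch of what that matching core looks like. The library (sentence-transformers), the model name and the threshold are illustrative choices for the sketch, not necessarily what we run in production:

```python
# A simplified, illustrative sketch of embedding-based matching.
# Model, threshold and library are example choices, not our production setup.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any pre-trained embedding model

def embed(texts: list[str]) -> np.ndarray:
    # L2-normalised vectors, so a plain dot product equals cosine similarity.
    return model.encode(texts, normalize_embeddings=True)

def match(assignment: str, candidates: dict[str, str], threshold: float = 0.45):
    """Rank anonymised candidate profiles by closeness to the assignment text
    and keep those above a (here static, in reality dynamic) threshold."""
    assignment_vec = embed([assignment])[0]
    names = list(candidates)
    profile_vecs = embed([candidates[name] for name in names])
    scores = profile_vecs @ assignment_vec  # cosine similarity per candidate
    ranked = sorted(zip(names, scores), key=lambda pair: -pair[1])
    return [(name, float(score)) for name, score in ranked if score >= threshold]
```

Every number in that pipeline can be logged and audited: the vectors, the distances and the threshold. The language model that writes the human-readable explanation only comes in after the decision is made.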


With this, we believe we are compliant with the AI Act.


The result

Let’s be honest with ourselves - we all have bias, even if it’s subconscious.


We believe we’ve created a fair, unbiased system that can scale at the same velocity as our organisation.


Furthermore, the implementation of this system has not only improved the efficiency and productivity of the sales team but has also greatly lowered our cost per conversion. By automating the more repetitive and time-consuming tasks, the sales team can now operate much more efficiently and cover more prospects and leads than before.


In terms of accuracy, the system has also proven very effective, with a current accuracy rate of approximately 97%. This number is continually improving with every iteration, as the underlying models learn from their results and get better at predicting outcomes.

Exploring the unknown

So what’s next?


Since we’re storing all assignments as embeddings, we’re already expanding the system into our recruitment processes, to give us an unbiased and accurate recommendation of how well a candidate matches what we actually do.


Not only what we think we do.
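One hypothetical way to express “what we actually do” with the embeddings we already store: average the embeddings of delivered assignments into a single company profile vector, and score an applicant against it. A sketch, assuming the same L2-normalised embeddings as in the matching sketch above:

```python
# A hypothetical sketch: summarise delivered assignments as one profile vector
# and measure how well an applicant aligns with it. Assumes all embeddings
# are L2-normalised, as in the matching sketch above.
import numpy as np

def company_profile(assignment_vecs: np.ndarray) -> np.ndarray:
    """Mean of the delivered-assignment embeddings, re-normalised."""
    centroid = assignment_vecs.mean(axis=0)
    return centroid / np.linalg.norm(centroid)

def candidate_fit(candidate_vec: np.ndarray, assignment_vecs: np.ndarray) -> float:
    """Cosine similarity between an applicant and our delivered work."""
    return float(candidate_vec @ company_profile(assignment_vecs))
```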
