Every day we can read blogs and articles about use cases of Artificial Intelligence, or more precisely machine learning, in Financial Services, yet very few go into detail about how exactly it will be used in concrete real-life examples. This gives the false impression that implementing a smart solution is a matter of days. So let us look at one of the most frequently cited use cases: fraud detection in payment or card transactions.

Fraud detection is the most cited example and probably the most mature area of all. One myth about fraud detection is that you can plug a new machine learning solution into your process and it will start running by itself.

Well, the journey is a bit longer than that: first we need to clearly state the problem, then look at the data and explore it. Stating the problem seems obvious here, but what we want to achieve is critical. Do we want to detect all frauds, only frauds, or more than frauds? Can we afford to miss a few, or is that not acceptable? How many false positives are allowed, and at what cost? The answers will determine how you measure the model at the end of the training phase.
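To make this concrete, here is a minimal sketch of how such business constraints can be turned into a single cost figure for comparing models. The cost values are purely hypothetical assumptions, and scikit-learn is used only as one common option:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical costs per error type; in reality these come from the business.
COST_MISSED_FRAUD = 500.0  # assumed average loss of an undetected fraud
COST_FALSE_ALARM = 5.0     # assumed cost of manually reviewing a false alert

def business_cost(y_true, y_pred):
    """Total cost of the model's errors under the assumed cost figures."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fn * COST_MISSED_FRAUD + fp * COST_FALSE_ALARM

# Toy example: 2 missed frauds and 10 false alarms -> 2*500 + 10*5 = 1050
y_true = np.array([1, 1, 1] + [0] * 11)
y_pred = np.array([1, 0, 0] + [1] * 10 + [0])
print(business_cost(y_true, y_pred))  # 1050.0
```

A model that minimizes this cost may look worse on plain accuracy, which is exactly why the metric has to be agreed on up front.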

Obviously the data must be labeled: we need to know which cases are positive, i.e. fraudulent transactions, and which are not. This is less trivial than it sounds, as the data might live in different applications and have been processed manually so far, or it may be hidden in some proprietary solution.
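In practice, labeling often boils down to joining the raw transactions with a separately maintained log of confirmed fraud cases. A minimal sketch in pandas, where the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical sources: one row per card transaction, plus a manually
# maintained log of investigated, confirmed fraud cases.
transactions = pd.read_csv("transactions.csv")
fraud_cases = pd.read_csv("confirmed_fraud_cases.csv")

# Label a transaction as positive (1) if it appears in the fraud log.
transactions["is_fraud"] = (
    transactions["transaction_id"]
    .isin(fraud_cases["transaction_id"])
    .astype(int)
)

print(transactions["is_fraud"].value_counts())
```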

Once the data has been gathered, cleaned and labeled, the chosen algorithm can be applied to train the model and reach a good accuracy. But wait, accuracy may not be the right metric: if, for example, 99% of the transactions are negative and your algorithm never detects a single positive case, it will still show an accuracy of 99%. This is why you should also define which metric will be used to measure the result of the trained model; accuracy is rarely the best choice when you have what is called an imbalanced data set.
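A quick sketch makes the point: a "model" that never flags anything reaches roughly 99% accuracy on a data set with 1% fraud while catching zero fraud. The synthetic data below is only for illustration:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)  # ~1% fraudulent transactions
y_pred = np.zeros_like(y_true)                    # always predicts "not fraud"

print(f"accuracy:  {accuracy_score(y_true, y_pred):.3f}")  # ~0.99
print(f"recall:    {recall_score(y_true, y_pred):.3f}")    # 0.000, no fraud caught
print(f"precision: {precision_score(y_true, y_pred, zero_division=0):.3f}")
```

Metrics such as recall, precision, or the F1 score immediately expose the problem, which is why they are usually preferred for imbalanced data sets.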

Let us assume you now have a trained model with a good result and it is ready to go into production. This is not the end of the story. First, the model might give different results in production, for example because the data used for training and testing came from a different environment, in which case retraining the model is required. Constant monitoring of the performance in production is also mandatory, as the model can drift and give inaccurate results after some time. So preventive measures need to be put in place to alert when a model needs to be retrained and fine-tuned again.
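One way to implement such an alert, sketched below, is to compare the distribution of a key feature in recent production traffic against the training data, for example with a two-sample Kolmogorov-Smirnov test. The feature and the threshold are assumptions; real setups typically monitor many features and the model's outputs as well:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values, production_values, p_threshold=0.01):
    """Alert when the two samples are unlikely to share one distribution."""
    _, p_value = ks_2samp(train_values, production_values)
    return p_value < p_threshold

# Toy example: transaction amounts shift upward after deployment.
rng = np.random.default_rng(1)
train_amounts = rng.lognormal(mean=3.0, sigma=1.0, size=5_000)
prod_amounts = rng.lognormal(mean=3.5, sigma=1.0, size=5_000)

if drift_alert(train_amounts, prod_amounts):
    print("Feature drift detected, consider retraining the model.")
```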

As we can see, there is little magic in the whole process, and it takes quite a lot of work and rework, but I still believe it is worth it if the expectations and the project lifecycle are well managed.
