Our 4 key takeaways: Machine Learning & AI
AI is not new: it has been around for more than 60 years
Artificial Intelligence (AI) is a field that has been around for decades. Its foundations were laid in the 1950s by pioneers such as Alan Turing, the English mathematician and pioneer of theoretical computer science, and the term itself was only coined in the mid-1950s. The origins of AI lie in early attempts to build machines that can think and reason like humans, but AI today cannot reason the way humans do. It detects correlations and reveals only limited causal relationships, especially in an ever-changing context. Nor does it make autonomous decisions: that is the job of autonomous systems, and AI is just the component within them that produces a number. For example: fraud equals 0.75. Based on this number, programmers set up a rule that, say, anything higher than 0.7 is considered fraud and anything below it is not. This "threshold" is never set by the AI itself.
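To make that division of labour concrete, here is a minimal sketch in Python. The `fraud_score` function is an invented stand-in for a trained model (the real thing would be learned from historical data); the point is that the AI only produces a number, while the threshold that turns it into a decision is chosen by people.

```python
# Illustrative sketch: the AI produces a score, humans set the decision rule.
def fraud_score(transaction: dict) -> float:
    """Stand-in for a trained model that returns a probability-like score."""
    # A real model would be trained on historical transactions; here we
    # simply return a fixed value for the sake of the example.
    return 0.75

FRAUD_THRESHOLD = 0.7  # set by programmers / business rules, not by the AI

def is_fraud(transaction: dict) -> bool:
    return fraud_score(transaction) > FRAUD_THRESHOLD

print(is_fraud({"amount": 1200, "country": "BE"}))  # True, because 0.75 > 0.7
```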
Although AI has been around for a long time, it has made significant progress in recent years thanks to the increase in computing power, the availability of large amounts of data and the improvement of algorithms. This has led to breakthroughs in areas such as image recognition, natural language processing, autonomous vehicles and voice assistants.
Indeed, one pioneering form of AI is generative AI. Briefly put, this is the type of artificial intelligence that can create something new on its own using machine learning. Suppose, for example, you want an image of a flower. The algorithm is trained on a dataset of flower images and uses it to create new, unique images of flowers. In the background, it learns which properties matter for a realistic flower image, such as the shape of the flower, its size and the position of the stem. The result is new images of flowers that look realistic but are not copies of already existing flower images.
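Purely as an illustration, this is roughly what generating such a flower image looks like in code today. The Hugging Face `diffusers` library and the pretrained model name are our own assumptions, not something from the example above; any text-to-image model would do.

```python
# Minimal sketch of generative AI: creating a new flower image from a prompt
# with a pretrained diffusion model (model checkpoint is an assumed example).
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # assumed publicly available checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is assumed here

# The model has learned from its training images which properties make a
# flower look realistic (shape, size, position of the stem, ...).
image = pipe("a photorealistic close-up of a single red flower").images[0]
image.save("new_flower.png")  # a new image, not a copy of any training image
```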
AI does not stand still and, in fact, some say it is evolving too fast...
AI can be dangerous...
That there are dangers to artificial intelligence is something we cannot deny. But a Hollywood scenario in which robots take over? It will not get that far. The real concerns are the social, financial and legal consequences, which are often so complex and far-reaching that there is still too little regulation. Moreover, we need to make sure that the data we collect and analyze is fair and reliable, and that we fully respect the privacy of businesses and people. We must also understand how the algorithms we use work, so that we can make sure our decisions are not influenced by unintended biases or discriminatory factors; that is where the greatest danger lies.
Take the example of the CEO of a UK-based energy company who thought the director of the parent company was calling him to request a payment of some 300,000 euros. It was a deepfake voice, driven by AI. Little is known about the legal and judicial consequences of this case, and no decision has been made on the insurance side either (yet).
But also very useful
AI can take over a lot of human tasks; you could even call it a form of artificial intelligence that allows us to be more human. One example is Unilever's selection process. The company processes almost 2 million job applications a year and cannot afford to overlook talent just because it sits at the bottom of a pile of resumes. To address this problem, Unilever partnered with Pymetrics, a specialist in AI recruitment, to create an online platform that allows candidates to be initially assessed from the comfort of their own home, in front of a computer screen or a cell phone. We cannot elaborate on the whole story here, but you can read about it at this link.
"All our applicants get a few pages of feedback; how they did in the game, how they did in the video interviews, what traits they have that fit, and if they don't fit, the reason why they didn't fit, and what we think they need to do to be successful in a future job application."
So while Unilever is not quite ready to hand over the entire hiring process to machines, it has shown that AI can help with the initial "sift" when it comes to pre-screening applicants (a simplified sketch of what such a pre-screening step could look like follows after the statistics below).
Some statistics here:
- The time to hire was reduced from 4 months to 4 weeks
- The job offer acceptance rate increased from 64% to 82%
- Team diversity was never greater: the AI-driven selection was in a sense more human than a human selection, because there is no bias based on appearance, ethnicity or prior education, for example
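As a thought experiment only, and not Unilever's or Pymetrics' actual system, a pre-screening step can be pictured as a simple scoring model that ranks applicants and passes a shortlist on to human recruiters. Every detail below, from the features to the threshold, is invented for illustration.

```python
# Hypothetical, heavily simplified sketch of AI-assisted pre-screening.
from sklearn.linear_model import LogisticRegression

# Historical candidates: [game_score, video_interview_score], label = hired (1) or not (0)
X_train = [[0.9, 0.8], [0.4, 0.3], [0.7, 0.9], [0.2, 0.5], [0.8, 0.6], [0.3, 0.2]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# New applicants are scored, and only the strongest profiles move on to the
# human recruiters for the later interview rounds.
applicants = {"A": [0.85, 0.7], "B": [0.35, 0.4], "C": [0.6, 0.9]}
scores = {name: model.predict_proba([feats])[0][1] for name, feats in applicants.items()}

shortlist = [name for name, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0.5]
print(shortlist)  # e.g. ['A', 'C'] move on; every applicant still receives feedback
```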
Impact on business
The question is not whether AI will impact your organization, but when and in what respect. Every company will need to prepare: for risk management, privacy protection, the impact on employment, ethical issues and much more that has to be anticipated and foreseen.
The European Union is already working on a proposed legal framework for AI.
PS: this article was not written with ChatGPT ;)